Point out the wrong statement.
(a) A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner
(b) The MapReduce framework operates exclusively on <key, value> pairs
(c) Applications typically implement the Mapper and Reducer interfaces to provide the map and reduce methods
(d) None of the mentioned
I was asked this question at a job interview.
It is from the Introduction to MapReduce topic in the MapReduce section of Hadoop.
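To see why statements (a)–(c) are all accurate, it helps to look at the pattern they describe. Below is a minimal plain-Java sketch of the classic word-count job: a map step that turns each input chunk into <key, value> pairs, and a reduce step that aggregates the values per key. This is an illustration of the concept only, not Hadoop's actual `Mapper`/`Reducer` API; the class and method names are made up for the example, and the "chunks" are mapped sequentially here even though a real framework would process them in parallel.

```java
import java.util.*;
import java.util.stream.*;

public class WordCountSketch {
    // Map step: emit a <word, 1> pair for every word in one input chunk.
    static List<Map.Entry<String, Integer>> map(String chunk) {
        return Arrays.stream(chunk.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Reduce step: sum the values collected under each key.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // Each chunk is independent, so a real framework could map them in parallel.
        List<String> chunks = List.of("hello world", "hello mapreduce");
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String chunk : chunks) {
            intermediate.addAll(map(chunk));
        }
        System.out.println(reduce(intermediate)); // e.g. {hello=2, mapreduce=1, world=1}
    }
}
```

Note that both phases consume and produce <key, value> pairs, which is exactly the constraint statement (b) describes, and that the per-chunk map calls share no state, which is what makes the parallelism in statement (a) possible.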