Point out the wrong statement.
(a) Spark is intended to replace the Hadoop stack
(b) Spark was designed to read and write data from and to HDFS, as well as other storage systems
(c) Hadoop users who have already deployed or are planning to deploy Hadoop YARN can simply run Spark on YARN
(d) None of the mentioned
I got this question in a national-level competition.
My question comes from the Spark with Hadoop topic in the chapter on Apache Spark, Flume, Lucene, Hama, HCatalog, Mahout, Drill, Crunch and Thrift of Hadoop.
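For context on what options (b) and (c) describe in practice, here is a minimal sketch of a Spark job that reads from HDFS and is meant to be submitted to a YARN cluster. The application name, HDFS path, and submit command are placeholders for illustration, not values from the question.

```scala
import org.apache.spark.sql.SparkSession

object HdfsOnYarnExample {
  def main(args: Array[String]): Unit = {
    // Option (c): on a YARN deployment, the master is usually supplied
    // at submit time (spark-submit --master yarn) rather than hard-coded.
    val spark = SparkSession.builder()
      .appName("HdfsOnYarnExample")
      .getOrCreate()

    // Option (b): Spark reads directly from HDFS; the path is a placeholder.
    val lines = spark.read.textFile("hdfs:///user/example/input.txt")
    println(s"Line count: ${lines.count()}")

    spark.stop()
  }
}
```

A job like this would typically be launched with something along the lines of `spark-submit --master yarn --class HdfsOnYarnExample app.jar`, letting YARN handle resource allocation instead of a standalone Spark cluster.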