Having multiple perceptrons can actually solve the XOR problem satisfactorily: this is because each perceptron can partition off a linearly separable region of the input space, and their results can then be combined.
(a) True – this always works, and such multiple perceptrons can learn to classify even complex problems
(b) False – perceptrons are mathematically incapable of solving linearly inseparable functions, no matter what you do
(c) True – perceptrons can do this but are unable to learn to do it – they have to be explicitly hand-coded
(d) False – just having a single perceptron is enough
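The arrangement described in option (c) can be sketched directly: XOR is representable by wiring together hand-coded perceptrons with step activations, where one hidden unit computes OR, another computes NAND, and an output unit ANDs them. The weights and biases below are my own illustrative choices, not from the source.

```python
def perceptron(weights, bias, inputs):
    """Step-activation perceptron: fires (1) iff w . x + b > 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def xor(x1, x2):
    # Each hidden perceptron carves off one linearly separable region.
    h_or = perceptron([1, 1], -0.5, [x1, x2])      # x1 OR x2
    h_nand = perceptron([-1, -1], 1.5, [x1, x2])   # NOT (x1 AND x2)
    # The output perceptron combines the two regions: OR AND NAND = XOR.
    return perceptron([1, 1], -1.5, [h_or, h_nand])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Note that the weights here are fixed by hand: the classical perceptron learning rule only adjusts a single layer and cannot propagate error through the non-differentiable step units, which is the distinction the question turns on.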
This question was asked in a unit test.
The question is from the Neural Networks section in the Learning chapter of Artificial Intelligence.