If a(i) is the input, δ is the error, and η is the learning parameter, how can the weight change in a perceptron model be represented?
(a) η·a(i)
(b) η·δ
(c) δ·a(i)
(d) none of the mentioned
I was asked this question in a class test.
This question is from the Models topic in the Basics of Artificial Neural Networks division of Neural Networks.
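For context, the standard perceptron learning rule changes each weight by the product of all three quantities, i.e. Δw = η·δ·a(i), the learning rate times the error times the input. Below is a minimal Python sketch of one such update step; the function and variable names are my own illustration, not taken from the question.

```python
import numpy as np

def perceptron_update(weights, a, target, eta=0.1):
    """One perceptron learning step: delta_w = eta * error * a(i)."""
    # Step-function output of the perceptron for input vector a
    output = 1.0 if np.dot(weights, a) >= 0 else 0.0
    # Error term: desired output minus actual output
    error = target - output
    # Each weight changes by the learning rate * error * corresponding input
    return weights + eta * error * a

# Usage: a single update for a 2-input perceptron
w = np.zeros(2)
w = perceptron_update(w, np.array([1.0, 0.0]), target=1.0)
print(w)  # each weight moved by eta * error * a(i)
```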