Can divergence be prevented by normalizing the weights at every stage?
(a) yes
(b) no
This question was asked to me in a unit test.
It is from the Feedback Layer topic in the Competitive Learning Neural Networks section of Neural Networks.
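For context, here is a minimal sketch (in Python with NumPy; the function name, learning rate, and shapes are my own illustrative assumptions, not from the question) of what "normalizing the weights at every stage" means in a competitive-learning update: after the winning unit's weights are moved toward the input, the weight vector is rescaled to unit length so it cannot grow without bound.

```python
import numpy as np

def competitive_update(W, x, lr=0.1):
    """One competitive-learning step with weight normalization.

    W  : (n_units, n_inputs) weight matrix
    x  : (n_inputs,) input vector
    lr : learning rate (illustrative value)
    """
    # Winner = unit whose weight vector best matches the input
    winner = np.argmax(W @ x)
    # Move only the winner's weights toward the input
    W[winner] += lr * (x - W[winner])
    # Normalize the updated weight vector so its length stays bounded;
    # this is the "normalizing at every stage" the question refers to
    W[winner] /= np.linalg.norm(W[winner])
    return W

# Example usage with random data (shapes are illustrative)
rng = np.random.default_rng(0)
W = rng.random((3, 4))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # start with unit-length weights
x = rng.random(4)
W = competitive_update(W, x)
```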