Learning laws with exponential error convergence for recurrent neural networks
Date: 1993
ISBN: 0-7803-1298-8
Publisher: IEEE
Source: Proceedings of the 32nd IEEE Conference on Decision and Control, Part 3 (of 4)
Volume: 3
Pages: 2810-2811
Abstract
In this paper, we propose new learning laws for adjusting the weights of recurrent high order neural networks (RHONN) when they are applied to system identification problems. The main advantages of these learning laws over the classical robust adaptive ones are that the identification error converges to zero exponentially fast, and that this convergence is independent of the number of high order connections of the RHONN.
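To illustrate the setting, the following is a minimal sketch of RHONN-based identification of a scalar system. The plant, the choice of high-order terms, and the gain values are all hypothetical, and the weight update shown is a standard gradient learning law, not the specific laws proposed in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plant(x):
    # Hypothetical scalar plant to be identified (not from the paper).
    return -x + np.sin(x) + 0.5

def identify(T=20.0, dt=1e-3, a=1.0, gamma=50.0):
    """RHONN identifier for a scalar state:
        xhat' = -a*xhat + w . z(x)
    where z(x) collects high-order sigmoid terms of the plant state,
    with the standard gradient learning law  w' = -gamma * e * z
    and identification error e = xhat - x."""
    x, xhat = 0.1, 0.0
    w = np.zeros(3)
    errs = []
    for _ in range(int(T / dt)):
        s = sigmoid(x)
        z = np.array([s, s**2, 1.0])  # first- and second-order terms + bias
        e = xhat - x
        # Forward-Euler integration of plant, identifier, and weights.
        x += dt * plant(x)
        xhat += dt * (-a * xhat + w @ z)
        w += dt * (-gamma * e * z)
        errs.append(abs(e))
    return errs

errs = identify()
print(f"|e| initial: {errs[0]:.4f}, final: {errs[-1]:.2e}")
```

Under this structure the identification error shrinks as the weights adapt; the paper's contribution is a learning law for which this decay is exponential and its rate does not depend on how many high-order connections z(x) contains.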