Learning and Convergence Analysis of Neural-Type Structured Networks
Date: 1992
ISSN: 1045-9227
Source: IEEE Transactions on Neural Networks
Volume: 3
Issue: 1
Pages: 39-50

Abstract
A special class of feedforward neural networks, referred to as structured networks, has recently been introduced as a method for solving matrix algebra problems in an inherently parallel formulation. In this paper we present a convergence analysis for the training of structured networks. Since the learning techniques used in structured networks are also employed in the training of neural networks, the issue of convergence is discussed not only from a numerical algebra perspective but also as a means of deriving insight into connectionist learning. In our analysis, we develop bounds on the learning rate, under which we prove exponential convergence of the weights to their correct values for a class of matrix algebra problems that includes linear equation solving, matrix inversion, and Lyapunov equation solving. For a special class of problems we introduce the orthogonalized back propagation algorithm, an optimal recursive update law for minimizing a least-squares cost functional, which guarantees exact convergence in one epoch. Several learning issues, such as normalizing techniques, persistency of excitation, input scaling, and nonunique solution sets, are investigated. © 1992 IEEE
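To make the abstract's two central claims concrete, the following is a minimal NumPy sketch, not the paper's implementation. Part (1) trains a linear network solving Ax = b by gradient descent on a least-squares cost, using the classical stability bound eta < 2/lambda_max(A'A) for this quadratic cost, under which the weight error decays exponentially. Part (2) applies an orthogonalized recursive update, offered here only in the spirit of the orthogonalized back propagation algorithm the abstract describes, which reaches the exact solution in a single pass over the rows. All variable names and the specific update law in part (2) are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))          # full-rank problem matrix (assumption)
x_true = rng.standard_normal(n)
b = A @ x_true                           # consistent linear system A x = b

# --- (1) Gradient descent on J(x) = 0.5 * ||A x - b||^2 ---
# The classical convergence condition for this quadratic cost is
# eta < 2 / lambda_max(A'A); we pick eta = 1 / lambda_max, safely inside it.
eta = 1.0 / np.linalg.eigvalsh(A.T @ A).max()
x = np.zeros(n)
for epoch in range(2000):
    x -= eta * A.T @ (A @ x - b)         # gradient step on the residual
print("GD error:", np.linalg.norm(x - x_true))   # decays exponentially in epochs

# --- (2) Orthogonalized recursive update: one pass, exact solution ---
# Each correction is projected onto the subspace not yet excited by earlier
# rows, so fitting row k never disturbs rows already fitted exactly.
x = np.zeros(n)
P = np.eye(n)                            # projector onto the unexplored subspace
for a_k, b_k in zip(A, b):
    Pa = P @ a_k
    denom = a_k @ Pa
    if denom > 1e-12:                    # skip rows spanned by earlier rows
        x += Pa * (b_k - a_k @ x) / denom    # zero the residual on row k exactly
        P -= np.outer(Pa, Pa) / denom        # shrink the unexplored subspace
print("One-pass error:", np.linalg.norm(x - x_true))  # ~0 after a single epoch

For a consistent full-rank system, part (2) satisfies every previously processed equation at each step, so the weights converge exactly after one sweep through the data; this mirrors the one-epoch guarantee claimed for the orthogonalized back propagation algorithm, though the paper's actual update law may differ in form.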