dc.contributor.author         Kosmatopoulos, E. B.  en
dc.contributor.author         Christodoulou, Manolis A.  en
dc.contributor.author         Ioannou, Petros A.  en
dc.creator                    Kosmatopoulos, E. B.  en
dc.creator                    Christodoulou, Manolis A.  en
dc.creator                    Ioannou, Petros A.  en
dc.date.accessioned           2019-12-02T10:36:22Z
dc.date.available             2019-12-02T10:36:22Z
dc.date.issued                1997
dc.identifier.uri             http://gnosis.library.ucy.ac.cy/handle/7/57134
dc.description.abstract       Classical adaptive and robust adaptive schemes are unable to ensure convergence of the identification error to zero in the presence of modeling errors. Therefore, applying such schemes to 'black-box' identification of nonlinear systems guarantees, at best, a bounded identification error. In this paper, new learning (adaptive) laws are proposed which, when applied to recurrent high-order neural networks (RHONN), ensure that the identification error converges to zero exponentially fast; moreover, if the identification error is initially zero, it remains zero throughout the identification process. The parameter convergence properties of the proposed scheme, that is, its capability of converging to the optimal neural network model, are also examined and shown to be similar to those of classical adaptive and parameter estimation schemes. Finally, it is noted that the proposed learning laws are not locally implementable, as they make use of global knowledge of signals and parameters.  en
dc.source                     Neural Networks  en
dc.source.uri                 https://www.scopus.com/inward/record.uri?eid=2-s2.0-0031106016&doi=10.1016%2fS0893-6080%2896%2900060-3&partnerID=40&md5=c942914cde228110ba64a3163229fa7a
dc.subject                    Mathematical models  en
dc.subject                    Errors  en
dc.subject                    learning  en
dc.subject                    article  en
dc.subject                    Neural networks  en
dc.subject                    algorithm  en
dc.subject                    priority journal  en
dc.subject                    Identification (control systems)  en
dc.subject                    artificial neural network  en
dc.subject                    Learning systems  en
dc.subject                    Adaptive algorithms  en
dc.subject                    dynamical system identification  en
dc.subject                    Exponential identification error convergence  en
dc.subject                    recurrent high order neural networks  en
dc.subject                    robust adaptive algorithms  en
dc.title                      Dynamical neural networks that ensure exponential identification error convergence  en
dc.type                       info:eu-repo/semantics/article
dc.identifier.doi             10.1016/S0893-6080(96)00060-3
dc.description.volume         10
dc.description.issue          2
dc.description.startingpage   299
dc.description.endingpage     314
dc.author.faculty             Σχολή Θετικών και Εφαρμοσμένων Επιστημών / Faculty of Pure and Applied Sciences
dc.author.department          Τμήμα Μαθηματικών και Στατιστικής / Department of Mathematics and Statistics
dc.type.uhtype                Article  en
dc.description.notes          Cited By: 68  en
dc.source.abbreviation        Neural Netw.  en
dc.contributor.orcid          Ioannou, Petros A. [0000-0001-6981-0704]
dc.gnosis.orcid               0000-0001-6981-0704
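As a complement to the abstract, below is a minimal numerical sketch of the RHONN identification structure it describes, assuming the standard scalar model xhat_dot = -a*xhat + w @ z(x) with a high-order sigmoid regressor z(x). The gradient-type weight update shown is a generic baseline adaptive law for illustration only; the paper's proposed laws differ, since they additionally achieve exponential error convergence at the cost of requiring global knowledge of signals and parameters. All function names, gains, and the demo plant are hypothetical.

```python
# Hypothetical RHONN identifier sketch; model structure, gains, and the
# demo plant are illustrative assumptions, not taken from the paper.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rhonn_identify(x_traj, dt, a=1.0, gamma=5.0):
    """Identify a scalar plant from a sampled state trajectory x_traj.

    Identifier:  xhat_dot = -a*xhat + w @ z(x)
    Regressor:   z(x) = [s(x), s(x)**2, 1]   (high-order sigmoid terms)
    Update:      w_dot = gamma * e * z(x)    (baseline gradient law)
    """
    w = np.zeros(3)                   # adjustable synaptic weights
    xhat = x_traj[0]                  # identification error starts at zero
    errors = []
    for x in x_traj:
        s = sigmoid(x)
        z = np.array([s, s**2, 1.0])  # high-order regressor vector
        e = x - xhat                  # identification error
        xhat += dt * (-a * xhat + w @ z)  # forward-Euler identifier step
        w += dt * gamma * e * z           # adaptive (learning) law
        errors.append(e)
    return w, np.array(errors)

# Demo: the plant x_dot = -x + 2*sigmoid(x) lies in the model class
# (a = 1, optimal weights w* = [2, 0, 0]).
dt, T = 1e-3, 20.0
t = np.arange(0.0, T, dt)
x = np.empty_like(t)
x[0] = 0.5
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (-x[k] + 2.0 * sigmoid(x[k]))
w, e = rhonn_identify(x, dt)
print("learned weights:", w, "| final |error|:", abs(e[-1]))
```

Because the simulated trajectory settles to an equilibrium, the regressor is not persistently exciting, so the weights need not converge to the optimal values even though the identification error does; this mirrors the parameter-convergence caveat discussed in the abstract.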

