Abstract

The science of neural networks, and even more so its computing applications, has undergone developments beyond any prediction, from the McCulloch-Pitts artificial neuron (1943), through Hopfield's neurons (1982, 1984), to Kasabov's evolving connectionist systems (2003) and spiking-neuron NeuCube (2014). Still, the computational functionality of every kind of neural network presupposes that a guaranteed operating steady-state equilibrium is reached quickly first. At the other end of this spectrum, the science of neurophysiology has yielded insights converging toward the systems-biology approach of Guyton and Hall (2006). At the crossroads of these findings with Kolmogorov's superposition representation and Hilbert's Thirteenth problem, certain rather delicate and subtle issues have emerged, Sprecher (2017). This paper offers one perception of these issues and suggests a revised view of the foundations of past developments, possibly by rethinking our own stability results for recurrent neural networks with time-varying delays.
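For orientation, the Kolmogorov superposition representation referred to above can be stated in its standard form (a sketch in conventional notation, not reproduced from the paper itself): every continuous function of $n$ variables on the unit cube admits an exact decomposition into superpositions of continuous single-variable functions,

```latex
% Kolmogorov superposition theorem (standard formulation):
% for any continuous f on the n-dimensional unit cube there exist
% continuous outer functions \Phi_q and inner functions \varphi_{q,p}
% of one variable such that
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right),
\qquad (x_1,\dots,x_n) \in [0,1]^n .
```

This resolves Hilbert's Thirteenth problem in the negative for continuous functions; the delicate issues raised by Sprecher (2017) concern the constructive and computational properties of the inner functions $\varphi_{q,p}$, which is where the connection to neural-network architectures arises.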

Source

2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC)