"... We establish two conditions which ensure the non-divergence of additive recurrent networks with unsaturating piecewise linear transfer functions, also called linear threshold or semilinear transfer functions. As was recently shown by Hahnloser et al. (2000), networks of this type can be efficient ..."

We establish two conditions which ensure the non-divergence of additive recurrent networks with unsaturating piecewise linear transfer functions, also called linear threshold or semilinear transfer functions. As was recently shown by Hahnloser et al. (2000), networks of this type can be efficiently built in silicon and exhibit the coexistence of digital selection and analogue amplification in a single circuit. To obtain this behaviour, the network must be multistable and non-divergent, and our conditions make it possible to determine the regimes where this can be achieved with maximal recurrent amplification. The first condition applies to nonsymmetric networks and has a simple interpretation: the strength of local inhibition must match the sum of the excitatory weights converging onto a neuron. The second condition is restricted to symmetric networks, but can also take into account the stabilizing effect of non-local inhibitory interactions. We demonstrate the application of the conditions on a simple example and on the orientation-selectivity model of Ben-Yishai et al. (1995), and show that they can be used to identify the regions of maximal orientation-selective amplification and symmetry breaking in that model.
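As a concrete reading of the first condition, the following sketch checks whether local inhibition matches the summed excitatory input weights. The names are hypothetical and not taken from the paper: W is the weight matrix with W[i, j] the weight from neuron j to neuron i, and beta[i] the local inhibition strength at neuron i.

```python
import numpy as np

def first_condition_holds(W, beta):
    """Illustrative check of the first (nonsymmetric) condition:
    at every neuron i, the local inhibition strength beta[i] must
    be at least the sum of excitatory weights converging onto i.
    W[i, j] is the weight from neuron j to neuron i (hypothetical
    convention, not taken from the paper)."""
    excitatory_in = np.clip(W, 0.0, None).sum(axis=1)  # positive weights per row
    return bool(np.all(beta >= excitatory_in))
```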

.... The standard approach to obtaining non-diverging dynamics (5) in the general case of nonsymmetric weights is to choose the combined gains of the transfer function and weights sufficiently small (see Steil 1999 for a review). A simple example is the condition given by Hirsch (1989), which is based on the property that all eigenvalues of the symmetric parts of the Jacobians of the vector field in (5) must b...
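The Hirsch-type eigenvalue test just mentioned is straightforward to check numerically; a minimal sketch, assuming the Jacobian J of the vector field in (5) is available as a NumPy array:

```python
import numpy as np

def hirsch_test(J):
    """All eigenvalues of the symmetric part of the Jacobian must be
    negative; eigvalsh applies because (J + J^T)/2 is symmetric."""
    sym = 0.5 * (J + J.T)
    return bool(np.max(np.linalg.eigvalsh(sym)) < 0.0)
```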

"... This letter provides a brief explanation of echo state networks and provides a rigorous bound for guaranteeing asymptotic stability of these networks. The stability bounds presented here could aid in the design of echo state networks that would be applicable to control applications where stability i ..."

This letter provides a brief explanation of echo state networks and a rigorous bound guaranteeing their asymptotic stability. The stability bounds presented here could aid in the design of echo state networks for control applications, where stability is required.

... Elman networks, liquid state machines, and echo state networks, the last of which is the focus of this paper. For a review of these RNNs and the problems associated with training them, see [1], [2], or [3]. The recent development of echo state networks [4] (ESNs) provides a class of RNNs that alleviates the problem of training, but the design methodology of ESNs is still not fully understood...
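A commonly used sufficient condition in this vein (a sketch, not the letter's own bound) constrains the largest singular value of the reservoir matrix: with a 1-Lipschitz nonlinearity such as tanh, sigma_max(W) < 1 guarantees asymptotic stability, while spectral radius below 1 is the weaker echo-state heuristic.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
W = rng.standard_normal((N, N)) / np.sqrt(N)  # hypothetical reservoir

# Sufficient stability bound for a tanh reservoir: sigma_max(W) < 1.
sigma_max = np.linalg.svd(W, compute_uv=False)[0]
if sigma_max >= 1.0:
    W *= 0.95 / sigma_max  # rescale so the sufficient bound holds
```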

"... We analyse the stability of the input-output behaviour of a recurrent network. It is trained to implement an operator implicitly given by the chaotic dynamics of the Roessler attractor. Two of the attractors coordinate functions are used as network input and the third defines the reference output. U ..."

We analyse the stability of the input-output behaviour of a recurrent network. It is trained to implement an operator implicitly given by the chaotic dynamics of the Roessler attractor. Two of the attractor's coordinate functions are used as network input and the third defines the reference output. Using recently developed methods, we show that the trained network is input-output stable and compute its input-output gain. Further, we define a stable region in weight space in which the weights can vary freely without affecting input-output stability. We show that this region is large enough to allow stability-preserving on-line adaptation, which enables the network to cope with parameter drift in the reference attractor dynamics.

1 Introduction

In recent years there has been increasing interest in using neural networks in the fields of control and engineering. As far as feedforward networks are concerned, which can be incrementally adapted to implement static input-output maps, this int...
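To make the task concrete, here is a sketch of the data generation, assuming the standard Roessler parameters a = b = 0.2, c = 5.7 (the paper's exact settings are not given here):

```python
import numpy as np
from scipy.integrate import solve_ivp

def roessler(t, s, a=0.2, b=0.2, c=5.7):
    # Standard Roessler system; two coordinates serve as network
    # input, the third as the reference output, as described above.
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(roessler, (0.0, 500.0), [1.0, 1.0, 1.0],
                t_eval=np.arange(0.0, 500.0, 0.05))
u = sol.y[:2].T  # two coordinate functions as network input
d = sol.y[2]     # third coordinate as reference output
```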

...on of neural networks is a method of choice. For this example we provide a thorough input-output stability analysis using recently developed methods originating in non-linear feedback system theory [10, 12]. These methods allow us to give stability bounds both for the fixed, adapted network and for the time-varying network subject to on-line learning. In Section 2 we describe the learning task, the networ...

"... Abstract. We provide a stability analysis based on nonlinear feedback theory for the recently introduced backpropagation-decorrelation (BPDC) recurrent learning algorithm. For one output neuron BPDC adapts only the output weights of a possibly large network and therefore can learn in O(N). We derive ..."

Abstract. We provide a stability analysis based on nonlinear feedback theory for the recently introduced backpropagation-decorrelation (BPDC) recurrent learning algorithm. For one output neuron, BPDC adapts only the output weights of a possibly large network and can therefore learn in O(N). We derive a simple sufficient stability inequality which can easily be evaluated and monitored online to ensure that the recurrent network remains stable while adapting. As a byproduct, we show that BPDC is highly competitive on the recently introduced CATS benchmark data [1].
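The O(N) property comes from adapting only the N weights into the single output neuron. A schematic update in that spirit (a normalized-LMS-style sketch for illustration only, not the published BPDC rule):

```python
import numpy as np

def output_weight_step(w_out, x, err, eta=0.01, eps=1e-4):
    """One O(N) adaptation step: only the weights into the output
    neuron change, normalised by the squared state norm. `err` is
    the output error at this time step; eta and eps are illustrative
    learning-rate and regularisation constants."""
    return w_out + eta * err * x / (x @ x + eps)
```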

...errors e_i(k+1) and the errors at the last time step e_s(k), weighted by a typical backpropagation term involving ϕ′.

3 The operator framework

Using the standard notation for nonlinear feedback systems [8, 9], the network (1) is composed of a linear feedforward operator and a nonlinear feedback operator Φ:

  ẋ = −x + e,  e = W_u u + W ϕ(y),  y = x.

The Laplace transformation of the linear part yields the forward operat...
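For reference, the Laplace-transform step reads as follows (a short derivation sketch from the leaky integration above, assuming zero initial state):

```latex
% Linear forward path of the loop \dot{x} = -x + e, y = x.
% With x(0) = 0, the Laplace transform gives
\[
  sX(s) = -X(s) + E(s)
  \quad\Longrightarrow\quad
  Y(s) = X(s) = \frac{1}{s+1}\, E(s),
\]
% so the forward operator acts componentwise as H(s) = 1/(s+1),
% a stable first-order low-pass with L2 gain \sup_\omega |H(j\omega)| = 1.
```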

"... . We present conditions for absolute stability of recurrent neural networks with time-varying weights based on the Popov theorem from non-linear feedback system theory. We show how to maximise the stability bounds by deriving a convex optimisation problem subject to linear matrix inequality constrai ..."

We present conditions for absolute stability of recurrent neural networks with time-varying weights, based on the Popov theorem from non-linear feedback system theory. We show how to maximise the stability bounds by deriving a convex optimisation problem subject to linear matrix inequality constraints, which can be solved efficiently by interior point methods with standard software.

1 Introduction

One of the most exciting properties of recurrent neural networks (RNNs) is their ability to model the time-behaviour of arbitrary dynamical systems [6]. With a number of schemes available which incrementally adapt a network using time-dependent error signals [13], recurrent networks can solve identification and adaptive control tasks in larger systems [8,14]. In such applications the proper functioning of the control system crucially depends on the dynamical behaviour of the network. Thus one of the most investigated issues in RNN theory is stability, especially the existence and uni...
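As an illustration of the LMI route, here is a minimal feasibility sketch with cvxpy, using a hypothetical system matrix A; the paper's actual inequalities involve the Popov multipliers and sector bounds, which are omitted here:

```python
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 0.5],
              [-0.3, -0.8]])  # hypothetical linearised dynamics
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov inequality
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()  # handled by an interior-point/conic solver
print("LMI feasible:", problem.status == cp.OPTIMAL)
```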

... time-varying k_ij(t), where we have q_ij = 0. Remarks: 1. If W is normal (W^T W = W W^T) and A = I, we showed previously that it is possible to derive efficient graphical tests involving the eigenvalues of W only [15]. 2. The condition (2) is a special case of (5) when W = 0. Then we get C = I and can choose P = DA, Q = A^{-1} for any positive diagonal D, which removes the frequency dependence in (5) and yields (5) = [D(W...
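The normality precondition in Remark 1 is easy to verify numerically (a trivial sketch):

```python
import numpy as np

def is_normal(W, tol=1e-10):
    """W is normal iff W^T W = W W^T (real case); only then do the
    eigenvalue-only graphical tests of [15] apply."""
    return np.allclose(W.T @ W, W @ W.T, atol=tol)
```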

"... We present local conditions for input-output stability of recurrent neural networks with time-varying parameters introduced for instance by noise or on-line adaptation. The conditions guarantee that a network implements a proper mapping from time-varying input to time-varying output functions using ..."

We present local conditions for input-output stability of recurrent neural networks with time-varying parameters, introduced for instance by noise or on-line adaptation. The conditions guarantee that a network implements a proper mapping from time-varying input to time-varying output functions, using a local equilibrium as the point of operation. We show how to calculate the necessary bounds on the allowed inputs that keep the network in the stable range, and apply the method to an example of learning an input-output map implied by the chaotic Roessler attractor.
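One coarse way to obtain such bounds (a hedged sketch, not the paper's construction) uses the logarithmic norm of the linearisation around the operating equilibrium: if mu, the largest eigenvalue of the symmetric part of A, is negative, the local L2 input-output gain is at most 1/(-mu), and allowed input amplitudes can be sized from it.

```python
import numpy as np

def local_gain_bound(W, slope):
    """Bound on the local L2 gain of the linearisation
    A = -I + W @ diag(slope), where slope[i] approximates the
    transfer-function derivative at the equilibrium (illustrative
    names). Uses the logarithmic norm: if mu = max eig((A+A^T)/2)
    is negative, the gain of the loop's linear part is <= 1/(-mu)."""
    A = -np.eye(len(slope)) + W @ np.diag(slope)
    mu = float(np.max(np.linalg.eigvalsh(0.5 * (A + A.T))))
    if mu >= 0.0:
        raise ValueError("log-norm test inconclusive (mu >= 0)")
    return 1.0 / (-mu)
```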

...ty with generalized Lyapunov functions [6, 7]. Other approaches, which originate in control theory, consider the class of inputs of finite L2 norm and derive corresponding input-output stability results [8]. Under the condition of incrementally sector-bounded, non-linear transfer functions (footnote: here we cite only a few early authors, while the stability approach has generated a huge number of mostly technica...

"... No part of this publication may be reproduced, stored in a retrieval system, or be transmitted, in any form or by any means, electronic, mechanic, photocopying, recordning, or otherwise, without prior permission of the author. Preface The work presented in this thesis has been carried out at the Div ..."

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior permission of the author.

Preface

The work presented in this thesis has been carried out at the Division of Mechanics at Linköpings Universitet, with partial financial support from the Swedish Research Council (VR). I would like to thank my supervisor, Prof. Anders Klarbring, without whom there would have been no thesis. My co-supervisors, Prof. Matts Karlsson and Prof. Petter Krus, should also be acknowledged. Thanks to everyone at the Division of Mechanics for good company. Special thanks to Dr. Jonas Stålhand, for your company, but also for introducing me to the field of mechanics. I wish to express my gratitude to my parents, Johan and Ulla, and my brothers, Andreas and Fredrik, as well as to my other relatives and my friends outside the world of mechanics. Last but not least, I thank my lovely wife Eva and my beautiful sons, August and Eric.

...es, but one can at least mention a few references covering the use of RNNs for solving combinatorial optimization problems [116]; networks with delays [35]; networks with non-symmetric weight matrices [117, 85]; and algorithms for optimization — or, in this context, "training" — of such networks [8]. The relation between artificial neural networks and their biological counterparts is covered in [65, 108], an...