A Fixed-Distribution PAC Learning Theory for Neural FIR Models

Abstract

PAC learning theory provides a framework for assessing the learning properties of static models. The theory has been extended to cover modeling tasks with m-dependent data, under the restriction that the data follow a uniform distribution. This extension can be applied to the learning of nonlinear FIR models, again subject to the uniform-distribution restriction.

In this paper, the PAC learning scheme is extended to handle FIR models under any fixed data distribution. This fixed-distribution, m-dependent extension of PAC learning theory is then applied to the learning of FIR three-layer feedforward sigmoid neural networks.
