The parameters, also called synaptic weights, are always stored in the
first element of a network, as described in 3.2.3: Network Format. The meaning of the individual
parameters is explained in more detail in 13.1: Change the Parameter Values of an Existing Network.
Most neural networks can be evaluated on a symbolic input vector; in that case
the meaning of the parameters can be illustrated as in the example below.

Load the package.

Generate a network for demonstration purposes.

The numerical values of the parameters are all placed in the first element
of the network.

You can extract the first element and process it with any Mathematica
commands you like.
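As a minimal sketch, assuming `net` is an existing network object created with the package, the parameters can be pulled out with ordinary part extraction:

```mathematica
(* net is assumed to be an existing network object from this package. *)
weights = net[[1]];   (* the first element holds all numerical parameters *)
Flatten[weights]      (* view every parameter value as a flat list *)
```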

An easy way to identify the meaning of the individual parameters is to
apply the network to a symbolic input.
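Purely as an illustration, the hand-built one-layer expression below stands in for a real network object; applying it to a symbolic input shows where each parameter enters the computation:

```mathematica
(* Hypothetical one-input network: hidden-layer weight w1 and bias b1,
   output weight w2 and bias b2. *)
net[x_] := w2 Tanh[w1 x + b1] + b2;

net[u]   (* a symbolic input u exposes the role of each parameter *)
```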

If you need to implement the trained network in, for example, a C program,
you might find the following command useful.

For most types of neural networks you concatenate the data sets into one set
before the training. This is shown in the first example below. For dynamic
neural networks, neural ARX and neural AR, it is slightly more complicated
because the individual data items are correlated. How to handle this case
is shown in the second example.

Nondynamic Neural Networks

In this data file, the input and output matrices are assigned to the variable
names x and y respectively. Once the data set has been loaded,
you can query the data using Mathematica commands. To better
understand the data format and variable name assignment, you may also want
to open the data file itself.

Load a data file containing input data x and output data y.

Show the contents of the input and output matrices.

Check the number of data items and the number of inputs and outputs for each.
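Assuming the matrices have been loaded into x and y as above, a check with the built-in Dimensions command reports the counts:

```mathematica
Dimensions[x]   (* {number of data samples, number of inputs}  *)
Dimensions[y]   (* {number of data samples, number of outputs} *)
```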

There are 20 data samples; the input has one dimension, and the output has
two. Introduce a second data set.

The dimensionality of the input and output must be the same in the two data
sets.

A new, joined data set can now be constructed by concatenating the two input
data sets, and similarly with the output data sets.
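A sketch of the concatenation, assuming the two data sets are held in x1, y1 and x2, y2:

```mathematica
(* x1, y1 and x2, y2 are assumed to be the two loaded data sets. *)
x = Join[x1, x2];   (* stack the input matrices row-wise *)
y = Join[y1, y2];   (* stack the output matrices the same way *)
```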

Now you can start training your network with the combined data set.

Dynamic Neural Networks

Data from dynamic systems are correlated, and if two data sets are
concatenated as described in the previous example, a transient
is introduced in the data. This transient might be small and unimportant.
There is, however, a way to avoid the transient entirely: build the
regressor, described in 2.6: Dynamic Neural Networks, for each data set
separately and then concatenate the results.
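The package's own regressor-building command should be used here; purely to illustrate the idea, a hand-rolled ARX regressor for scalar signals might look like the following (makeRegressor, na, and nb are hypothetical names, not part of the package):

```mathematica
(* Hypothetical helper: builds the lagged regressor for scalar series
   u (input) and y (output), with na output lags and nb input lags. *)
makeRegressor[u_, y_, na_, nb_] :=
  Table[Join[Table[y[[k - i]], {i, na}], Table[u[[k - j]], {j, nb}]],
        {k, Max[na, nb] + 1, Length[y]}];

(* Build the regressor for each data set separately, then concatenate;
   no artificial transient is introduced at the joint. *)
reg = Join[makeRegressor[u1, y1, 2, 2], makeRegressor[u2, y2, 2, 2]];
```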

Put the trained network back in the first position of the neural ARX model.
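Assuming model is the neural ARX object and trainedNet is the network returned by training, the replacement is ordinary part assignment:

```mathematica
(* model and trainedNet are assumed to come from the preceding steps. *)
model[[1]] = trainedNet;   (* the network sits in the model's first position *)
```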

You have now obtained a neural ARX model trained on both data sets. The
neural ARX model can be used, for example, to simulate the behavior of one
of the data sets.

Q:

How can I implement neural networks in other programming
languages?


You can convert a neural network to program code in either C or Fortran
using the Mathematica commands CForm or FortranForm.
This code can then be inserted into external programs. Here is an
illustration of how to do this conversion on a feedforward network.

Load the package.

Generate some data.

Generate a network.

You can now obtain the program code by first producing a symbolic
expression of the network and then using the Mathematica commands
CForm or FortranForm.

The input to the network should be a list of symbols, one symbol for each
input of the neural network. In this example there is only one input.
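Putting the pieces together as a sketch, with a small expression standing in for the symbolic output of a real network and u as the single input symbol:

```mathematica
(* Stand-in for the network evaluated on the symbolic input list {u}. *)
expr = w2 Tanh[w1 u + b1] + b2;

CForm[expr]        (* the expression rewritten in C syntax *)
FortranForm[expr]  (* the same expression in Fortran syntax *)
```

The printed forms can be pasted directly into a C or Fortran source file, with the parameter symbols replaced by the trained numerical values.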