We elucidate a practical
method in Deep Learning called the minibatch, which is very useful for avoiding
local minima. The mathematical structure of this method is, however, somewhat
obscure. We emphasize that a certain condition, which is not explicitly stated
in ordinary expositions, is essential for the minibatch method. We present a
comprehensive description of Deep Learning for non-experts, together with the mathematical
reinforcement.

Deep Learning is one of the methods of Machine Learning and it has attracted much attention recently. Learning in general is divided into supervised learning and unsupervised learning. In this paper we discuss only supervised learning. For a general introduction to Deep Learning, see for example [1]. The monographs [2] [3] [4] are standard textbooks.

*Dedicated to the memory of Professor Ichiro Yokota.

Deep Learning is based on big data, so supervised learning places a heavy load on the computer. In order to alleviate the burden we usually use a practical method called the minibatch (a collection of randomly selected small data from big data, see Figure 5). Although the method is commonly used, it is not widely understood why it is so effective from the mathematical viewpoint. We point out in Theorem II that a certain condition, which is practically satisfied in ordinary applications, is essential for the effectiveness of the minibatch method.

In this paper we present a short and concise review of Deep Learning for non-experts and provide a mathematical reinforcement of the minibatch from the viewpoint of Linear Algebra. Theorems I and II in the text are our main results, and some related problems are presented.

After reading this paper, readers are encouraged to tackle a remarkable paper [5] which is definitely a monumental achievement.

2. Simple Neural Network

As a general introduction to Neuron Dynamics, see for example [6] [7].

For non-experts let us draw a neuron model based on the step function $S(x)$. In Figure 1 the set $\{x_1, x_2, \cdots, x_n\}$ is the input data, $z = S(y)$ is the output data and the set $\{w_1, w_2, \cdots, w_n\}$ is the weights of the synaptic connections. The parameter $\theta$ is the threshold of the neuron.

Here, $z = S(x)$ is a simple step function defined by

$$
S(x) =
\begin{cases}
0 & x < 0 \\
1 & x \geq 0
\end{cases}
$$
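For non-experts, a minimal Python sketch of this neuron may help. It assumes the usual weighted-sum rule $y = \sum_i w_i x_i - \theta$, which we take to be what Figure 1 depicts; it is only an illustration, not the paper's formula.

```python
# Minimal sketch of the step-function neuron of Figure 1.
# Assumption: the neuron computes y = sum_i w_i * x_i - theta and outputs z = S(y).

def S(x):
    """Step function: 0 for x < 0, 1 for x >= 0."""
    return 0 if x < 0 else 1

def neuron(x, w, theta):
    """Step-function neuron with inputs x, weights w and threshold theta."""
    y = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return S(y)

# Hypothetical numbers: two inputs, threshold 0.5
print(neuron([1.0, 0.0], [0.7, 0.3], 0.5))  # -> 1, since 0.7 - 0.5 >= 0
```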

Since an algorithm called the backpropagation in Neural Network Systems uses derivatives of the activation functions, the simple step function $S(x)$ is not suitable. Therefore, we usually use a function called the sigmoid function instead of the step function (see Figure 2). This function is used for a nonlinear compression of data.

The standard logistic function is characterized by $L = 1$, $\lambda = 1$, $x_0 = 0$. See a lecture note [8] as to why the function is so important.
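For non-experts, the logistic function with parameters $L$, $\lambda$, $x_0$ and its standard special case can be written out as follows (recalled here in the usual parametrization rather than quoted from the text); the derivative identity in the last equation is the reason the sigmoid suits backpropagation:

$$
f(x) = \frac{L}{1 + e^{-\lambda (x - x_0)}}, \qquad
\sigma(x) := \frac{1}{1 + e^{-x}}, \qquad
\sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr).
$$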

Let us draw a neuron model based on the sigmoid function, see Figure 3.

The most important thing is to improve the set of weights of the synaptic connections $\{w_1, w_2, \cdots, w_n\}$ by hard learning. For this purpose we prepare $m$ input data and teacher signals. In the following we assume $m < n$ and $\theta = 0$ for simplicity. This causes no problem for the present explanation. There is a suitable value of $\theta$ for each specific application.

The result shows that there is no critical point in this process except for $E = 0$. Therefore, when modifying $w_k(t)$ by the gradient descent (6) successively there is no danger of being trapped by points taking local minima. Let us summarize the result:

Theorem I. Under the assumption (8) there is only one global minimum $E = 0$.
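Since equation (6) and the explicit form of $E$ are not written out here, the following Python sketch is only an illustration of the idea: it assumes the usual squared error $E = \frac{1}{2}\sum_{i=1}^{m}\bigl(\sigma(w \cdot x^{(i)}) - d^{(i)}\bigr)^2$ for a single sigmoid neuron with teacher signals $d^{(i)}$, and the update $w \leftarrow w - \varepsilon\,\nabla E$.

```python
# Sketch of gradient descent for a single sigmoid neuron (assumptions:
# squared error E = 1/2 * sum_i (sigma(w . x_i) - d_i)^2 over m training
# pairs, update w <- w - eps * grad E; equation (6) of the text is not
# reproduced here, so this only illustrates the idea).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, d, eps=0.1, steps=1000):
    """X: (m, n) input data, d: (m,) teacher signals."""
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(steps):
        z = sigmoid(X @ w)                    # outputs for all m data
        grad = X.T @ ((z - d) * z * (1 - z))  # dE/dw via the chain rule
        w -= eps * grad                       # gradient descent step
    return w

# Hypothetical usage: m = 2 linearly independent data in n = 3 dimensions
X = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
d = np.array([1.0, 0.0])
print(train(X, d))
```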

Next, let us explain how to choose linearly independent data. We assume that n is huge and m is sufficiently small compared with n.

If m is not large, evaluation of the Gram determinant is not so difficult. It is well known that the input data are linearly independent if and only if the Gram determinant is non-zero (see for example [10]).
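As a numerical illustration (not taken from the text), the Gram matrix of $m$ data vectors $x^{(1)}, \cdots, x^{(m)} \in \mathbb{R}^n$ has entries $G_{ij} = \langle x^{(i)}, x^{(j)} \rangle$, and the data are linearly independent exactly when $\det G \neq 0$:

```python
# Check linear independence of m data vectors via the Gram determinant.
# G[i, j] = <x_i, x_j>; the vectors are linearly independent iff det G != 0.
import numpy as np

def gram_determinant(X):
    """X: (m, n) array whose rows are the data vectors."""
    G = X @ X.T          # m x m Gram matrix of inner products
    return np.linalg.det(G)

X = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])   # third row = first + second -> dependent
print(gram_determinant(X))        # approximately 0: not linearly independent
```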

Let us give a short explanation for non-experts. For simplicity we consider the case of $m = 3$ and set

We can show that there is no critical point in our system except for $\hat{E} = 0$. Therefore, when modifying $u_{ij}(t)$, $v_{ij}(t)$, $w_{ij}(t)$ by the gradient descent (14) (with $E$ replaced by $\hat{E}$) successively there is no danger of being trapped by points taking local minima.

Proof: The proof is essentially the same as that of Section 2.2. However, the proof is the heart of the paper, so we repeat it.
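Since the network behind the weights $u_{ij}$, $v_{ij}$, $w_{ij}$ and equation (14) is not written out here, the following Python sketch only illustrates the kind of system we have in mind: a network with two hidden layers whose weight matrices are $u$, $v$, $w$ and sigmoid activations, all three of which are to be updated by gradient descent. The layer sizes are assumptions made for illustration.

```python
# Sketch of a network with two hidden layers, whose weight matrices are
# u, v, w as in the text (the exact architecture and equation (14) are
# not reproduced here, so the layer sizes below are assumptions).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, u, v, w):
    """Forward pass: input x -> hidden h1 -> hidden h2 -> output z."""
    h1 = sigmoid(u @ x)   # first hidden layer, weights u_ij
    h2 = sigmoid(v @ h1)  # second hidden layer, weights v_ij
    z = sigmoid(w @ h2)   # output layer, weights w_ij
    return z

# Hypothetical sizes: n = 4 inputs, two hidden layers of width 3, one output
rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 4)), rng.normal(size=(3, 3)), rng.normal(size=(1, 3))
print(forward(np.array([1.0, 0.5, -0.5, 0.0]), u, v, w))
```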

Problem I. What is the meaning of the assumption in the total flow of data?

Lastly in this section let us comment on the minibatch of Deep Learning. When the input data A is huge, the calculation of the time evolution of the weights of the synaptic connections places a heavy load on the computer. In order to alleviate it, the minibatch is practical and very useful, see Figure 5.

where [k] is the Gauss symbol, which is the greatest integer less than or equal to k. For example, [3.14] = 3. However, I do not believe this is correct.
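Since the precise scheme of Figure 5 and the formula referred to above are not written out here, the following Python sketch only illustrates the generic idea: the big data set is randomly split into minibatches of a fixed small size, the Gauss symbol counting how many full minibatches fit into the data, and the weights are updated on each minibatch in turn.

```python
# Generic minibatch sketch (the exact scheme of Figure 5 is not reproduced
# here): split the data into batches of a fixed size, giving [N / batch_size]
# batches per epoch, where [.] is the Gauss symbol (floor).
import numpy as np

def minibatches(X, d, batch_size, rng):
    """Yield randomly chosen minibatches from data X and teacher signals d."""
    N = X.shape[0]
    perm = rng.permutation(N)
    for start in range(0, (N // batch_size) * batch_size, batch_size):
        idx = perm[start:start + batch_size]
        yield X[idx], d[idx]

# Hypothetical usage: 10 data in 3 dimensions, minibatches of size 3
rng = np.random.default_rng(0)
X, d = rng.normal(size=(10, 3)), rng.integers(0, 2, size=10).astype(float)
for Xb, db in minibatches(X, d, 3, rng):
    pass  # update the weights by gradient descent on (Xb, db) here
print("number of minibatches per epoch:", 10 // 3)  # [10/3] = 3
```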

4. Concluding Remarks

Nowadays, studying Deep Learning is standard for university students around the world. However, it is not so easy for them because Deep Learning requires a lot of background knowledge.

In this paper, we treated the minibatch of Deep Learning, gave a mathematical reinforcement to it from the viewpoint of Linear Algebra, and presented some related problems. We expect that young researchers will solve the problems in the near future.

As a topic related to Artificial Intelligence, there is Information Retrieval. Since space is limited, we only refer to some relevant works [11] [12] [13].

Acknowledgements

We wish to thank Ryu Sasaki for useful suggestions and comments.

NOTES

1. However, to choose the learning rate in (6) correctly is a very hard problem.

2. Figure 4 is of course not perfect because drawing a large figure is not easy.

Le, Q.V., et al. (2012) Building High-Level Features Using Large Scale Unsupervised Learning. Proceedings of the 29th International Conference on Machine Learning, Edinburgh, June 27th-July 3rd, 2012, 11.