I'm having difficulties with some basics regarding the application of feed-forward neural networks for regression. To be specific, let's say that I have an input variable $x \in \mathbb R^4$ and data that was generated from the unknown function $$f(x) = c_1 x_1^2 + c_2 x_2 x_3 + c_3 x_4$$ and my goal is to learn $f$ from samples $(x, y)$. How should I choose the network's layers? I've read here that most networks will be fine with a single non-linear hidden layer. But which activation function should I use in that layer? I tried rectifiers and sigmoids, but neither gave promising results.
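For concreteness, this is how I generate the training data — a minimal numpy sketch, with the constants $c_1, c_2, c_3$ set to $1$ purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
c1, c2, c3 = 1.0, 1.0, 1.0   # constants chosen arbitrarily for this sketch

# draw samples (x, y) from the unknown function f
X = rng.normal(size=(1000, 4))
y = c1 * X[:, 0]**2 + c2 * X[:, 1] * X[:, 2] + c3 * X[:, 3]
```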

When I choose the constants $|c_1|, |c_2| \ll |c_3|$, s.t. the value of $f(x)$ is mostly determined by the linear term in $x_4$, then I get satisfying results from a linear network without hidden layers:

But as $|c_1|$ and $|c_2|$ grow, the prediction error becomes larger, and I think that the reason is that the linear layers aren't capable of capturing the non-linearities in the data:
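This effect can be reproduced without any neural network at all: an ordinary least-squares linear fit (which is what a linear network without hidden layers converges to) has a small residual when $c_3$ dominates and a large one otherwise. A sketch, with assumed constants:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))

def f(X, c1, c2, c3):
    return c1 * X[:, 0]**2 + c2 * X[:, 1] * X[:, 2] + c3 * X[:, 3]

def linear_rms(c1, c2, c3):
    """RMS error of the best linear fit (with bias) to f."""
    y = f(X, c1, c2, c3)
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ w - y) ** 2))

rms_small = linear_rms(0.01, 0.01, 1.0)  # |c1|, |c2| << |c3|: near-linear target
rms_large = linear_rms(1.0, 1.0, 1.0)    # non-linear terms dominate the residual
```

The residual of the best linear fit grows with $|c_1|$ and $|c_2|$ no matter how long we train, which supports the suspicion that the error comes from the model class, not the optimization.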

1 Answer

Well, I'm still interested in a guideline or rule of thumb regarding: Given $n$ samples $\left(x, y\right)$, how to choose the hidden layers of a regression neural network? Proposals, comments and answers are highly welcome!

Nevertheless, in my question, I stated a particular situation. Despite its exemplary nature, I think that the choice of the hidden layers should be approached from the following point of view: the non-linearity in the relation $x \to y$ can be captured through the combination of two concepts:

univariate polynomials (e.g. $w_m x^m + \dots + w_0 x^0$)

"and"-junction of features (like $x_i x_k$)

Let me drop two side notes:

Yes, the combination of these two concepts yields multivariate polynomials, but for simplicity I didn't want to deal with them here.

I think it's a legitimate question: how do we actually know that these two concepts are involved in the unknown mechanism that generates $y$ from $x$? Well, we don't know that. But we can guess. Maybe kernel PCA will tell us.
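To illustrate why these two concepts suffice here: if we hand-craft features that implement them — a squared term and an "and"-junction product term — the regression becomes linear in those features and recovers the constants exactly. A minimal numpy sketch (the feature choice is assumed knowledge for this toy problem, which a real network would have to learn):

```python
import numpy as np

rng = np.random.default_rng(0)
c_true = np.array([1.0, 1.0, 1.0])   # c1, c2, c3, assumed for illustration

X = rng.normal(size=(500, 4))
y = c_true[0] * X[:, 0]**2 + c_true[1] * X[:, 1] * X[:, 2] + c_true[2] * X[:, 3]

# features implementing the two concepts:
# a univariate polynomial term (x1^2) and an "and"-junction (x2 * x3)
Phi = np.column_stack([X[:, 0]**2, X[:, 1] * X[:, 2], X[:, 3]])
c_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
# c_hat recovers c_true up to numerical precision
```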

With the same learning rate as before, training failed for this layout and resulted in a root-mean-square (RMS) error of about $8.3 \cdot 10^9$. After reducing the learning rate to $1 \cdot 10^{-3}$, the SGD algorithm at least converged properly:

However, the RMS error is much higher than it was with the first layout. This suggests that, despite its lower complexity in terms of neuron count, the second layout is somehow more sensitive to the learning-rate parameter. I'm still wondering where this comes from: explanations are highly welcome! Might it be related to the nature of backpropagation?
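One possible ingredient (a toy illustration, not an analysis of the actual network): gradient descent diverges once the learning rate exceeds a threshold set by the curvature of the loss, and multiplicative layers can sharply increase that curvature. Even on a one-dimensional quadratic loss $L(w) = \tfrac{1}{2} a w^2$ with assumed curvature $a = 100$, the step rule diverges for $\eta > 2/a$:

```python
def final_loss(lr, a=100.0, steps=50, w0=1.0):
    """Run plain gradient descent on L(w) = 0.5 * a * w**2 and return the final loss."""
    w = w0
    for _ in range(steps):
        w -= lr * a * w          # gradient of L is a * w
    return 0.5 * a * w**2

big = final_loss(0.1)     # lr above 2/a = 0.02: iterates blow up
small = final_loss(1e-3)  # small lr: iterates shrink toward the minimum
```

So a layout whose loss surface has higher curvature would shrink the range of workable learning rates, which is at least consistent with what I observed.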

Hi! Interesting post, I am trying this myself using Keras. What did you use to squash the outputs of the "and" layer? A simple sigmoid, a couple of dense layers before the sigmoid maybe?
– rll, May 27 '17 at 16:12