Impressive, but as far as I can tell they are training and testing with perfect (noiseless) data that would not be present in real-world applications like weather forecasting.

Also the networks seem to be quite large, around 1600 nodes per input grid cell. I don't know machine learning well, so maybe that's normal, but I have heard of overfitting... (and heard audio examples of it).

I guess "all" it does is learn a function, in this case that particular PDE, from input-output pairs, which is very good of course. The connection with chaos seems a bit vague: if it learns a chaotic equation, its output will be chaotic, i.e., show sensitive dependence on initial conditions, hence no long-term weather forecasting. But I guess the idea is that we don't know the "weather equation," so it could figure it out. Even so, if the system is chaotic there can still be no long-term prediction. The "8 Lyapunov times" figure means, I think, that it starts from an accuracy of about \( e^{-8} \approx 0.03\% \).
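As a quick sanity check on that arithmetic (my own back-of-envelope sketch, not anything from the article), assuming the usual rule that errors in a chaotic system grow roughly like \( e^t \) with \( t \) in Lyapunov times:

```python
import math

# If a small error grows like e^t (t in Lyapunov times), then staying
# accurate for 8 Lyapunov times implies an effective initial error of
# about e^-8, since that is what becomes O(1) after the amplification.
t = 8
growth = math.exp(t)           # error amplification after 8 Lyapunov times
initial_error = math.exp(-t)   # initial error that reaches O(1) by then
print(f"amplification: {growth:.0f}x")          # amplification: 2981x
print(f"implied initial error: {initial_error:.2%}")  # implied initial error: 0.03%
```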

It would be fun to apply some machine learning (NN) method to, say, the Mandelbrot set: give it pairs of (c, iteration count) and see what it comes up with. I would guess just an interpolation of the examples without any new features, but maybe you can do something smarter.
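To make the idea concrete, here is a minimal sketch of what that dataset and the "just interpolation" baseline could look like; the sampling range, iteration cap, and nearest-neighbour stand-in for a trained NN are all my own assumptions:

```python
import random

def escape_count(c, max_iter=50):
    """Iterate z -> z^2 + c from z = 0; return the iteration at which |z| > 2."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter  # never escaped: "in the set" at this resolution

# Training pairs (c, iteration count), sampled over the usual viewing window.
random.seed(0)
data = []
for _ in range(2000):
    c = complex(random.uniform(-2, 1), random.uniform(-1.5, 1.5))
    data.append((c, escape_count(c)))

def predict(c):
    # Nearest-neighbour "model": pure interpolation of the examples,
    # i.e., the pessimistic guess of what a naive learner would do.
    return min(data, key=lambda pair: abs(pair[0] - c))[1]

print(predict(0j))       # a point deep inside the set
print(predict(1 + 1j))   # a point that escapes quickly
```

The interesting question is whether a real NN trained on such pairs would recover anything beyond this kind of lookup, e.g. the iteration rule itself.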

Thinking of the recent success of AlphaZero in learning chess by self-play, maybe you can do the reverse and make some scheme to learn underlying rules from examples. Say you give it many pictures and try to find an equation that would produce those pictures, in some way analogous to the M-set algorithm.