Graves' particular approach uses what's termed a "Long Short-Term Memory" (LSTM) recurrent neural network. As if the name weren't confusing enough, look at the basic network node, shown in the figure: this is definitely not the simple neural network of a decade ago. The network operates by predicting one data point at a time.[3]

One node of a "Long Short-Term Memory" recurrent neural network, as described in ref. 3.

This type of network has been shown to generate remarkably convincing simulated handwriting.
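The node in the figure can be read as the standard LSTM update: gated "input," "forget," and "output" paths around a persistent cell state. The following is a minimal sketch of one such step using NumPy; it omits the peephole connections of Graves' exact variant, and the tiny demo (hidden size, weight scales, the three-value pen input) is illustrative, not his configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    """One step of a standard LSTM cell (peephole connections omitted)."""
    H = h_prev.size
    z = Wx @ x + Wh @ h_prev + b      # all four gate pre-activations at once
    i = sigmoid(z[0:H])               # input gate
    f = sigmoid(z[H:2*H])             # forget gate
    g = np.tanh(z[2*H:3*H])           # candidate cell update
    o = sigmoid(z[3*H:4*H])           # output gate
    c = f * c_prev + i * g            # new cell state ("long-term" memory)
    h = o * np.tanh(c)                # new hidden output ("short-term" memory)
    return h, c

# Tiny demo: 3 input values per step, hidden size 4, small random weights.
rng = np.random.default_rng(0)
H, D = 4, 3
Wx = rng.normal(scale=0.1, size=(4 * H, D))
Wh = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):     # feed a short input sequence
    h, c = lstm_step(x, h, c, Wx, Wh, b)
```

Because the cell state `c` is carried forward and only multiplicatively gated, the network can remember context across many time steps, which is what makes point-by-point prediction of whole words feasible.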

The neural network required training data obtained "online"; that is, the (x,y) pen positions were recorded as the writing was taking place. The alternative, "offline" data, would be just images of the finished handwriting.[3] The training data were taken from the IAM online handwriting database (IAM-OnDB).[6] These data were obtained by having people write samples from a standard corpus (the Lancaster-Oslo/Bergen text corpus) on a "smart whiteboard."

Simulations can be as good, or as bad, as desired, depending on the amount of computing power used. Graves' network has a bias factor that allows generation of handwriting in varying degrees of perfection, with perfect handwriting being the average script of the multitude of writers. At a low bias setting, the same letter is written in a slightly different manner at different points in the script. The same line, rendered under different initial conditions, is shown in the figure.[4]
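The network's output at each step is a probability distribution over the next pen offset, and the bias is applied at sampling time: a higher bias shrinks each component's spread and sharpens the choice among components, pulling the samples toward the most probable, "neatest" stroke. A one-dimensional sketch of biased sampling from a Gaussian mixture, in the spirit of the scheme in ref. 3 (the function and variable names are mine):

```python
import numpy as np

def biased_sample(pi_logits, mu, log_sigma, bias, rng):
    """Draw one value from a 1-D Gaussian mixture with a sampling bias.
    Higher bias -> sharper component choice and smaller spread."""
    pi = np.exp(pi_logits * (1.0 + bias))   # sharpen the mixture weights
    pi /= pi.sum()
    sigma = np.exp(log_sigma - bias)        # shrink each component's spread
    k = rng.choice(len(pi), p=pi)           # pick a component...
    return rng.normal(mu[k], sigma[k])      # ...then draw from it

rng = np.random.default_rng(0)
pi_logits = np.array([1.0, 0.0])            # two components, first favored
mu = np.array([0.0, 5.0])
log_sigma = np.array([0.0, 0.0])
loose = [biased_sample(pi_logits, mu, log_sigma, 0.0, rng) for _ in range(500)]
neat = [biased_sample(pi_logits, mu, log_sigma, 5.0, rng) for _ in range(500)]
```

With zero bias the samples wander between both components; with a high bias they cluster tightly around the dominant one, which is exactly the "sloppy versus tidy" dial visible in the figure.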