In this post, I'll submit two hypotheses about dream function. They are based on the following observations:
=> dream imagery is produced from random impulses;
=> neural networks(1) are effective brain simulators.
From these observations, I will try to find out why random inputs can be useful in a neural network, and therefore in the brain.

(1) Neural networks are AI computer programs which attempt to imitate the way a human brain works. A neural network works by creating connections between artificial neurons, the computer equivalent of biological neurons. The organization and weights of the connections determine the output.

Dream imagery and random impulses:

In 1977, Drs. Allan Hobson and Robert McCarley of Harvard University found that in REM sleep,

Quote:

"the forebrain is activated and bombarded with partially random impulses generating sensory information within the system. The activated forebrain then synthesizes the dream out of the internally generated information, trying its best to make sense out of the nonsense it is being presented with. [...] These stimuli, whose generation appears to depend upon a largely random or reflex process, may provide spatially specific information which can be used in constructing dream imagery. [...]
The forebrain may be making the best of a bad job in producing even partially coherent dream imagery from the relatively noisy signals sent up to it from the brain stem."

As for dream function, we will retain that Hobson and McCarley saw dreaming as a maintenance process for the neurons, activating and testing at regular intervals the brain circuits that underlie our behavior (including cognition and meaning attribution), and that this test program is essential to normal brain-mind functioning.

In 1983, Nobel Laureate Francis Crick and Graeme Mitchison proposed another function of dream sleep. Their theory was derived from the hypothesis that the cerebral cortex, as a completely interconnected network of neurons,

Quote:

"is likely to be subject to unwanted or 'parasitic' modes of behavior, which arise as it is disturbed either by the growth of the brain or by the modifications produced by experience."

Thus, the dream function

Quote:

"is to remove certain undesirable modes of interaction in networks of cells in the cerebral cortex. [...] This is done in REM sleep by a reverse learning mechanism, so that the trace in the brain of the unconscious dream is weakened, rather than strengthened, by the dream. [...]
This mechanism is based on the more or less random stimulation of the forebrain by the brain stem that will tend to excite the inappropriate modes of brain activity... especially those which are too prone to be set off by random noise rather than by highly structured specific signals."

In brief, it seems that random impulses during REM sleep could be useful for maintaining, testing and correcting the brain's neural network.

Comparison with artificial neural networks:

We may now wonder whether an equivalent exists in artificial neural networks.
And indeed we find one in NN learning: training data must be presented to the NN in random order. If not, the network can be "traumatized" by the first data it sees and learning fails.
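A minimal sketch of the point about random presentation order (the single perceptron, the OR task, and the hyperparameters are my illustrative assumptions, not a model from the literature):

```python
import random

def train_perceptron(data, epochs=50, lr=0.1, seed=0):
    """Train a single perceptron, reshuffling the data each epoch."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        rng.shuffle(data)  # present examples in a fresh random order
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# OR function: linearly separable, so the perceptron can learn it exactly.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(list(data))
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The `rng.shuffle` line is the point: each epoch sees the data in a new random order, which is the standard guard against the order bias described above.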

That's my first hypothesis, and it's not so far from Hobson's. We can compare a NN with a river system. Imagine a landscape with hills and valleys. Random water drops on the landscape will follow different paths and irrigate different parts of the landscape. In the case of NN trauma, it's as if all the drops always went down into the same river: the output of the NN will be the same, regardless of the inputs. It's the "parasitic mode of behaviour" that Crick and Mitchison are talking about.
Let's suppose a dream where the first random input is "elephant", and the brain makes the following associations: elephant, trunk, sex. And for a second random input, "cliff": cliff, grotto, vagina, sex. There must be a problem somewhere!
In a dream, while stimulated by random inputs, the brain could be able to recognize such problems and correct them.

Crick and Mitchison also stress the problems which could be caused by the modifications produced by experience.

These problems find their equivalent in artificial NNs too, where they are called catastrophic interference:

Quote:

Near the end of the 1980's a serious problem with many connectionist models came to light, namely, the problem of catastrophic forgetting (McCloskey & Cohen, 1989), where new learning completely destroys previously learned information. McClelland, McNaughton, and O'Reilly (1995) suggested that the brain's way of avoiding this problem was the development of two complementary learning systems, the hippocampus and the neocortex. New information was learned in the hippocampus and old information was stored out of harm's way in the neocortex. At about the same time, French (1997) and Ans & Rousset (1997, 2000) suggested dual-network connectionist architectures to overcome this problem. [...]
The aim of the dual-memory model is to transfer information from a "short term memory network" to a "long term memory network" using "pseudopatterns" (an input-output association consisting of a random input vector sent through the network and, thus, being associated with the network's output). Pseudopatterns provide an approximate means of transferring information from one network to the other. The LTM network learns the knowledge contained in the STM network by learning the output produced by the STM network when bombarded with random inputs. (Dr. Martial Mermillod, Cognitive Science Department. University of Liege, Belgium)
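The pseudopattern transfer described in the quote can be sketched as follows (the linear "networks" and the learning rate are my toy assumptions, not Ans & Rousset's actual architecture):

```python
import random

random.seed(1)

def stm(x):
    """Frozen 'short-term memory' network (a toy linear mapping)."""
    return 0.8 * x[0] - 0.3 * x[1]

ltm_w = [0.0, 0.0]  # 'long-term memory' network, initially blank

# Pseudopatterns: random input vectors paired with the STM network's response.
for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    target = stm(x)  # what the STM network 'says' about this random input
    out = ltm_w[0] * x[0] + ltm_w[1] * x[1]
    err = target - out
    ltm_w[0] += 0.05 * err * x[0]  # LMS update toward the STM's behavior
    ltm_w[1] += 0.05 * err * x[1]
```

After the loop, `ltm_w` approximates the STM mapping even though the LTM network never saw a single "real" training example, only random probes and the STM's answers to them.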

In conclusion, we can formulate the hypothesis that random inputs are useful in artificial neural networks for correcting learning problems due to excessively organized input data causing traumas, and for enabling progressive learning.
It could be the same in the human brain.

In my former post, I presented two concepts which could, in my opinion, explain the functions of dreaming. These concepts were:

- random inputs and their effects;
- a dual neural network: one network is randomly stimulated while the other collects the outputs of the first.

I found a scientific article explaining that a "Creativity Machine" could be built from these two concepts. Its authors call it a "dreaming" neural network.
For instance, imagine a neural network which has been trained to recognize font characters from A to Z. If it is stimulated by random inputs, it can create new letters from what it has learned. Then, the second neural network collects the results.
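A toy illustration of the weaker half of this claim, namely that random probes still produce outputs shaped by what was learned (the 3x3 "letters" and the nearest-prototype completion are my simplification; Thaler's networks can of course also blend prototypes into genuinely novel patterns):

```python
import random

random.seed(2)

# Two learned 3x3 binary patterns standing in for trained font characters.
prototypes = {
    "T": [1, 1, 1, 0, 1, 0, 0, 1, 0],
    "L": [1, 0, 0, 1, 0, 0, 1, 1, 1],
}

def complete(x):
    """Map any input to the closest learned pattern (vector completion)."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(x, p))
    return min(prototypes, key=lambda k: dist(prototypes[k]))

# 'Dreaming': purely random inputs still yield outputs drawn from learning.
dreamed = [complete([random.random() for _ in range(9)]) for _ in range(10)]
```

Every random probe lands on some learned structure rather than on noise.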

This shows that during the dream state, the brain can create, from random inputs, new patterns which can be useful (or not) in concept attainment and world understanding.
Thus, randomness of inputs doesn't mean inconsistency of outputs, as Drs. Allan Hobson and Robert McCarley thought in 1977.

Geoffrey Hinton created Deep Belief Networks, which are hierarchical generative models, i.e. neural networks stacked on top of each other and capable of generating data, i.e. "dreaming" or "fantasizing". See a YouTube talk here, or a demo here (click a digit top-left and press play).

Friston also uses the same kind of generative models, only in a different theoretical framework. He argues that the brain learns using "active inference", which can be modelled by "empirical Bayes" in a dynamic hierarchical neural network.

These are pointers to more current/relevant literature on this subject.

I think I have to correct a piece of information I gave in my second post: in Stephen Thaler's Creativity Machines, noise is not added to the inputs but to the weights of the connections between neurons. So, in my opinion, it doesn't emulate the brain's actual functioning during REM sleep, and we should set it aside.

As for Hinton's "dreaming" and "fantasizing", thank you for this information. As far as I understand his very interesting new neural network architecture (a long time has passed since I worked with NNs, and what he describes had not yet been invented, so I have some trouble getting back into the theory), I think it would be better termed "expectation". His method enhances learning by minimizing the "surprise" between expectations and observations (what Friston calls "minimizing the free energy"). So, at first sight, it relates more to the way we learn while awake during the day than to the random firing of neurons during REM sleep.
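A deliberately crude caricature of "minimizing surprise" (a scalar expectation and a squared prediction error; Friston's free-energy formulation is far richer than this):

```python
# Repeated observations of roughly the same quantity.
observations = [2.0, 2.1, 1.9, 2.05, 1.95]

expectation = 0.0   # the model's current prediction
errors = []
for obs in observations * 20:      # repeated exposure, as in waking learning
    err = obs - expectation        # prediction error ('surprise')
    errors.append(err * err)
    expectation += 0.1 * err       # nudge the expectation to reduce it
```

Surprise shrinks with exposure, and nothing here depends on random internal firing, which is why this mechanism looks more like waking learning than like REM noise.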

Another interesting point in Hinton's model is that he found an efficient way of preventing an error from backpropagating through the hidden layers of the neural network, as sometimes happened in old multilayer perceptrons. But I don't think there has been much investigation into the effects of this random firing on neural backpropagation.

Thanks, Sploosh, for having found the text of the patent. I didn't know of it, so my comment above was based on Stephen Thaler's articles on the Imagination Engines website.

The article Neural Networks That Autonomously Create and Discover emphasizes the gradual degradation of the network's mapping through random disturbances applied to its connection weights. It explains how Creativity Machines utilize what Thaler calls the "virtual input effect". In A Quantitative Model of Seminal Cognition, this "virtual input effect" is explained as the network "perceiving" illusory inputs (when in fact they are set to zero) while its internal architecture is progressively destroyed by pruning its connections. Thaler compares this effect to "sensory deprivation, in effect hallucinating within a silent and darkened room".

A similar explanation can be read in The Imagination Engine® or Imagitron™ where the Imagination Engine is "internally 'tickled' by randomly varying the connection weights joining neurons".

"When supplied no external inputs, the production of meaningful activations by the network relies upon a different brand of vector completion than is normally discussed. Rather than fill in incomplete or corrupted input patterns, the net attempts to complete internal, noise-induced activation patterns within the net’s encryption layers. Therefore, any local or temporary damage to the network’s mapping is interpreted by downstream layers as some "familiar" activation pattern normally encountered upon application of a training exemplar to the network’s inputs (Thaler, 1995). Because of the many combinatorial possibilities in perturbing connection weights within a network, we arrive at a means for generating proportionately more novel schema than is possible with input perturbations alone. Furthermore, because the connection traces within a trained neural network generally correspond to the rules binding the underlying conceptual space together, such stochastic perturbation schemes serve to soften these rules, in turn allowing a gradual departure from the known space of possibilities. The result is a strictly neurological search engine whose internal noise level may be parametrically increased to achieve progressively more novel concepts. I call such a chaotic network an imagination engine or IE."

All these articles, when describing the Imagination Engine of the Creativity Machine, stress the addition of noise within the connection weights. It sounds as though input perturbations are considered rather useless, or less efficient, so that the inputs are generally set to a constant.
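A sketch of this weight-noise "tickling" under zero input (the two-layer toy net and the noise scale are my assumptions, not the patented Imagination Engine):

```python
import math
import random

random.seed(3)

# With the inputs clamped to zero, the hidden activations come only from the
# biases; noise on the downstream weights still reshapes the output pattern.
b1 = [0.4, -0.7]                  # hidden-layer biases
w2 = [[0.8, -0.3], [0.2, 0.5]]    # hidden -> output connection weights

def forward(w2_now):
    hidden = [math.tanh(b) for b in b1]  # inputs are all zero
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2_now]

baseline = forward(w2)

# 'Tickle' the connection weights rather than the inputs: each pass yields a
# different activation pattern even though no input ever changes.
outputs = [forward([[w + random.gauss(0, 0.3) for w in row] for row in w2])
           for _ in range(5)]
```

Each perturbed pass produces a different output vector from a constant (silent) input, which is the "hallucinating within a silent and darkened room" effect in miniature.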

Now, the US patent may cover aspects which are not used in the Creativity Machine strictly speaking. And indeed, in the "Discussion of the prior art" section of the US patent, one can read:

"Therefore, a neural network trained to generate the surface profiles of some device or object such as a known mountain range would tend to produce very plausible but unfamiliar mountain ranges if the inputs are subjected to random stimulations. Similarly, a neural network trained to only produce classical music would tend to produce potential classical themes when exposed to random inputs".

Sorry, Hinton has done nothing new but create some misleading terminology and strategically omit critical references (see the numerous Thaler patents, wherein the critic algorithm may take on any form, and "A Proposed Symbolism for Network-Implemented Discovery Processes," World Congress on Neural Networks, 1996, where all manner of noise-driven cascades are described).

Sorry again. From a functional perspective, neural networks are largely synapses, representing a volume effect. Comparatively, input layers are a surface effect. Therefore, networks are most sensitive to any "dinking around" with the connections. Noisy PGO bursts to the cortex are likely only the "spark plug." Experience emotion in your dream (i.e., angst) and the endocrine system does its job, permeating the entire synaptic structure with perturbations that can transmogrify your "customs house," as well as provide abrupt discontinuities within dream sequences.