Discussion

Molter generated graphs for both 1D and 2D scenarios. I can compare my results to his for the 1D case; his 1D results are shown below. He generated a clear forward-progressing pattern that could initiate in any cell. His figures do not show any of the noise that mine do; I do not know whether his model is noise-free or whether the extraneous points are simply not plotted.
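
The forward-progressing pattern can be caricatured with a toy network: a chain of cells with purely asymmetric (forward-only) connections, so that a burst initiated at any cell propagates as a forward-moving front. This is an illustrative sketch under my own assumptions (binary activations, unit forward weights, a simple threshold update), not Molter's actual network equations.

```python
import numpy as np

def forward_replay(n_cells=10, start=3, steps=6):
    """Toy 1D replay: each cell excites only its forward neighbour,
    so activation initiated at any cell travels forward one step per
    update. Illustrative only -- not Molter's model."""
    # Asymmetric weight matrix: cell i projects only to cell i+1.
    W = np.zeros((n_cells, n_cells))
    for i in range(n_cells - 1):
        W[i + 1, i] = 1.0
    a = np.zeros(n_cells)
    a[start] = 1.0                       # replay can initiate in any cell
    history = [a.copy()]
    for _ in range(steps):
        a = (W @ a > 0.5).astype(float)  # threshold update advances the front
        history.append(a.copy())
    return np.array(history)

h = forward_replay()
# Each row has one active cell, shifted one position forward per step.
print(np.argmax(h, axis=1))  # → [3 4 5 6 7 8 9]
```

The key point of the sketch is that the forward direction is baked into the asymmetry of the weights, so the same pattern emerges regardless of which cell initiates it.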

My model was unable to verify Molter's results for 2D, but I can present his results and offer my thoughts. He produced two separate 2D scenarios: the first a completely open area, the second a maze. During reactivation of the open area, the "replay" was independent of the rat's path: wherever the replay was initiated, the place cells were activated radially outward from that point. Molter claimed that this indicated the rat was learning the entire area, not just its path. The maze showed similar results: if the maze contained two paths and the rat "learned" only one, reactivation "replayed" both. Again, Molter claimed that the hippocampus allowed the rat to optimize its path through the maze, even if it had never traversed the optimal path.
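
The radially outward activation in the open area can be mimicked by a toy grid of place cells in which each active cell excites its four neighbours on every step, so a seed anywhere produces an expanding front. This is a minimal sketch under my own assumptions (binary cells, 4-neighbour coupling on a square grid), not Molter's implementation.

```python
import numpy as np

def radial_replay(shape=(7, 7), seed=(3, 3), steps=3):
    """Toy 2D replay in an open arena: activation spreads from the
    initiation point to all 4-connected neighbours each step, giving
    a radially expanding front. Illustrative only -- not Molter's model."""
    active = np.zeros(shape, dtype=bool)
    active[seed] = True                  # replay can initiate anywhere
    for _ in range(steps):
        spread = active.copy()
        spread[1:, :] |= active[:-1, :]  # excite neighbour below
        spread[:-1, :] |= active[1:, :]  # excite neighbour above
        spread[:, 1:] |= active[:, :-1]  # excite neighbour to the right
        spread[:, :-1] |= active[:, 1:]  # excite neighbour to the left
        active = spread
    return active

field = radial_replay()
print(field.astype(int))  # diamond of Manhattan radius 3 around the seed
```

Note that in this sketch the radial pattern follows directly from the pre-wired grid connectivity, independent of any "path" the simulated rat took, which is exactly the property that makes me question whether the effect reflects learning or the wiring of the matrix.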
These results seem suspect to me; it is at least possible that they are an artifact of the setup. The maze results are particularly startling. The alternative route through the maze must be coded from the beginning for the program to work, and it is probably coded in the same matrix, using the same mechanism, as the learned route. A real rat should not necessarily have knowledge of a route it has never traveled. Similarly, for the open area, it seems unlikely that the place cells would respond as they did if they were not pre-wired into a matrix on a computer. I would have liked to test this, but my model failed at an earlier point and never reached the 2D stage. I would like to see these results verified with an independent computer model or with in vivo experiments.