This appears to be a bug. In particular, it seems to cause an MnistDataSetIterator constructed with numExamples < 60000 to iterate over a new dataset on each call to reset(). The documentation is not explicit about this, but it does not seem to be the intended behavior.
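For reference, here is a minimal sketch of how one might observe this (this assumes the two-argument MnistDataSetIterator(batchSize, numExamples) constructor; the class name MnistResetCheck and the exact comparison are just for illustration):

```java
import java.io.IOException;

import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.nd4j.linalg.dataset.DataSet;

public class MnistResetCheck {
    public static void main(String[] args) throws IOException {
        // Batch size 10, numExamples 500 (< 60000, which triggers subset sampling)
        MnistDataSetIterator iter = new MnistDataSetIterator(10, 500);

        DataSet before = iter.next(); // first batch of the initial subset
        iter.reset();
        DataSet after = iter.next();  // first batch after reset()

        // If reset() re-samples the underlying subset, these will usually differ
        System.out.println("First batch unchanged after reset: "
                + before.getFeatures().equals(after.getFeatures()));
    }
}
```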

@Charele It is a design choice, and of course one could keep the current behavior. Either way, the actual behavior should be clearly documented to avoid confusion.

In my case I wanted to train a classifier over a number of epochs on a limited set of examples. To do this I used an MnistDataSetIterator with numExamples set to, say, 500, and called fit on a MultiLayerNetwork with the iterator as the argument (see the sketch below). I would expect this to train the classifier on a fixed dataset of 500 examples, but because of this issue the classifier is trained on a new random subset for each epoch. This was very confusing, because I would end up with a surprisingly good classifier even when training on a very small dataset.
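A sketch of the setup described above; the network architecture here is a made-up minimal example and not what I actually used, the point is only the iterator usage and the epoch loop:

```java
import java.io.IOException;

import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class SmallMnistTraining {
    public static void main(String[] args) throws IOException {
        // Minimal placeholder network; the exact architecture is not the point here
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)
                .list()
                .layer(0, new DenseLayer.Builder().nIn(784).nOut(64)
                        .activation(Activation.RELU).build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(64).nOut(10).activation(Activation.SOFTMAX).build())
                .build();
        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        // Intended: train for 20 epochs on the *same* 500 examples
        MnistDataSetIterator iter = new MnistDataSetIterator(50, 500);
        for (int epoch = 0; epoch < 20; epoch++) {
            // fit() consumes the iterator, which gets reset between epochs;
            // with the behavior described above, each epoch may then draw
            // a fresh random subset of 500 examples
            net.fit(iter);
        }
    }
}
```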