There is a mathematical error in our publication entitled “Optimal stimulus encoders for natural tasks” (Journal of Vision, 9(13):17, 1–16). The error is in text Equation 10 and in Appendix A, where Equation 10 is derived. The error has no effect on the first example application concerning “image patch identification,” and only a minor effect on the second example application concerning “foreground identification.” Nonetheless, under some circumstances, the mathematical error might have more substantial consequences. Here we provide replacements for the text on p. 5 concerning Equation 10 and for Appendix A.

where nk is the number of training samples from category k, and Z is a normalization factor. In keeping with the approximation in Equation 4, the logarithm of this formula gives the average relative entropy when the stimulus is s(k,l). Thus, Equations 5–10 provide a closed-form expression for the average relative entropy of the posterior probability distribution (that the ideal observer computes) for arbitrary samples from the joint probability distribution of environmental categories and associated stimuli, p0(k,l).
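As a minimal numerical sketch (not from the original paper): when the reference distribution is a delta function concentrated on the true category, the relative entropy of the posterior reduces to the negative log posterior probability assigned to that category, so the average over samples can be computed directly. The function name `avg_relative_entropy` and the toy numbers are our own.

```python
import numpy as np

def avg_relative_entropy(posteriors, true_idx):
    """Average relative entropy between each posterior and a delta
    distribution on the true category: D(delta || p) = -log p[true]."""
    posteriors = np.asarray(posteriors, dtype=float)
    # pick out each sample's posterior probability of its true category
    p_true = posteriors[np.arange(len(true_idx)), true_idx]
    return float(np.mean(-np.log(p_true)))

# Two toy samples: posteriors over 3 categories, true categories 0 and 2.
p = [[0.7, 0.2, 0.1],
     [0.1, 0.1, 0.8]]
print(avg_relative_entropy(p, [0, 2]))
```

A perfectly confident, correct posterior gives zero average relative entropy; probability mass leaking to wrong categories increases it.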

To estimate the optimal linear receptive fields we use a ‘greedy’ procedure. In other words, neurons are added to the population one at a time, with each neuron's receptive field being selected to produce the biggest decrease in decoding error. Specifically, we proceed sequentially by first finding the encoding function r1(k,l) that minimizes D̄1; next we substitute the estimated r1(k,l) into Equation 10 and find the encoding function r2(k,l) that minimizes D̄2; then we substitute the estimated r1(k,l) and r2(k,l) into Equation 10, and find the encoding function r3(k,l) that minimizes D̄3, and so on.
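The greedy procedure can be sketched as follows. The candidate receptive fields and the least-squares `decoding_error` below are hypothetical stand-ins for the paper's encoding functions and average decoding error D̄; only the selection loop itself illustrates the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.standard_normal((8, 6))   # 6 candidate receptive fields

def decoding_error(chosen):
    # Toy stand-in for D-bar: squared error when reconstructing a fixed
    # target signal from the chosen receptive fields (least squares).
    target = np.arange(8, dtype=float)
    if not chosen:
        return float(target @ target)
    A = candidates[:, chosen]
    resid = target - A @ np.linalg.lstsq(A, target, rcond=None)[0]
    return float(resid @ resid)

def greedy_select(n_neurons):
    """Add receptive fields one at a time, each chosen to produce the
    biggest decrease in decoding error given those already selected."""
    selected, remaining = [], list(range(candidates.shape[1]))
    for _ in range(n_neurons):
        best = min(remaining, key=lambda c: decoding_error(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected

print(greedy_select(3))
```

Each iteration re-evaluates the error with the already-fixed receptive fields in place, mirroring the substitution of r1(k,l), r2(k,l), … back into Equation 10.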

Replacement for Appendix A:

Appendix A

Here we derive formulas for the posterior probability distribution that is computed by the ideal Bayesian observer when receiving a population response Rq( k, l) to a presentation of stimulus s( k, l). (Keep in mind that the ideal observer does not know that the stimulus is s( k, l), but does know the mean response of each neuron in the population to each stimulus in the training set.) According to Bayes' rule:

\[
p(x \mid R_q(k,l)) = \frac{p(R_q(k,l) \mid x)\, p(x)}{\sum_{i=1}^{m} p(R_q(k,l) \mid i)\, p(i)}
\]

To derive text Equation 10 we expand the above equation using the definition of conditional probability:

Assuming that the samples are representative of the natural world, the prior probability of a category is the fraction of training samples from the category, and the prior probability of a particular sample from a category is the inverse of the number of training samples within the category: p(i) = ni/n, p(j∣i) = 1/ni. Thus,
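A minimal sketch of Bayes' rule with these sample-count priors, assuming the per-category likelihoods p(R∣i) are available as numbers; the function name `posterior` and the toy values are our own.

```python
import numpy as np

def posterior(likelihoods, n_per_category):
    """Posterior over categories given a population response R.

    likelihoods[i] = p(R | category i); prior p(i) = n_i / n, where
    n_i is the number of training samples in category i."""
    n_i = np.asarray(n_per_category, dtype=float)
    prior = n_i / n_i.sum()                  # p(i) = n_i / n
    joint = np.asarray(likelihoods) * prior  # numerator of Bayes' rule
    return joint / joint.sum()               # normalize by the sum over i

# Three categories with 50, 30, and 20 training samples.
print(posterior([0.2, 0.05, 0.1], [50, 30, 20]))
```

Note that multiplying by p(i) = ni/n weights each category by its share of the training set, which is exactly how the sample counts enter the corrected Equation 10.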