There are now numerous demonstrations that different sources of sensory information contribute to a perceptual estimate in accordance with their statistical reliabilities. Specifically, when combining two or more sensory cues about an object property, the system weights the cues in proportion to their reciprocal variances. In so doing, it minimizes the variance of the estimate of the object property. Of course, variances change from one object property to the next and from one situation to another. Does the brain have to calculate or learn the variances associated with each cue for each property and situation? We propose a biologically plausible model in which explicit calculation of variances (or weights) is unnecessary. Consider the combination of information from two senses. In the model there are two populations of neurons, one for each sense. Each neuron is characterized by its tuning function for the object property in question and by the statistics of its responses (modeled after V1 neurons). The distribution of responses across each population indicates the most likely value of the object property and the uncertainty, according to that sense. Multiplying these two distributions point by point (with the two populations in registration with respect to the object property being estimated) yields another distribution. The peak of this distribution, obtained by fitting a smooth function, is the model's estimate of the object property in question. The model's behavior is quite similar to that of a maximum-likelihood integrator across a wide variety of situations. When the difference between the two inputs is relatively small, the combined estimate shifts toward the input of lower variance and has lower variance than either input by itself. When the difference between the two inputs is large, the model exhibits statistical robustness. The model can be expanded to incorporate inputs from several sensory cues.
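For Gaussian likelihoods the two descriptions above coincide: weighting each cue by its reciprocal variance gives the same estimate as multiplying the two distributions point by point and taking the peak of the product. The following is a minimal numerical sketch, not an implementation of the neural model itself; the means and variances of the two cues are illustrative values chosen for the example.

```python
import numpy as np

# Illustrative single-cue estimates (e.g., two senses reporting one object property).
mu_a, sigma_a = 10.0, 1.0   # cue A: mean and standard deviation
mu_b, sigma_b = 12.0, 2.0   # cue B: less reliable (higher variance)

# Reciprocal-variance (maximum-likelihood) weighting of the two cues.
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_b**2)
w_b = 1 - w_a
mle_estimate = w_a * mu_a + w_b * mu_b
# Combined variance is lower than either input variance alone.
mle_var = 1 / (1 / sigma_a**2 + 1 / sigma_b**2)

# Point-by-point multiplication of the two distributions over a common axis
# (the two populations "in registration" for the property being estimated).
x = np.linspace(0.0, 25.0, 5001)
dist_a = np.exp(-0.5 * ((x - mu_a) / sigma_a) ** 2)
dist_b = np.exp(-0.5 * ((x - mu_b) / sigma_b) ** 2)
product = dist_a * dist_b

# The peak of the product matches the reciprocal-variance estimate.
product_peak = x[np.argmax(product)]

print(mle_estimate, mle_var, product_peak)
```

Note that the combined estimate (10.4 here) lies closer to the lower-variance cue, as the abstract describes; with non-Gaussian population statistics the product rule and the simple weighted average can diverge, which is where the model's robustness to large cue conflicts comes in.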