Stochastic resonance (SR) is ubiquitous in nature and refers to the phenomenon that signals which are otherwise sub-threshold for a given sensor can nevertheless be detected, at least partially, by adding noise of suitable intensity to the sensor input. Most objective functions for quantifying such information transmission require knowledge of the signal to be detected. In a previous study we demonstrated that the autocorrelation of the sensor output, a quantity that is always accessible, can be used to quantify and hence maximize information transmission even for unknown and variable input signals. In a further study, implementing a phenomenological computational model, we demonstrated that adaptive SR based on output autocorrelations might be a major processing principle of the auditory system, serving to partially compensate for acute or chronic hearing loss, e.g. due to cochlear damage; the noise necessary for SR corresponds to increased spontaneous neuronal firing rates in early processing stages of the auditory brainstem. We proposed that the neuronal noise crucial for SR may be injected into the auditory system via somatosensory projections. In support of our model, Huang et al. (2017) demonstrated that electro-tactile stimulation applied to the index finger significantly improves speech perception thresholds. Here we argue that somatosensory-input-driven SR in the auditory system may be just one instance of a more general principle: multisensory integration causing SR-like cross-modal enhancement. We hypothesize that this mechanism corresponds to a universal principle of neural computation and cognitive processing.
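The two core ideas above, that noise of intermediate intensity maximizes detection of a sub-threshold signal, and that the output autocorrelation provides a signal-free proxy for information transmission, can be sketched in a minimal simulation. This is a hypothetical toy detector with illustrative parameter values, not the phenomenological model referred to in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_output(noise_sigma, threshold=1.0, amp=0.9, period=100, n=50000):
    """Hypothetical threshold sensor: a sub-threshold sinusoid (amp < threshold)
    plus Gaussian noise, emitting 1 whenever the input crosses the threshold."""
    t = np.arange(n)
    signal = amp * np.sin(2 * np.pi * t / period)
    noisy = signal + rng.normal(0.0, noise_sigma, n)
    return (noisy > threshold).astype(float)

def autocorr_at_lag(x, lag):
    """Normalized output autocorrelation at a given lag; requires no
    knowledge of the input signal itself, only of the sensor output."""
    x = x - x.mean()
    denom = np.dot(x, x)
    if denom == 0.0:  # detector never fired (e.g. vanishing noise)
        return 0.0
    return np.dot(x[:-lag], x[lag:]) / denom

# Sweep noise intensity. The autocorrelation at the signal period is ~0
# without noise, peaks at intermediate noise, and decays again when noise
# dominates -- the characteristic SR signature.
sigmas = [0.02, 0.2, 2.0]
scores = [autocorr_at_lag(detector_output(s), lag=100) for s in sigmas]
```

Maximizing such an output-autocorrelation score over the noise intensity tunes the sensor toward the SR optimum without ever observing the input signal, which is the property exploited by the adaptive-SR scheme summarized above.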