Neural systems exploit information from multiple sources to achieve reliable and flexible information processing, in a near-optimal manner consistent with Bayes' rule. However, the network architecture underlying this optimal integration remains largely unknown, especially when the prior distribution describing the multiple sensory stimuli is only partially correlated. Here, we study a decentralized architecture in which each module is a recurrent neural network processing input from one source, and cross-talk among the modules facilitates integration. To achieve inter-modular communication, each module receives direct input from other sources through cross-links and indirect input from the other modules through reciprocal links. Through theoretical analysis and network optimization, we investigate how multisensory likelihoods and priors are encoded in different components of the network structure. We find that multisensory prior information is encoded in the cross-talk in a distributed manner. Most strikingly, the cross-links and the reciprocal couplings form an antagonistic pair and play complementary roles. Our results reveal the crucial dependence of the optimal network structure on the statistics of multisensory stimuli, especially the prior information.
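As background for the Bayes-optimal integration invoked above, the standard textbook case of fusing two independent Gaussian cues reduces to reliability-weighted averaging; this minimal sketch illustrates that computation only and is not the paper's recurrent network model:

```python
import numpy as np

def integrate_gaussian_cues(mu1, sigma1, mu2, sigma2):
    """Bayes-optimal fusion of two independent Gaussian cues.

    Each cue is weighted by its reliability (inverse variance); the fused
    estimate is more reliable than either cue alone.
    """
    w1 = 1.0 / sigma1**2  # reliability of cue 1
    w2 = 1.0 / sigma2**2  # reliability of cue 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)  # reliability-weighted mean
    sigma = np.sqrt(1.0 / (w1 + w2))        # fused standard deviation
    return mu, sigma

# With equal reliabilities, the fused estimate is the midpoint and the
# variance is halved.
mu, sigma = integrate_gaussian_cues(0.0, 1.0, 2.0, 1.0)
```

In the decentralized architecture studied here, this computation is not performed by a single central unit; instead, the cross-links and reciprocal couplings between modules implement the integration collectively.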