Approximate Decentralized Bayesian Inference

Recent trends in the growth of datasets, and the methods by which they are collected, have led to increasing interest in the parallelization of machine learning algorithms. Parallelization reduces both the memory usage and computation time of learning, and allows data to be collected and processed by a network of learning agents rather than by a single central agent. Decentralized learning, a particular instance of parallelization, is especially relevant to robotic sensor networks in which the network structure varies over time, agents drop out and are added dynamically, and no single agent has the computational or communication resources to act as a central hub during learning. However, decentralizing learning is a difficult problem. Asynchronous communication and computation, the lack of a globally shared state, and potential network or agent failures can all lead to inconsistencies in the model possessed by each agent.

The method proposed in this work takes a different tack from past decentralized learning methods, in that no iteration over the network is required. Each agent computes an approximate mean field variational posterior using only its local dataset, sends and receives statistics to and from other agents in the network asynchronously, and combines the posteriors locally on demand. Because approximate inference breaks symmetries in the model, these symmetries must be accounted for in the combination step. However, rather than reaching a consensus over the network, the agents agree on the model implicitly via an optimization that is run locally on each agent. The proposed method is highly flexible: it can be combined with past streaming variational approximations, agents can share information with only subsets of the network, the network may be dynamic with unknown topology, and the failure of individual learning agents does not affect the operation of the rest of the network.
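The local combination step can be sketched for the simplest case: a one-dimensional Gaussian posterior in a shared parameterization, where the symmetry-breaking correction described above is not needed. Under the standard decentralized Bayesian decomposition, the combined posterior is proportional to the prior raised to the power 1-N times the product of the N local posteriors, which in natural parameters amounts to summing the local parameters and subtracting (N-1) copies of the prior's. The function names and the numbers below are illustrative, not from the paper.

```python
import numpy as np

def gaussian_to_natural(mean, var):
    # Natural parameters of a Gaussian: (mean/var, -1/(2*var)).
    return np.array([mean / var, -0.5 / var])

def natural_to_gaussian(eta):
    # Invert the natural parameterization back to (mean, var).
    var = -0.5 / eta[1]
    return eta[0] * var, var

def combine_posteriors(prior, local_posteriors):
    # Decentralized combination: p(theta | y) ∝ p(theta)^(1-N) * prod_i p(theta | y_i),
    # i.e. in natural parameters eta = sum_i eta_i - (N-1) * eta_prior.
    # Each agent can run this locally once it has received the others' statistics.
    n = len(local_posteriors)
    eta_prior = gaussian_to_natural(*prior)
    eta = sum(gaussian_to_natural(m, v) for m, v in local_posteriors)
    eta = eta - (n - 1) * eta_prior
    return natural_to_gaussian(eta)

# Three agents, each holding an approximate posterior from its own data shard
# (hypothetical values for illustration).
prior = (0.0, 10.0)
local_posteriors = [(1.2, 0.5), (0.8, 0.7), (1.0, 0.6)]
mean, var = combine_posteriors(prior, local_posteriors)
```

Because only the summed natural parameters are needed, each agent can broadcast a fixed-size statistic asynchronously and combine whatever subset it has received so far, which is what makes the on-demand, consensus-free combination possible.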