Probabilistic generative models are a standard Machine Learning approach. Generative methods are interpretable, robust, and flexible, and they can be used for unsupervised, semi-supervised, and supervised learning. Still, they are comparatively rarely applied to current large-scale Computer Vision or Computer Hearing tasks. One main reason is that discriminative methods, such as feed-forward neural networks, are much more efficient at large scales. In my talk, I will first show how elementary generative methods can be scaled using novel variational approximations. Second, I will discuss more complex generative methods that can be scaled to very large data sets with similar approaches. The talk will conclude with applications of the scalable generative methods developed by my research group. Application domains will include visual and bio-medical data, and quantitative comparisons will highlight advantages and disadvantages.