The Applied ML team I am a part of is Facebook's applied research arm. We work on core ML, computer vision, computational photography, and language technologies. We work very closely with Facebook AI Research (FAIR), which pushes the state of the art in these areas; we are complementary in that we focus more heavily on applications. I would like to highlight a couple of recent pieces of research I find very exciting. This is by no means a complete list, and we are not doing this alone but in collaboration with FAIR and the many product teams we partner with.

In computer vision, we have a system that processes every single image and video uploaded to Facebook, totaling well over 1B items per day. We predict the content of an image in order to, for example, generate captions for the blind, automatically detect and take down offensive content, improve media search results, or automate visual captchas, among many other use cases. We use deep convolutional networks with billions of parameters. The interesting thing about these models is the generalizability of their features: we recently shared how the features from these networks are used to generate population density maps from satellite imagery (check out this cool video).
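To make the "generalizability of features" point concrete, here is a minimal toy sketch (not Facebook's actual model, and with made-up filter and head names): one shared convolutional trunk produces a feature vector, and different task heads such as captioning or content moderation are cheap layers reusing those same features.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_trunk(image, filters):
    """Toy convolutional feature extractor: valid 3x3 convs, ReLU, global pooling."""
    h, w = image.shape
    feats = []
    for f in filters:                               # each filter is 3x3
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(image[i:i + 3, j:j + 3] * f)
        feats.append(np.maximum(out, 0).mean())     # ReLU + global average pool
    return np.array(feats)

filters = rng.standard_normal((8, 3, 3))            # shared "pretrained" trunk
image = rng.standard_normal((16, 16))               # stand-in for a real photo
features = conv_trunk(image, filters)               # one forward pass, reused below

# Task-specific heads reuse the same features; only these weights differ per task.
caption_head = rng.standard_normal((5, 8))          # e.g. 5 caption concepts
safety_head = rng.standard_normal((2, 8))           # e.g. safe / not safe

caption_scores = caption_head @ features
safety_scores = safety_head @ features
print(features.shape, caption_scores.shape, safety_scores.shape)
```

The same trunk output feeds both heads, which is why a network trained for image classification can be repurposed for something as different as population density estimation.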

(Cover image for the video on population density estimation.)

There are many interesting research problems the team is tackling: universal vision models using multi-task learning, representation learning (link to paper), large-scale distributed training using Elastic SGD, space-time convolutional networks for videos (link to paper), cascades of networks for faster and better vision models (link to paper), and learning from videos (link to paper).

If you are curious about further details of our work on applied computer vision at Facebook, try asking Manohar Paluri a Quora question!

In language technology, one thing we are trying to do is eliminate language barriers on Facebook. To do this we translate over 2B posts every single day, across over 1,800 language directions representing more than 40 unique languages. We depended on Bing Translate for a while, but have since built and deployed our own technology. Now we are evaluating deep learning for translation, hoping to achieve more human-like translations using neural networks. You can ask Alan Packer a Quora question if you would like to know more about what is going on in our language-technology applied research and product work.
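The neural approach mentioned above can be sketched as an encoder-decoder: the encoder compresses the source sentence into a vector, and the decoder emits target tokens conditioned on it. The toy below is purely illustrative (random untrained weights, an invented two-word vocabulary, a mean-of-embeddings "encoder"); production systems use trained recurrent or attention models over large corpora.

```python
import numpy as np

rng = np.random.default_rng(1)
src_vocab = {"hola": 0, "mundo": 1}           # hypothetical source vocabulary
tgt_vocab = ["<eos>", "hello", "world"]       # hypothetical target vocabulary

d = 4                                         # toy embedding dimension
src_embed = rng.standard_normal((len(src_vocab), d))
out_proj = rng.standard_normal((len(tgt_vocab), d))

def encode(tokens):
    # Mean of source embeddings stands in for a recurrent encoder's final state.
    return np.mean([src_embed[src_vocab[t]] for t in tokens], axis=0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode(context, max_len=5):
    # Greedy decoding: at each step, pick the most likely target token.
    out, state = [], context
    for _ in range(max_len):
        probs = softmax(out_proj @ state)
        tok = tgt_vocab[int(np.argmax(probs))]
        if tok == "<eos>":                    # end-of-sentence marker stops decoding
            break
        out.append(tok)
        state = state + out_proj[tgt_vocab.index(tok)]  # toy state update
    return out

translation = decode(encode(["hola", "mundo"]))
print(translation)
```

With trained weights, the decoder's per-step distribution would assign high probability to fluent continuations, which is where the "more human-like translations" come from.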

In core ML, we focus on researching and shipping large-scale, real-time ML/AI algorithms for some of the biggest ML applications in the world. Whenever a user logs into Facebook, these models are used to rank News Feed stories (1B users every day, 1.5K stories per user per day on average), ads, search results (1B+ queries a day), trending news, and friend recommendations, and even to rank the notifications a user receives or the comments on a post. The Core ML team also builds state-of-the-art text understanding algorithms using deep learning. These algorithms are integrated into the ML platform we've built to facilitate and scale ML from training to model deployment. This platform is used by every team that uses ML in production; to give an idea of how prevalent ML is at Facebook, a bit over 20% of all Facebook engineers (and even some non-engineers) actively use it. Our current wave of research involves deep learning models for event prediction, distributed learning for sparse modeling and deep learning, representation learning for text understanding through convolutional and recurrent nets, and model compression through multi-task learning. If you want to learn more about Core ML at Facebook, ask Hussein Mehanna a question.

This question originally appeared on Quora, the knowledge-sharing network where compelling questions are answered by people with unique insights. You can follow Quora on Twitter, Facebook, and Google+.