Thought Leader

Hi all, I’m building a language model in Thai using the LSTM with dropout from the Lesson 4 notebook. So far I’ve got quite acceptable performance, with perplexity around 36, training on a Thai Wikipedia dump. I have several questions: I set the min count...
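As a quick reference for the perplexity figure mentioned above: language-model perplexity is conventionally the exponential of the average per-token cross-entropy. A minimal sketch, assuming the loss is reported in nats (the loss value 3.58 below is an illustrative number, not from the post):

```python
import math

def perplexity(nll_per_token):
    # Perplexity = exp(mean per-token negative log-likelihood in nats).
    # Lower is better; a uniform model over V tokens has perplexity V.
    return math.exp(sum(nll_per_token) / len(nll_per_token))

# An average loss of ~3.58 nats corresponds to perplexity of roughly 36.
losses = [3.58, 3.58, 3.58, 3.58]
print(round(perplexity(losses), 1))
```

If the framework reports cross-entropy in bits instead of nats, use `2 ** mean_loss` rather than `math.exp`.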

Natural language processing algorithms are often treated as a scary, enigmatic mathematical curiosity rather than as a powerful machine learning or artificial intelligence tool. NLP is a rising category of algorithms that every machine learning engineer should...

This mostly cites papers from Berkeley, Google Brain, DeepMind, and OpenAI from the past few years, because that work is most visible to me. I’m almost certainly missing stuff from older literature and other institutions, and for that I apologize -...

Sebastian Ruder: Deep Reinforcement Learning Doesn't Work Yet: a super-comprehensive blog post on the difficulty of deep reinforcement learning; touches on many obstacles in getting RL to work; by @AlexIrpan

Frequent and ongoing communication with customers and users is key to the success of any business. That’s where tools like Intercom and Zendesk excel by helping companies listen and talk to their customers in a seamless and...

In this article, Uber Engineering introduces our Customer Obsession Ticket Assistant (COTA), a new tool that puts machine learning and natural language processing models in the service of customer care to help agents deliver improved support...

Sebastian Ruder: 3/ Regarding transfer learning for RL, I was most impressed by self-supervised imitation learning, in particular the approach by Sermanet et al. using time-contrastive networks, where they train a robot on demonstrations provided by a human.
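For context on the time-contrastive idea: Sermanet et al. train embeddings with a triplet-style objective, pulling together frames captured at the same moment and pushing apart frames from different moments. A minimal sketch of that kind of margin loss on precomputed embedding vectors (the function name and margin value are illustrative, not from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared-distance triplet margin loss: the anchor should be closer
    # to the positive (same time step) than to the negative (different
    # time step) by at least `margin`.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

In the time-contrastive setup, the anchor and positive would be embeddings of simultaneous frames (e.g. from two camera viewpoints), and the negative an embedding from a temporally distant frame of the same video.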

Sebastian Ruder: 2/ Many interesting developments in domain adaptation in vision from people at Berkeley, OpenAI, Google, and others on how to learn better from simulation using domain randomization, GANs, etc. Results on robotic grasping have been quite impressive, e.g.