This show is part of a series that I’m really excited about, in part because I’ve been working to bring it to you for quite a while now. The series focuses on a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman, and others. In this episode I’m joined by Dario Amodei, Team Lead for Safety Research at OpenAI. While in San Francisco a few months ago, I spent some time at the OpenAI office, where I sat down with Dario to chat about the work happening at OpenAI around AI safety.

Dario and I dive into the two areas of AI safety that he and his team are focused on: robustness and alignment. We also touch on his research with the Google DeepMind team, the OpenAI Universe tool, and how human interactions can be incorporated into reinforcement learning models. This was a great conversation, and like the other shows in this series, it’s a nerd alert show!

Thanks to our Sponsor

Support for this OpenAI Series is brought to you by our friends at NVIDIA, a company that is also a supporter of OpenAI itself. If you’re listening to this podcast, you already know about NVIDIA and all the great things they’re doing to support advancements in AI research and practice. What you may not know is that the company has a significant presence at the NIPS conference going on this week in Long Beach, California, including four accepted papers. To learn more about NVIDIA’s presence at NIPS, head on over to twimlai.com/nvidia, and be sure to visit them at the conference.

Comments

Regarding feedback from users and preferences: something that might be worth exploring is a way to massively scale the extraction of user preferences by showing people clips while recording their facial expressions. Those expressions could later be analyzed using image recognition. MRI also crossed my mind, but that’s probably too expensive and can’t be scaled properly. The idea here is to avoid describing to users what they should look for and simply rely on their emotional reactions to what looks right and what doesn’t. Plus, it should be scalable using Amazon Mechanical Turk or a similar service.
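The preference-learning setup this comment builds on (learning what humans want from comparisons between clips, as discussed in the episode) can be sketched as a tiny reward model trained on pairwise preferences. The following is an illustrative Bradley–Terry-style sketch, not OpenAI’s actual implementation; the feature vectors and the labeling rule are hypothetical stand-ins for clips and human (or facial-expression-derived) judgments.

```python
import math
import random

def predict_preference(w, a, b):
    """Bradley-Terry probability that clip `a` is preferred over clip `b`,
    given linear reward weights `w` over clip feature vectors."""
    reward_a = sum(wi * xi for wi, xi in zip(w, a))
    reward_b = sum(wi * xi for wi, xi in zip(w, b))
    return 1.0 / (1.0 + math.exp(reward_b - reward_a))

def fit_reward_model(comparisons, dim, lr=0.5, epochs=200):
    """Fit reward weights from (clip_a, clip_b, a_preferred) labels
    by gradient ascent on the logistic log-likelihood."""
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b, a_preferred in comparisons:
            p = predict_preference(w, a, b)
            err = (1.0 if a_preferred else 0.0) - p
            for i in range(dim):
                w[i] += lr * err * (a[i] - b[i])
    return w

# Toy data: the (hypothetical) human preference depends only on feature 0,
# standing in for whatever quality a labeler reacts positively to.
random.seed(0)
comparisons = []
for _ in range(200):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    comparisons.append((a, b, a[0] > b[0]))

w = fit_reward_model(comparisons, dim=2)
# The learned weight on feature 0 should dominate feature 1.
```

The learned reward function could then serve as the training signal for a reinforcement learning agent, which is the core idea of the human-preference work the episode covers.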