Remember that

Listen to

Read that

Our approach, domain randomization, learns in a simulation which is designed to provide a variety of experiences rather than maximizing realism. This gives us the best of both approaches: by learning in simulation, we can gather more experience quickly by scaling up, and by de-emphasizing realism, we can tackle problems that simulators can only model approximately.
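The core idea is easy to sketch in code: before each training episode, re-draw the simulator's physical parameters from broad ranges, so the policy has to work across many plausible worlds instead of one carefully calibrated one. A minimal sketch below; the parameter names, ranges, and `simulator` interface are illustrative assumptions, not OpenAI's actual setup.

```python
import random

# Hypothetical randomization ranges (illustrative only): each value is a
# scale factor applied to the simulator's nominal physical parameter.
PARAM_RANGES = {
    "object_mass": (0.5, 1.5),
    "surface_friction": (0.7, 1.3),
    "actuator_gain": (0.8, 1.2),
}

def sample_physics():
    """Draw a fresh set of physical parameters for one episode."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def run_episode(policy, simulator):
    """Re-randomize the physics before every episode, so the policy is
    rewarded for robustness across worlds rather than for exploiting one
    'realistic' setting (assumed simulator API, for illustration)."""
    simulator.set_physics(sample_physics())
    obs = simulator.reset()
    done = False
    while not done:
        obs, done = simulator.step(policy(obs))
```

A policy trained this way has, in effect, seen many slightly different robots, which is what makes the jump to the imperfectly-modeled real one plausible.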

…

By building simulations that support transfer, we have reduced the problem of controlling a robot in the real world to accomplishing a task in simulation, which is a problem well-suited for reinforcement learning. While the task of manipulating an object in a simulated hand is already somewhat difficult, learning to do so across all combinations of randomized physical parameters is substantially more difficult.

They used the same algorithms and training code as OpenAI Five, albeit in a different environment and with different parameters. They’ve done it before, but this is cooler.

This should be kept in mind for future projects, too:

Generally, we found better performance from using a limited set of sensors that could be modeled effectively in the simulator instead of a rich sensor set with values that were hard to model.

“Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement”

Not sure if 3D-printed diffractive deep neural networks are the future of AI, but being able to run DNNs on custom hardware sure looks cool.

Fin

P.S. You may have noticed the lack of a letter last week. This is by design :), as from now on, Quotes, Songs, and Machine Learning will appear on a bi-weekly basis, due to time constraints on my part. Do let me know if that’s an issue.