OpenAI has gone against its standard practice of releasing a fully trained model and has instead released a smaller model for experimentation, stating that the decision was made out of fear that the full model would be used by people with malicious intent. Do people think that technology like this will eventually end up in the hands of malicious actors anyway, and if so, is this just an exercise in delaying the inevitable?

We present a novel image editing system that generates images as the user provides free-form masks, sketches, and color as inputs. Our system consists of an end-to-end trainable convolutional network. Contrary to existing methods, our system wholly utilizes free-form user input with color and shape. This allows the system to respond to the user's sketch and color input, using it as a guideline to generate an image. In this work, we trained the network with an additional style loss, which made it possible to generate realistic results despite large portions of the image being removed. Our proposed network architecture, SC-FEGAN, is well suited to generating high-quality synthetic images from intuitive user inputs.
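The abstract mentions training with an additional style loss. The paper's exact formulation isn't given here, but a common way to define a style loss (popularized by neural style transfer) compares Gram matrices of feature maps from the generated and ground-truth images; a minimal NumPy sketch of that idea, with hypothetical function names, might look like this:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map from some network layer.
    # Returns the (C, C) Gram matrix of channel-wise correlations,
    # normalized by the number of elements.
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(feat_gen, feat_gt):
    # L1 distance between Gram matrices of generated vs. ground-truth
    # features -- one common choice; the paper may differ in detail.
    return np.abs(gram_matrix(feat_gen) - gram_matrix(feat_gt)).mean()
```

In practice the features would come from several layers of a pretrained network (e.g. VGG), and the per-layer losses would be summed and weighted against the reconstruction and adversarial terms.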

Does anyone here think that control theory is actually useful for ML research?

Control engineering folks seem to use much of the same math that ML people use. The difference is that they mostly do online, low-dimensional (<10) work, but with strong theoretical guarantees, and they strive for convexity whenever possible.

There are recurring similarities between the two fields, but little cross-pollination of research. To name a few: