In this episode we hear from Siddha Ganju, data scientist at computer vision startup Deep Vision. Siddha joined me at the AI Conference a while back to chat about the challenges of developing deep learning applications “at the edge,” i.e. those targeting compute- and power-constrained environments.

In our conversation, Siddha provides an overview of Deep Vision’s embedded processor, which is optimized for ultra-low power requirements, and we dig into the data processing pipeline and network architecture process she uses to support sophisticated models on embedded devices. We explore the hardware and software capabilities and restrictions typical of edge devices, how she uses techniques like model pruning and compression to create embedded models that deliver the needed performance in resource-constrained environments, and use cases such as facial recognition, scene description, and activity recognition. Siddha’s research interests also include natural language processing and visual question answering, and we spend some time discussing the latter as well.

Giveaway Update!

Thanks to everyone who took the time to enter our #TWiML1MIL listener giveaway! We sent out an email to entrants a few days ago, so please be on the lookout for that. If you haven’t heard from us yet, please reach out to us at team@twimlai.com so that we can get you your swag!