I did something similar with TesseractOcr: I added an OOB tag to ProgramAB that responds to "Read This". It then takes a photo and processes it, but instead of exporting to a text file I just set the result to a string in Python and pass it to text-to-speech.
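The middle step of that pipeline (OCR string → something speakable) can be sketched in plain Python. This is just my guess at the glue code; the camera capture and the actual TesseractOcr/TTS service calls are MRL-specific and omitted here:

```python
import re

def ocr_text_to_speech_input(raw):
    """Collapse OCR line breaks and stray whitespace into a single
    sentence-like string suitable for handing to a text-to-speech service."""
    return re.sub(r"\s+", " ", raw).strip()

# In MRL's Python service, `raw` would come from the TesseractOcr service
# after the OOB tag fires; here we just simulate it with a literal.
raw = "Hello\nWorld,  this is\na  test."
print(ocr_text_to_speech_input(raw))  # Hello World, this is a test.
```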

I look forward to doing this same thing with Junior very soon. Does it find more than a single item?

Prediction took 0.04 s with a GTX 960, so I think it runs well even on lower-end GPUs; it takes more than 4 seconds on CPU only. Keep in mind this test used the yolo-voc definition, not tiny-yolo-voc. Tiny YOLO is a lower-quality definition file meant for GPUs with 1 GB of memory. I will use the coco-voc or yolo-9000 definition file, which requires 4 GB of video memory.

Very cool, what you've done here! From what I see in your screenshot, it looks like you're calling an external program after saving the image to disk. This should work for one-off requests, but for anything faster or more frequent, you'll want to do everything in-memory. I'll be doing a lot of work on this for Nixie, using mrlpy as the interface between YOLO and MRL. Once this is implemented, one should be able to simply start a YOLO service and subscribe to topics pushing object classifications. Until this is finished, however, I'll be using your solution. To aid in using the data from YOLO, you could edit the example code and have it output a comma-separated list of classifications that could easily be parsed by MRL, instead of the human-readable text that it outputs now.
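To illustrate that last suggestion: a minimal sketch of converting darknet's usual human-readable console output (one "label: NN%" line per detection, which is an assumption about the exact format here) into a comma-separated list that MRL could split trivially. The function name is mine:

```python
def detections_to_csv(darknet_output):
    """Turn darknet-style lines like 'dog: 82%' into 'dog:82,...'
    so a receiving service can parse it with a single split(',')."""
    labels = []
    for line in darknet_output.splitlines():
        # Skip the timing header and anything that isn't a detection line.
        if ":" in line and line.rstrip().endswith("%"):
            label, conf = line.rsplit(":", 1)
            labels.append("%s:%s" % (label.strip(), conf.strip().rstrip("%")))
    return ",".join(labels)

sample = ("data/dog.jpg: Predicted in 0.040 seconds.\n"
          "dog: 82%\n"
          "bicycle: 85%\n"
          "truck: 64%")
print(detections_to_csv(sample))  # dog:82,bicycle:85,truck:64
```

On the MRL side the string can then be split on commas and colons without any natural-language parsing.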

I'm curious, what specs does the motherboard have? I'd like to know how much power you can cram into one of these things. I'm trying to decide which board form factors I should be looking at; the STX looks just right, though! :)