Yesterday, Jeff pointed me to an application Eric has worked on, a dice counter using a dual-core processor from Analog Devices. I'd like to ask: was the two-core feature of the processor key to the project? Also, if possible, did you use OpenCV for that application?

Early next year we are hoping to publish a white paper describing the process of developing this demo. It will appear on the Embedded Vision Alliance web site. For today I'll just point out that the ADI chip actually has three cores: two Blackfin CPU/DSP cores and one "PVP" coprocessor for vision tasks.

Thanks, everyone, for attending this class series. I hope you found it worthwhile. For additional (free) embedded vision educational resources, including discussion forums, please visit the Embedded Vision Alliance at www.Embedded-Vision.com.

Kind of levels the "multi-core" playing field, brings back an element of "portable". Haven't really played with it enough to judge it, but thought I'd mention it as YOU are much more likely to find good use for it.

flared0one, it might be easier to use GPS, accelerometer, etc. to ascertain velocity ;-) But ADAS (advanced driver assistance systems) use cameras for lots of purposes...including reading speed limit signs as you pass by them and telling you if you're going too fast ;-)

So if I was using a video camera in a car, I could actually do something VERY similar to how a laser mouse works -- [A] find the CURRENT visual features, compare to the LAST set of features, determine how things moved, add any NEW "best features to use", then loop back to [A]... if you have a calibrated imager, you could conceivably track your changing location, determine your velocity, etc -- just like a car-sized mouse... LOL.
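That mouse-style loop can be sketched in a few lines. This is only a toy illustration, assuming a single global translation between frames and using brute-force NumPy block matching instead of real feature tracking (a practical system would use something like OpenCV's pyramidal Lucas-Kanade tracker); the function name and synthetic frames are made up for the example:

```python
import numpy as np

def estimate_shift(prev, curr, patch=8):
    # Take a small "feature" patch from the centre of the previous frame
    # and find where it best matches in the current frame (SSD search).
    h, w = prev.shape
    py, px = h // 2 - patch // 2, w // 2 - patch // 2
    tmpl = prev[py:py + patch, px:px + patch]
    best, best_pos = None, (0, 0)
    for y in range(h - patch):
        for x in range(w - patch):
            ssd = np.sum((curr[y:y + patch, x:x + patch] - tmpl) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y - py, x - px)
    return best_pos  # (dy, dx): how the scene moved between frames

# Synthetic demo: a frame with a bright blob, shifted down 2 and right 3
frame0 = np.zeros((32, 32))
frame0[14:18, 14:18] = 1.0
frame1 = np.roll(np.roll(frame0, 2, axis=0), 3, axis=1)
print(estimate_shift(frame0, frame1))  # → (2, 3)
```

With a calibrated imager you would then convert that per-frame pixel displacement into metres and divide by the frame interval to get velocity, exactly the "car-sized mouse" idea.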

If I remember from my Robotic Vision course, the detection algorithm was referred to as an "Expert" system, and each of the "A, B, C, D" feature types were referred to as individual "experts". And if you started with a wide range of random "features", you could figure out over time (while generating that "cascade", approximately) which features didn't matter, which ones to keep and use.
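A toy sketch of that "experts" idea, with made-up threshold classifiers standing in for the feature types: generate random "experts", measure each one's error on labelled samples, discard the ones that do no better than chance, and keep the best. (All names and data here are invented for illustration; a real cascade, e.g. Viola-Jones, selects weighted Haar-like features with boosting.)

```python
import random

# Labelled 1-D samples: value, class (1 if above 0.5, else 0)
samples = [(x, 1 if x > 0.5 else 0) for x in (0.1, 0.2, 0.4, 0.6, 0.8, 0.9)]

random.seed(0)
experts = [random.random() for _ in range(20)]  # random threshold "features"

def error(thresh):
    # Fraction of samples the expert "predict 1 if x > thresh" gets wrong
    return sum(1 for x, label in samples if (x > thresh) != label) / len(samples)

kept = [t for t in experts if error(t) < 0.5]   # drop features that don't matter
best = min(kept, key=error)                     # the strongest single expert
print(len(kept), round(error(best), 3))
```

The cascade idea then chains such kept experts so that easy negatives are rejected by the first (cheapest) stages and only promising regions reach the later ones.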

Ah, good clarification -- glad it IS working for you. Thought you were groaning that the update seemed to break stuff.

flared0one, what I meant is that Chrome is working fine for me, even with the latest Flash plugin installed, whereas it seems that Firefox and Safari folks need to downgrade to an older Flash version in order to have audio success

Face detection/recognition is a challenging problem to solve ;-) Android 4.x includes built-in face recognition support for homescreen unlock purposes, and Google claims that it works even with glasses on, but my personal testing results have been hit-and-miss

Oddly, Chrome (at least my copy, on Mac OS X) seems to have the latest Adobe Flash plugin version pre-installed. So it doesn't seem to specifically be the Flash plugin, per se, versus an odd browser-specific interaction

We haven't tried the Virtualbox virtual machine manager product, but it's open source and free. Formerly a Sun product, now owned by Oracle. I believe, but am not positive, that it will import VMware virtual machines

Vision apps: only personal interest. Will be starting at ground level with an Arduino-based frame grabber on a breadboard with open-source libraries. A simple circuit that also overlays MCU-generated video back onto the live video.

The streaming audio player will appear on this web page when the show starts at 2pm eastern today. Note however that some companies block live audio streams. If when the show starts you don't hear any audio, try refreshing your browser.

Focus on Fundamentals consists of 45-minute on-line classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts and all are archived. So if you can't attend live, attend at your convenience.