@JEFF: I found a light at the end of the tunnel for the question on the Shannon/Nyquist interval. In a compression environment it is possible to reconstruct an analog image from digital sub-Nyquist/Shannon samples. In addition, the Shannon/Nyquist interval is a sufficient condition, but not a necessary one, in discrete signal processing. That is cool news.
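On the classical side of this, here's a minimal NumPy sketch of the standard Whittaker-Shannon result being referenced: a sine sampled above its Nyquist rate can be reconstructed by sinc interpolation. The frequencies, sample counts, and grid sizes below are arbitrary demo values, not figures from the discussion.

```python
import numpy as np

# Toy Nyquist demo: a 5 Hz sine sampled at 50 Hz (well above the 10 Hz
# Nyquist rate) is reconstructed on a fine grid by sinc interpolation.
f_sig = 5.0   # signal frequency, Hz (arbitrary demo value)
fs = 50.0     # sampling rate, Hz; fs >= 2 * f_sig, so Nyquist is satisfied
n = np.arange(0, 50)                     # 50 samples = 1 second of data
samples = np.sin(2 * np.pi * f_sig * n / fs)

t_fine = np.linspace(0, 1, 500, endpoint=False)  # dense reconstruction grid
# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)
# (np.sinc is the normalized sinc, sin(pi*x)/(pi*x), which is what we need)
recon = samples @ np.sinc(fs * t_fine[None, :] - n[:, None])

truth = np.sin(2 * np.pi * f_sig * t_fine)
max_err = np.max(np.abs(recon - truth))
print(max_err)  # small away from the edges of the finite sample window
```

With a finite number of samples the reconstruction is only approximate near the window edges; interior points match the original sine closely, and the reconstruction is exact at the sample instants themselves.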

@JEFF: Could you suggest an inexpensive way to detect 3D position in a limited space?

It really depends on what you mean by "inexpensive" and "detecting 3D position". E.g., how precisely do you need to measure the position? What's the size of the object(s) you want to detect? Are they moving? If so, how fast? What's the lighting? Outdoors? Lots of variables here, and they all factor into the trade-offs among different sensor technologies and algorithmic approaches. Tune in tomorrow for an intro to 2D and 3D sensors.

I WILL be hoping to hear if you're familiar with the Leap sensor: 0.01 mm repeatable position precision, between 32 and 200 frames per second, and it uses between 2% and 5% of a generic PC's CPU time running the algorithms that actually extract position data from the sensor's data stream...

The Leap sensor is very intriguing. So far, however, Leap Motion has not published details of how it works, so we won't be covering it this week. We're looking forward to hearing more about it ourselves.

Range limitation is the chief drawback (at the moment); their initial product is designed as a desktop-worksurface solution and only monitors an eight-cubic-foot volume (which STILL manages to yield around 3.25 gigabits of position data, if you do a ten-micrometer grid across the back surface of an approximately two-foot-radius field of view)... But that will be tomorrow's discussion, re "Fundamentals of Image Sensors", right?

Excellent start! In the next sessions, will you talk about how to select the lighting system for your vision application?

Good question. Unfortunately, due to time constraints, we're not going to cover lighting in this series. I suggest you post your lighting questions on the discussion forum on www.Embedded-Vision.com, and we'll do our best to get some constructive responses.

I'll be hoping for recommendations re "tool-box" applications that I can open up and apply to images stored on my PC -- stuff like edge-detect and segmentation, things that let me try out approaches without having to waste huge amounts of coding time only to find "well THAT doesn't do what I expected/needed"...

There are lots of options. We'll be talking about one free option, OpenCV, on Thursday and Friday. You may also want to check out the MATLAB vision toolbox, and National Instruments LabVIEW Vision.
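For a feel of what's under the hood of those toolbox calls, here's a from-scratch Sobel edge-detect sketch in plain NumPy. OpenCV (cv2.Sobel) or the MATLAB toolbox would do this in one call; the tiny synthetic test image is my own example, not anything from the class.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))  # valid-region output (no padding)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = np.sum(patch * kx)   # horizontal gradient response
            gy = np.sum(patch * ky)   # vertical gradient response
            out[y, x] = np.hypot(gx, gy)
    return out

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_magnitude(img)
print(edges)  # strong response in the two columns straddling the step
```

This is deliberately the slow, loop-based version so the kernel math is visible; real toolbox implementations vectorize or run separable convolutions.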

The amount of data coming from the image sensor is proportional to the frame rate and the resolution. The higher the frame rate (and/or resolution), the higher the data rate. The higher the data rate, the more processing power required.
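A back-of-envelope calculation illustrates that proportionality; the formats and bit depth below are example choices of mine, not figures from the class.

```python
def raw_data_rate_mbps(width, height, fps, bits_per_pixel=8):
    """Uncompressed sensor data rate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

# Modest resolution and frame rate vs. high resolution and frame rate:
qvga = raw_data_rate_mbps(320, 240, 15)    # QVGA at 15 fps
hd   = raw_data_rate_mbps(1920, 1080, 60)  # 1080p at 60 fps
print(qvga, hd)  # 9.216 Mbps vs 995.328 Mbps -- a ~108x difference
```

Scaling resolution and frame rate together multiplies the data rate, which is why the processing budget grows so quickly.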

@Jeff Just starting with OpenCV. Any suggestions for enhancing low-light situations? Any good technical articles?

Low light is a real challenge -- if the sensor can't see the scene well, it's tough to interpret what's going on in the scene. One solution in some applications is to add lighting -- which is becoming much more practical now due to LED lighting options. Another is to consider a different kind of sensor with better sensitivity, or front-end processing to improve the quality of the captured images.
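As a minimal sketch of one simple front-end trick, here's gamma correction in plain NumPy; in OpenCV you might instead apply this same lookup table with cv2.LUT, or use CLAHE via cv2.createCLAHE. The "underexposed" frame is synthetic, just for the demo.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Brighten an 8-bit image; gamma < 1 lifts the darkest values the most."""
    # Precompute a 256-entry lookup table, then index the image through it.
    table = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return table[img]

# Synthetic underexposed frame: pixel values clustered near black.
rng = np.random.default_rng(0)
dark = rng.integers(0, 40, size=(4, 4), dtype=np.uint8)
bright = gamma_correct(dark, gamma=0.5)
print(dark.mean(), bright.mean())  # mean brightness goes up
```

Gamma correction only stretches what the sensor captured -- it can't recover detail lost to noise, which is why better sensors or added lighting are often the stronger fixes.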

Great presentation. How far down can vision be scaled? Within range of high-end MCUs?

Excellent question. Jitendra Malik of U.C. Berkeley, a luminary in vision research, told me that people often overestimate the resolution required for vision algorithms. Some basic functions, like face detection, can be performed with fairly low resolution. And if you don't have fast motion, you may be able to get by with a low frame rate, too. Low resolution x low frame rate = low data rate, and now we're getting into the realm of things that can be done on relatively low-end processors -- even MCUs in some cases. Check out the CMUcam as an example of this.

What would be the best low cost tool to start development of embedded vision application?

Great question. There are many options. One of the most popular environments for vision algorithm/application development on the PC is OpenCV. On the Embedded Vision Alliance website, you can download a free OpenCV kit that will get you up and running quickly -- all you need is a PC, a webcam, and the free VMware player.

I don't do vision products myself, but my company does. One person actually designs vision algorithms. This class will give me better insight into what goes into a vision system for embedded products.

Seems to be a problem with browsers or the operating system. Safari on OS X Lion will not play back audio. Running IE9 on a Parallels guest OS works fine. So it does not look like it's the Flash version per se.

(remind me not to do THAT again, during the presentation) 257, FWIW... ("MOST of them quality posts"??)

When I was working at a company using industrial-strength lasers to mark barcodes, etc. on metal, PC boards, and so on, field engineers having to fly out and reFOCUS the stupid lasers was THE most common (and expensive) remediation required. "Barrel/pincushion" distortion is NOT trivial, if you look at the optics.

11.4.402.265

I just signed up for these classes so I can get the Slide Deck, because I have never been able to hear the audio (IT is blocking the radio site). This is the third Design News / Digikey class I have missed because of lack of access to the audio.

I've never had a problem with audio until today. I had to switch from Firefox 14.01 to Internet Explorer. With Firefox I just got "BlogTalk Radio" then "Buffering". Tried refreshing, exiting and logging in again, etc.

I have briefly used the Kinect SDK, but only as far as the example codes.

I have interest in gesture recognition applications, both in custom embedded hardware and in smartphones. The algorithms I would be interested in are gesture identification and tracking in real time video.

I still cannot get any audio for any of these classes because my IT department blocks the site where the audio comes from. I have been trying for six weeks to get IT to unblock it, to no avail. Any chance Digikey could provide another method of getting the audio so I could attend any of these classes? Right now, I can only look at the Slide Deck.

Fairly familiar with Kinect, have researched it, looked at length at Microsoft's SDK; not specifically planning to use it. I'm signed up as a wanna-be developer for the Leap Motion "crowd-sourced" development project. I've been looking at OpenCV, interested in what tools you can point out.

Anyone not getting audio, you MAY want to try opening a different browser window -- I've found Chrome tends to work without hiccups. Or (what was it?) F11, perhaps?

Hey, guys! I'm ba-a-ack! I'll be very curious to see if Jeff is familiar at all with the latest CV product almost-on-the-market, coming from Leap Motion! Saw the Kinect in his slide deck -- the Leap device is about 200 times more precise... But I'll keep quiet (mostly).

The streaming audio player will appear on this web page when the show starts at 2pm Eastern today. Note, however, that some companies block live audio streams. If you don't hear any audio when the show starts, try refreshing your browser.


Focus on Fundamentals consists of 45-minute on-line classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts and all are archived. So if you can't attend live, attend at your convenience.