Do you have any comments on using neural networks for vision applications?

Sorry, no. I haven't used neural networks since grad school, a loooong time ago. However, I know that some companies are using algorithms (and in some cases hardware architectures) that are inspired by the human visual system. It's an intriguing idea, considering how powerful the human visual system is.

If my application has to recognize the size, color, and texture of tomatoes, is it suitable to use the EVS from National Instruments, which implements a CPU in an FPGA, or can I use a lower-cost processor?

Much will depend on rates: how quickly are things moving, how quickly do you need the result, and how many tomatoes are in one frame. Depending on these and other parameters, a CPU may be adequate (perhaps with GPU assistance), or you might need more oomph.
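To make the "rates" point concrete, here is a back-of-envelope throughput estimate. The camera resolution, frame rate, and operations-per-pixel figures are purely illustrative assumptions, and `pixel_rate` is a hypothetical helper, not part of any vendor toolkit:

```python
# Back-of-envelope pixel-throughput estimate (all numbers are illustrative).
def pixel_rate(width, height, fps):
    """Pixels per second the processor must ingest."""
    return width * height * fps

# Hypothetical tomato-sorting setup: VGA camera at 30 frames/s.
rate = pixel_rate(640, 480, 30)     # ~9.2 Mpixel/s
# If the algorithm needs ~50 operations per pixel, the raw compute load is:
ops_per_second = rate * 50          # ~461 Mops/s
print(rate, ops_per_second)
```

Plugging in your own frame rate and algorithm cost gives a quick first read on whether a plain CPU is in the right ballpark.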

@zogoto3000: the issue with Firefox and audio IS the latest Flash. Downgrading Flash a release or two (to 11.3.300.27x) resolved the audio issues for me. But now IE has no Flash, since the process starts with uninstalling Flash.

Alex, size and color would not be a problem for an SoC (ARM). I am not sure what you mean by texture.
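As a rough illustration of why size and color are easy, here is a minimal NumPy sketch that estimates both with simple channel thresholding on a synthetic image. The thresholds and the `measure_red_object` helper are assumptions for demonstration; a real system would calibrate against actual camera data:

```python
import numpy as np

# Sketch: estimate the size and average color of a red object (e.g. a tomato)
# in an RGB image by simple channel thresholding. Thresholds are illustrative.
def measure_red_object(img):
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    mask = (r > 150) & (g < 100) & (b < 100)   # crude "red" test
    area_px = int(mask.sum())                  # size proxy: pixel count
    mean_rgb = img[mask].mean(axis=0) if area_px else None
    return area_px, mean_rgb

# Synthetic 100x100 test image: black background with a 20x20 "tomato".
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:60, 40:60] = (200, 30, 30)
area, color = measure_red_object(img)
print(area)   # 400 pixels
```

Texture, by contrast, usually means local statistics (e.g. variance or gradient patterns within the masked region), which is heavier but still tractable on an SoC.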

One temptation with mobile processors is the possibility of meshed processor arrays, or some kind of network-based offload...

Yes, this has potential. Also, it's interesting to think about what can be done by combining image data from multiple mobile devices. Check out this amazing project, for example: http://grail.cs.washington.edu/rome/

@artem... I had seen the board (ESC desk, Xilinx) with an ARM Cortex-A9 processor... I was looking out for voice processing... it is some really cool stuff... I haven't done much experimentation on it... want to try it out...

Agreed, the ease of use is a big question. Right now I'm evaluating Zynq with the goal of building a next-generation camera for computational photography and computer vision. It'll be the next version of the Frankencamera, if you are familiar with it: http://graphics.stanford.edu/papers/fcam/

@jeff: among the four processor choices you presented, what is the most cost-efficient way to start programming embedded vision applications?

That depends on what you mean by cost-efficient. Are you talking about the cost of stuff you have to buy to get started? The effort required to do the development? Or the cost of producing your product using that processor?

Your slide talks about implementing a CPU in an FPGA, but there is a new class of FPGA with an embedded ARM Cortex-A9 core, which in my mind is one of the best options for embedded vision: Zynq from Xilinx and HPS from Altera. Have you looked at those chips?

I'm glad you asked! I wanted to mention Zynq but ran out of time. I think the combination of a high-performance CPU subsystem integrated with the FPGA on one chip is very promising for many embedded vision applications. The key challenge will be making it easy to use.

The BDTI OpenCV Executable Demo Package is an easy-to-use tool which allows anyone with a Windows computer and a web camera to experiment with some of the algorithms in OpenCV v2.3. After downloading the installer zip file, double-click on the zip file to uncompress its contents, then double-click on the setup.exe file.
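To give a flavor of the kind of algorithm such a demo exercises, here is a tiny NumPy-only Sobel edge detector; it is a stand-in sketch (not BDTI's or OpenCV's actual code) of one classic vision primitive, written without OpenCV so it runs anywhere Python does:

```python
import numpy as np

# Minimal Sobel edge-magnitude computation in NumPy, illustrating the kind
# of algorithm (edge detection) found in OpenCV-style demos.
def sobel_magnitude(gray):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h, w))
    g = gray.astype(float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * kx).sum()   # horizontal gradient
            gy = (patch * ky).sum()   # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

# A vertical step edge should produce a strong response at the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 255
edges = sobel_magnitude(img)
print(edges[4, 4] > 0)   # True: strong response at the step
```

OpenCV provides the same operation (and far more) as optimized built-ins; the demo package simply lets you try them interactively on live camera input.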

Mobile devices generally have TIGHT restrictions on usage and usually don't have much excess space available -- keeping people from stomping on available memory, etc., can be a problem... The app-dev space IS pretty interesting -- check out "String", among others, re AR video overlays...

Suggestion for @Ann -- I'm finding it would be useful (simplify what I'm doing in the background) if the PDF could include some active links (opening into a new tab/page) with samples of an ASSP for example, or a GPGPU, etc -- hardware references, since I'm online and slightly info-starved while I'm listening to voice (which is great, just slower than I can read)... Too bad that URLs embedded here aren't clickable (unless people use the "html" link under the chat window?)...

No specific "typically use"; yes, have implemented; memory usage, accessing the program data-space...

Vision application: starting with a data stream of not-exactly-image-data and generating connected-surfaces, looking to recognize object position-and-orientation info within the field-of-view of a sensor.

Mac users: yesterday's looping issues aside, I (Mac OS 10.7 Lion) was more generally unable to hear either yesterday's or Monday's lecture via up-to-date Safari or Firefox with the up-to-date Flash plugin. However, Google Chrome worked (and today is once again working) fine for me.

I also observed from the comments that Windows folks running Firefox were having problems. I'd suggest either IE or Chrome in this case; both seemed to work for others.

Looking forward to learning how low you can go in obtaining vision information. Wondering if anything is within reach of microcontrollers, or if it needs a multi-chip solution.

It is definitely possible to do some simple vision processing on a microcontroller. It all depends on your data rate (resolution x frame rate) and the complexity of your algorithms. If you just wanted to do face detection at close range, for example, and could tolerate latency of perhaps one second, that would likely be doable on an MCU. Most vision functions will require more processing power, however. Also, interfacing image sensors to MCUs can be a challenge.
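A crude feasibility check makes the "it all depends" concrete. All figures below (resolution, operations per pixel, MCU throughput) are illustrative assumptions, not benchmarks of any specific part, and `frame_time_s` is a hypothetical helper:

```python
# Rough feasibility check for vision on a microcontroller.
# All numbers are illustrative assumptions, not benchmarks.
def frame_time_s(width, height, ops_per_pixel, mcu_mips):
    """Seconds to process one frame on an MCU rated at mcu_mips
    million operations per second (very crude model)."""
    total_ops = width * height * ops_per_pixel
    return total_ops / (mcu_mips * 1e6)

# Example: QQVGA (160x120), ~100 ops/pixel, on a 200 MIPS MCU.
t = frame_time_s(160, 120, 100, 200)
print(t)   # ≈ 0.0096 s per frame
```

At that rate a one-second latency budget leaves large headroom, which is why a small, low-resolution task like close-range face detection can fit on an MCU while richer vision functions quickly outgrow it.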

I apologize again for the problem with the streaming audio during yesterday's class. I hope that by now, everyone has had a chance to listen to the archived stream. However, if you haven't had a chance to do so, it's OK to wait until after today's class -- yesterday's session and today's are independent.

I'm thinking about "time of flight" for vision systems, and it seems it should use a reference pulse of light to measure the time of flight.

Do these systems use an expanded-time sampling system to stretch 1 ft/ns (about 2 ns per foot of range, round trip, in RADAR terms) into something much more measurable?

There are two methods used to measure TOF. The first measures the TOF directly, using per-pixel counters clocked in the GHz range. The second modulates the pulse at RF frequencies and measures TOF via the phase difference between the outgoing and incoming light.
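For the second (phase-shift) method, the standard relationship maps measured phase to distance as d = c * dphi / (4 * pi * f_mod), where the 4*pi accounts for the round trip. A minimal sketch, with the 20 MHz modulation frequency chosen purely as an example:

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Indirect (phase-shift) time-of-flight: with modulation frequency f_mod (Hz),
# a measured phase difference dphi (radians) corresponds to distance
#   d = c * dphi / (4 * pi * f_mod)
# The factor of 4*pi accounts for the round trip (2*pi per cycle, times 2).
def phase_to_distance(dphi, f_mod):
    return C * dphi / (4 * math.pi * f_mod)

# Example: 20 MHz modulation; unambiguous range = c / (2 * f_mod) ≈ 7.5 m.
d = phase_to_distance(math.pi / 2, 20e6)   # quarter-cycle phase shift
print(d)   # ≈ 1.87 m
```

Note the trade-off this formula exposes: raising f_mod improves distance resolution for a given phase resolution, but shrinks the unambiguous range c / (2 * f_mod).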

The streaming audio player will appear on this web page when the show starts at 2 pm Eastern today. Note, however, that some companies block live audio streams. If you don't hear any audio when the show starts, try refreshing your browser.
