This five-day course could have been the capstone seminar. A lot of excellent topics and well-versed presenters helped tie the course/seminar together. However, as a whole I would ask the Design News Content Director and the Embedded Vision Alliance folks to review the approach and content, starting with the intended audience. Greater depth of coverage and a better sequence (an introduction, a short history of the vision field in just a few slides, the tie to the current state of the art, the migration path and timeline, and details of applications) would make this a wonderful course. Otherwise it is just a lot of teasing, hinting, and alluding to concepts that really need better coverage.

For those of you interested in learning more about embedded vision, I recommend the website of the Embedded Vision Alliance, www.embedded-vision.com, which contains extensive free educational materials.

For those who want to try some easy and fun hands-on experiments with embedded vision, there is the BDTI OpenCV Executable Demo Package (for Windows), available at www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/introduction-computer-vision-using-op

And for those who want to start developing their own vision algorithms and applications using OpenCV, the BDTI Quick-Start OpenCV Kit (which runs under the VMware player on Windows, Mac, or Linux) makes it easy to get started: www.embedded-vision.com/platinum-members/bdti/embedded-vision-training/downloads/pages/OpenCVVMWareImage
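If you'd like a taste of what a first OpenCV program looks like before downloading either kit, here is a minimal C++ sketch of my own (not code from the BDTI packages): it grabs frames from a webcam and runs Canny edge detection on them.

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);              // open the default camera
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, edges;
    for (;;) {
        cap >> frame;                     // grab the next frame
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);  // thresholds chosen arbitrarily
        cv::imshow("edges", edges);
        if (cv::waitKey(30) >= 0) break;  // press any key to quit
    }
    return 0;
}

Build it against any recent OpenCV release and point it at whatever camera you have handy.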

Thank you, everyone, for all the questions and for listening today. I look forward to perhaps meeting some of you at the Embedded Vision Summit in San Jose on April 25th. I am signing off. Have a great weekend.

QUESTION (ADiewi): Regarding collaboration with universities: are embedded vision projects very suitable for student projects/graduations? In the previously mentioned design flow, I can imagine that the high modularization is well suited for collaboration.

ANS: Yes. We actually have a face recognition and gesture recognition research project in place with students at the University of Ottawa and would like to support expansion of this type of research work at other universities. Collaboration works well where the university is involved primarily in the algorithm analysis, and sometimes in helping set up the initial image test base, and then the optimization and productization are managed primarily by the partner company.

Hmm, I wonder if backing up over some very broken-up or noisy concrete might fool the system, or if a chalk drawing of a bike (complete with shadow) or of a human drawn on the concrete would fool it. It is probably OK for the driver to get occasional false positives. I don't suppose the system can tell the difference between a flat image and the real 3D object it depicts? The latter would change differently as the camera gets closer.

Historically, most inventions come about by accident of some sort (problems), so a problem has to be worked out to become a solution. Therefore, each error is just a problem scaled down to the micro level in SW/HW, and we really do not know whether to treat that event as a "deadly error" or a "potential invention at the micro level".

QUESTION (mrybczyn): How is your hardware and software reliability in the field?

ANS: Good question. The chip used is automotive-qualified and highly reliable, and needs to meet a low DPM (defects per million) target. Freescale is delivering this type of chip, called the SCP2201. The software platform and library are also highly reliable and production proven.

We do not know the hardware reliability of each customer's physical module design, but in the prototype and pre-production designs from different customers we have seen a range of reliability.

QUESTION (noor811): Simon, would an FPGA have been a better alternative, maybe not cheaper?

ANS: Not likely in this case. Power needed to be under 500 mW at 85°C for the after-market, and the same at 105°C for the before-market, so the power at room temperature really needs to be < 350 mW. The chip used also needs to pump out analog NTSC video. Furthermore, some customers are implementing their own image and vision processing algorithms, while others are looking for a tool kit or an almost turnkey solution to start with. And the chip footprint, including system memory, needed to fit into an area of less than 100 mm².

QUESTION (noor811): Simon, why was an analog camera used rather than digital/CMOS?

ANS: Actually, a digital CMOS sensor was used, but the de-warped raw digital video with overlay is sent to the display in the car over twisted pair as analog NTSC. This is the most common and cheapest method. We have seen new module designs that can send raw digital video, and for the before-market some automotive OEMs are moving to Ethernet AVB to connect the cameras to the display.

@wahaufler: Very true. We need to remind ourselves of what software engineers are told in every lecture/presentation: if an error is not detected in the step where it emerges, the cost to remove it increases by a factor of 10 with every additional step in development. This is not very different for us.

QUESTION (Anatoliy1086): What are the criteria for the golden-image test database? Is it specific to the field?

ANS: None, and customers did not specify any, except that there is a guideline for testing ultrasonic sensors. We tried to build a robust test database by thinking of all the possible environmental and use-case conditions (indoor, outdoor, multiple objects, people, cars, etc.) and creating a test video clip that addresses each condition.

QUESTION (sarimarcosa): Is there a pedestrian detection library? Does it meet safety guidelines?

ANS: Not for this type of implementation. The customers did not want to detect only pedestrians, so classification of objects as pedestrians did not need to be done. The customers wanted to detect and warn the driver that the car is going to hit something, regardless of what it is. There are European and US (NHTSA) safety guidelines that we are aware of, but I am not aware of any that currently specify that the rear camera needs to detect objects, determine distance, and warn the driver. This requirement is being driven mostly by after-market customers and by automotive OEMs who want to provide this safety feature as a differentiator.
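For readers who do want pedestrian-specific classification, note that OpenCV (mentioned earlier in this session) ships a ready-made pedestrian detector based on HOG features and a linear SVM. A minimal sketch of how it is invoked (my own illustration, not related to the library discussed above; the file names are made up):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("street_scene.png");  // hypothetical input image
    if (img.empty()) return 1;

    // HOG descriptor preloaded with OpenCV's default people-detection SVM.
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> people;
    hog.detectMultiScale(img, people);  // scan the image at multiple scales

    for (const auto &r : people)
        cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);  // mark detections
    cv::imwrite("detections.png", img);
    return 0;
}

Whether a detector like this meets any given safety guideline is a separate (and much harder) qualification question.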

Your final statement is very true. The great engineers see the main aspects up front. But even they miss many aspects. The difficulty is gathering the critical points that are able to endanger a project, and it takes a LOT of experience even to recognize them as critical. Sometimes it's impossible to do so.

ANS: Good question, and hard to sum up, as some of the work ended up being applied to other products such as an Image Processing Algorithm Tool Kit and a PC-based Look-Up Table (LUT) Generation Tool. CogniVue is only a 30-person company with a team of 24 in R&D, so it consumed a huge portion of that team's time for about 18 months. We also collaborated with the University of Ottawa vision lab, so some grad students assisted during this period.

wahaufler: I expect it's common. In my machine vision system, assumptions about glare and reflections turned out to be some of the biggest "unspoken assumptions"; we had a lot more of both than we first expected.

@noor811: At the moment, HDR cameras are still a niche product. You have to take what you get. Digital interfaces will succeed the analog ones in the near future, but for cheap products it is mostly still analog that is available.

While working for Boeing, I helped develop the user interface (UI) for the Space Vision System (SVS), used for robotic assembly of the space station. As I suspected it might, some research uncovered hidden design constraints. We had assumed it would be a typical VGA computer display, but only after digging deep into the schematics and documents of the system did it become apparent that the GUI would have to be displayed over an NTSC video signal, severely limiting resolution and requiring the use of large, plain fonts. The hardware also constrained us to an 8-color palette. This was further exacerbated by the harsh lighting environment in the space shuttle.

Lesson learned: be careful of unspoken wrong assumptions. These are hard to ferret out. Is this a fairly common lesson in the industry, as I suspect?

ANS: The application did not require what is called classification, but rather determining the distance to the closest feature. In particular, it required what is called feature detection and tracking, and one of the primary sub-algorithms used to track the features is called Good Features to Track.
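For anyone who wants to experiment with that same building block, Good Features to Track (the Shi-Tomasi corner detector) is available directly in OpenCV as cv::goodFeaturesToTrack. A minimal sketch (my own illustration, not CogniVue's code; the file name and parameter values are arbitrary):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat gray = cv::imread("rear_camera_frame.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            100,    // max corners to return
                            0.01,   // quality level relative to best corner
                            10.0);  // min distance between corners (pixels)

    // In a real system these corners would then be tracked frame-to-frame
    // (e.g., with cv::calcOpticalFlowPyrLK) to estimate range and closing speed.
    for (const auto &c : corners)
        cv::circle(gray, c, 3, cv::Scalar(255), -1);  // mark detected features
    cv::imwrite("features.png", gray);
    return 0;
}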

QUESTION (ADiewi): Were you able to use C++ on the embedded platform with simplified data structures, etc., or did you have to port it to C, since many embedded platforms do not support OOP? Thanks!

ANS: We used C++ on the PC, but on the platform it was ported to C. The master processor was an ARM9 RISC processor, and the primary image cognition processing engine was a block called APEX, which contains a massive SIMD processor called the Array Processor Unit (APU). The APU was programmed in a SIMD engine language.
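To make the "port to C" concrete for readers who have not done one: the usual pattern is that each class becomes a plain struct and each method becomes a free function taking the struct pointer explicitly. A tiny sketch (entirely my own illustration; these names and the fixed-size buffers are assumptions, not CogniVue's actual code):

#include <stdio.h>

// C++-style (PC-side prototype): state and behavior bundled in a class.
class FeatureList {
public:
    void add(int x, int y) { if (n_ < 8) { xs_[n_] = x; ys_[n_] = y; ++n_; } }
    int count() const { return n_; }
private:
    int xs_[8] = {}, ys_[8] = {}, n_ = 0;
};

// C-style port (platform-side): the object becomes a plain struct, and the
// methods become free functions that take the struct pointer explicitly.
typedef struct {
    int xs[8], ys[8], n;
} feature_list_t;

static void feature_list_add(feature_list_t *fl, int x, int y) {
    if (fl->n < 8) { fl->xs[fl->n] = x; fl->ys[fl->n] = y; ++fl->n; }
}

int main(void) {
    FeatureList cpp_list;
    cpp_list.add(10, 20);

    feature_list_t c_list = {{0}, {0}, 0};
    feature_list_add(&c_list, 10, 20);

    printf("C++ count: %d, C count: %d\n", cpp_list.count(), c_list.n);
    return 0;
}

The trade is explicitness for portability: nothing is hidden in constructors or vtables, which also tends to map more predictably onto SIMD engines.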

@cghaba, there will not be a live stream of the event, but as with last September's Summit, we will be posting the videos of the various presentations in the weeks following the event. See: www.embedded-vision.com/industry-analysis/Video%2BInterviews%2BDemos

mrybczyn: I have to tell you that in this particular application (car backup camera), road dirt will mean that you can't even get two-nines reliability. The lens of the backup camera on my Volt gets dirty to the point of obscuration after *ANY* precipitation, rain or snow.

ADiewi: That's a tricksy sort of question; remember that you can write valid C and pass it through a C++ compiler. If the compiler's good, the produced code should be as efficient as code that was processed by a true C compiler.

JimmyL: I've gotten the impression (across quite a few of these sessions) that it is a load-related problem, either on the server or on our local systems (and I lean towards the latter because it was awful when I was running a software build in parallel with the session).

StephMcGUBM: I'm actually using Internet Exploder on Win7, although that's not my usual browser. Tuesday, I had no drop-outs running Safari on a Mac.

Atlant: Are you using either Chrome or Firefox? The player works best on them. The audio breaks up on IE for a lot of people. I use Chrome and have no problems. However, occasionally it does take two browser refreshes for the play button to become active at the very beginning of the session.

@Atlant At the start of the show, I click the play/pause button about 20 times, rapidly turning the player on and off. I don't have any problems with it dropping out. {For the pool, put me down for 20 clicks.} :)

Good morning, everyone! It's another beautiful day in Walnut Creek, California: www.mtdiablocam.com.

I am looking forward to Simon's presentation this morning. It's an updated version of the highly rated talk he presented at the Embedded Vision Summit conference in September in Boston. And Simon will also be giving a related talk next month at the Embedded Vision Summit in San Jose, California on April 25th. See www.embedded-vision.com/embedded-vision-summit for info on this free educational event.

Focus on Fundamentals consists of 45-minute on-line classes that cover a host of technologies. You learn without leaving the comfort of your desk. All classes are taught by subject-matter experts and all are archived. So if you can't attend live, attend at your convenience.