Heterogeneous Multicore & the Future of Vision Systems

Afterward, this information needs to be put together, whether through a rule-based approach, a learning-based approach, a hybrid model, or something else entirely, to produce a vision system that delivers the desired functionality.

Keep in mind that while both extracting information and making use of it require large amounts of processing, these two functions map best to different types of processing architectures. The increase in processing points to a need for multicore; optimizing the solution points to heterogeneous multicore.

Overlapping approaches
Even with increases in resolution and better use of the available information, a single approach in a complex system is unlikely to reduce false positives to the level needed for real-world operation. When this happens, overlapping approaches are a great way to improve system performance. The key is to use several methods to process inputs and make decisions rather than relying on just one or two. No method is 100 percent accurate in all situations, so combining multiple methods fills in the gaps of each and reduces the number of false positives. The more redundancy there is in the system, the greater the accuracy. Of course, each additional approach brings a corresponding increase in the required processing capability.
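The idea of overlapping approaches can be sketched with a simple majority vote. This is a hypothetical illustration, not the article's specific method: the boolean inputs stand in for independent detectors (for example, an edge-based method, a template matcher, and a learned classifier) each reporting whether an object is present.

```python
# Hypothetical sketch: combining several independent detectors with
# majority voting so that a single spurious detection is outvoted.

def majority_vote(detections):
    """Return True only if more than half of the detectors agree."""
    votes = sum(1 for d in detections if d)
    return votes > len(detections) / 2

# A frame with no object, where one detector false-fires on noise:
print(majority_vote([True, False, False]))   # the lone detection is rejected

# A frame with a real object, seen by most (but not all) detectors:
print(majority_vote([True, True, False]))    # the detection is accepted
```

The same voting idea extends to weighted or confidence-based fusion; the point is simply that independent methods rarely make the same mistake on the same frame, so their combination suppresses false positives.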

Heterogeneous multicore
The trend toward increasing information, using advanced algorithms, and requiring redundancy within a system all point toward the processing need continuing to grow in vision systems. However, it turns out that processing cores can only get so big before their yield starts to plummet. Similarly, process nodes are not shrinking as rapidly or providing the power reduction they once did. Naturally, single-core processors have yielded to multicore processors to reach higher levels of performance.

Heterogeneous multicore goes a necessary step further, increasing processing efficiency by using a mix of different core types so that each type handles the part of the system for which it is best suited. Think of the different types of processing needed in advanced vision systems. A digital signal processor (DSP) is tailor-made for implementing vision functions. The DSP specializes in real-time signal processing of math-intensive functions, delivering high performance and predictable latency, both of which are essential in vision systems to ensure acceptable response times to external stimuli. A RISC processor, on the other hand, is more efficient at assembling the information returned from the DSP(s), running the high-level OS, and executing control code. An ideal computational platform for vision systems would consist of both RISC and DSP cores.
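The RISC/DSP partitioning can be illustrated conceptually. The sketch below is a hypothetical model, not a real device API: the math-heavy inner kernel represents the kind of regular, data-parallel work that maps to a DSP, while the surrounding sequencing and decision logic represents the control code that maps to a RISC core.

```python
# Conceptual sketch of the RISC/DSP split (hypothetical function names).

def dsp_convolve(signal, taps):
    """Math-intensive kernel: a 1-D FIR filter, the DSP-mapped workload."""
    n, k = len(signal), len(taps)
    return [sum(signal[i + j] * taps[j] for j in range(k))
            for i in range(n - k + 1)]

def risc_control_loop(frames, taps):
    """Control code: sequencing, aggregation, decisions (RISC-mapped)."""
    results = []
    for frame in frames:
        filtered = dsp_convolve(frame, taps)   # offloaded to the DSP in hardware
        results.append(max(filtered))          # high-level decision per frame
    return results

frames = [[0, 1, 3, 1, 0], [0, 2, 5, 2, 0]]    # stand-in scanline data
taps = [0.25, 0.5, 0.25]                       # simple smoothing filter
print(risc_control_loop(frames, taps))
```

On a real heterogeneous device the kernel would run as optimized DSP code with the RISC core dispatching buffers to it, but the division of labor is the same: regular arithmetic on the DSP, branching and orchestration on the RISC core.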

The importance of the overall device architecture is worth noting. Without a well-thought-out architecture that provides enough bandwidth, memory, and efficient communication, bottlenecks can severely limit the performance of the device. Only in the last few years has there been an efficient heterogeneous architecture that delivers the full processing entitlement of all the individual elements.

The future of vision systems
Vision systems will move from the labs to the real world. The high performance delivered by heterogeneous multicore devices will enable improvements in system accuracy by providing the processing capability demanded by the increase in information, the use of advanced algorithms, and the overlapping of several approaches. Real-world requirements on size and power in areas like security cameras, industrial automation, and automotive driver-assist systems will similarly be met. By providing an optimal mix of cores, the heterogeneous multicore device will consume less power and enable lower system costs than discrete processing solutions. With the advances in vision systems powered by heterogeneous multicore, soon we'll be able to stop talking about what is lacking in vision systems and, instead, spend our time imagining where vision systems will take us.

Mark Nadeski is business development manager, multicore processors, for Texas Instruments.

Another approach being used is to distribute the processing as well. Many of the high-level functions do not need to be performed at the camera level. Putting the "low-level" image functions in the camera reduces the amount of data that needs to be transmitted. The systems you speak of are capable of performing feature extraction at the camera level and then communicating that higher-level information to a centralized system (or a hierarchy of processors) to provide the system function.
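The bandwidth saving this comment describes is easy to see with a toy example. The sketch below is a hypothetical illustration: the camera computes a compact descriptor (here, a crude intensity histogram) and transmits only that, rather than the raw frame.

```python
# Hypothetical sketch of in-camera feature extraction: send a compact
# descriptor upstream instead of raw pixel data.

def extract_histogram(frame, bins=4, max_val=256):
    """Low-level feature computed at the camera: an intensity histogram."""
    hist = [0] * bins
    for pixel in frame:
        hist[pixel * bins // max_val] += 1
    return hist

raw_frame = [10, 200, 90, 64, 255, 17, 130, 70]   # stand-in for pixel data
descriptor = extract_histogram(raw_frame)

# The central system receives the descriptor, not the frame:
print(descriptor)
print(len(descriptor), "values sent instead of", len(raw_frame))
```

Real systems transmit richer features (edges, corners, object candidates), but the principle is the same: the compact description scales with scene content rather than with resolution, which is what makes a hierarchy of processors practical.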

Image analysis and machine vision should be an open-source field. I don't want to re-invent the wheel in this area. I have a few applications I would love to use such tech in, but I have shied away from the task due to the daunting work it entails.

Agreed, the camera should just take pictures. The high-level work should be handled by more powerful computer systems.
