
Whether an automated machine vision system measures a spark plug’s gap, locates an item on an assembly line, reads a 2D code, counts parts, checks for flaws or performs a host of other tasks, it is running at greater speed and higher resolution than ever before.

“Whatever industry [machine vision] is being used in, it has helped the companies have better quality and productivity,” Automated Imaging Association President Jeff Burnstein says. “Talking about the high-speed applications, those are really critical drivers for companies becoming more successful. There are certain industries like electronics and semiconductors where you can’t do these things by hand. You have to use vision systems. You look at the explosion of computers and cellphones and all the tech industries, vision is a key driver of that.”

Finding Its Footing

For the last 20 years, the industry has seen exponential growth, Burnstein says, following a brief false start in the late ’80s and early ’90s.

“The automotive guys, especially GM — they bought into five companies — and there seemed to be disappointment in the technology,” Burnstein recalls. “It seemed that it wasn’t capable of doing what they thought on the factory floor. And this was going on with some other technologies, too. Robotics went through a dip around the same point.

“AIA was founded in 1984. When we first started, when all the excitement was there, we probably had 80 or 90 members. At one point we were down to less than 20 members. And now we have more than 330 members in 32 countries.”

Burnstein says a combination of improved camera and computer technology, falling prices and a growing pool of professionals who specialize in integrating vision systems helped match performance with manufacturers’ expectations.

Now, in most cases, a working vision system has to be at least 99.7 percent accurate, Automated Vision Systems President Perry West says. “There seems to have been a number of studies that have reproduced the same results: that a human inspector is only 80 percent accurate in the long term. So it means that the quality level of products, when using machine vision, is generally much higher.”

Advances in Cameras

West recalls the industry’s infancy in the 1980s, when camera resolution was limited to 640 by 480, or about one-third of a million pixels. Today, a common automated machine vision camera captures images in the 2-to-5 megapixel range, with the option of going much higher.

Not only are the images high-definition, they are captured at high speed, which has been aided by a recent shift from CCD (charge-coupled device) sensors to CMOS (complementary metal oxide semiconductor) sensors.

“CMOS sensors are less complex,” West explains. “They tend to use technology that’s more common in semiconductor manufacturing. They tend to be lower-power, high-speed. They tend to have lower sensitivity and more noise. So CCD is still ahead in image quality and sensitivity. However, there are far more engineers working on developing CMOS sensors than there are working on developing CCD sensors.”

Today’s CMOS cameras can transfer an image three times faster than a CCD camera, adds Douglas Kurzynski, project manager for Keyence Corp. of America’s machine vision technology division. A standard CMOS camera offered by the company has a transfer time of 1.7 milliseconds.

“I’ve seen speed just infinitely get faster,” he says. “When I first started here … the fastest you could go was 34 milliseconds. And that was just counting pixels. If you added a pattern algorithm, you were talking 100 to 200 milliseconds. And that wasn’t that long ago, we’re talking 13 years.”
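The arithmetic behind these speed figures is easy to sketch. The helper below is illustrative, using the transfer times quoted above and assuming image transfer is the only bottleneck:

```python
# Rough throughput estimate: how image transfer time caps frame rate.
# The 1.7 ms and 34 ms figures come from the article; the helper itself
# is an illustrative assumption, not vendor code.

def max_frame_rate(transfer_time_s: float) -> float:
    """Upper bound on frames per second if transfer is the bottleneck."""
    return 1.0 / transfer_time_s

# A 1.7 ms transfer allows at most ~588 frames per second,
# versus ~29 fps for the 34 ms figure quoted for older systems.
print(round(max_frame_rate(0.0017)))   # ~588
print(round(max_frame_rate(0.034)))    # ~29
```

In practice, exposure, processing and triggering overhead lower the achievable rate, but the ratio between the two eras' transfer times tells the story.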

Component Systems vs. Smart Cameras

A manufacturer today may choose to build a component system, typically with one to three cameras linked to a single computer that automatically runs pre-programmed algorithms. Another popular option is a smart camera with the computing and software capabilities built in.

Smart cameras “tend to be a little less expensive than buying the components separately,” West says. “And in integration, most of them tend towards being easier to use. Which means they’re less flexible; flexibility brings complexity with it. They tend to be economical for single-camera requirements.”

“You can kind of speculate what this means for processors for machine vision,” he says. “I don’t think it’ll mean that we’ll see a decline in processing power. But I think it may incentivize different processing architecture in the future. Certainly, it bodes well for smart cameras.”

Lenses Play Catch-up

As cameras have improved, sensor resolution has outpaced lens technology, West says.

“Back in the days when we were stuck with what we called VGA resolution — 640 by 480 — a lot of times lenses designed for closed-circuit TV applications worked fine. But as we got higher and higher resolution cameras, those lenses didn’t produce the image quality and didn’t have the optical resolution to match the cameras.

“So there have been companies that are now making a series of lenses that are for machine vision that will support the higher resolution. But we’re still seeing sensors that have very small pixels that are really beyond what any commercially available lens can match. We have cameras for which you cannot buy a commercial lens to work with them. Therefore, you’re buying megapixels but you’re really not going to get the resolution you think you’re going to be able to get.

“We see the same thing with cell phones and inexpensive digital cameras. People buy megapixels but the lenses just aren’t a good match. So we’re waiting for a real breakthrough in lenses, and I can’t tell you where that will come from.”
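West's point about pixels outrunning optics can be made concrete with a standard back-of-the-envelope check: comparing a lens's diffraction-limited spot size (the Airy disk, d ≈ 2.44 λ N for wavelength λ and f-number N) against the sensor's pixel pitch. The wavelength, f-number and pixel pitch below are illustrative assumptions, not figures from the article:

```python
# Sketch of the lens/sensor mismatch West describes: compare a lens's
# diffraction-limited spot size with a sensor's pixel pitch.
# All numeric values here are illustrative assumptions.

def airy_disk_diameter_um(wavelength_um: float, f_number: float) -> float:
    """Airy disk diameter (to the first dark ring): d = 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

# Green light (0.55 um) through an f/4 lens:
spot = airy_disk_diameter_um(0.55, 4.0)
print(f"diffraction-limited spot: {spot:.1f} um")   # ~5.4 um

# A sensor with 1.4 um pixels cannot be fully exploited by this lens:
# several pixels fall inside one blur spot, so the extra megapixels
# buy little real resolution -- the effect described in the text.
pixel_pitch_um = 1.4
print(f"pixels per spot diameter: {spot / pixel_pitch_um:.1f}")
```

When the pixel pitch drops well below half the spot diameter, the sensor is oversampling the lens's blur and adding megapixels stops adding detail.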

One exciting new idea in research is the possibility of lensless cameras.

“One (company) that is talking about it is Rambus,” West says. “They’re making image sensors, similar to a conventional image sensor, but they’re developing a diffractive element on top of the camera. What you get out of the camera looks like a big blur, but you put it through post-processing, and out of that post-processing you get a usable image. So this is going to make extremely tiny cameras — I mean very tiny cameras — available because of the lack of a lens.

“We now have cameras as small as one millimeter square by several millimeters long that are used in endoscopes in the medical imaging world. But lensless cameras look to be, probably, like a very small grain of sand.”
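The "big blur plus post-processing" idea can be illustrated with a toy one-dimensional deconvolution. This is a generic Wiener-style sketch of recovering a scene from a blurred measurement, not Rambus's actual diffractive optics or reconstruction pipeline:

```python
import numpy as np

# Toy model of lensless imaging as West describes it: the sensor records
# a blurred signal, and post-processing recovers a usable image.
# 1D Wiener-style deconvolution; NOT Rambus's actual method.

n = 64
scene = np.zeros(n)
scene[20], scene[40] = 1.0, 0.5          # two point sources in the "scene"

psf = np.ones(9) / 9.0                   # stand-in for the optical blur
H = np.fft.fft(psf, n)                   # blur's frequency response

# What the sensor records: the scene circularly convolved with the blur.
blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))

# Post-processing: a regularized inverse filter recovers the scene.
eps = 1e-8
recovered = np.real(
    np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps))
)

print(int(np.argmax(recovered)))         # brightest recovered point: index 20
```

Real lensless systems face noise and a far more complex forward model, which is why the reconstruction step, not the optics, carries most of the engineering burden.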

Rise of 3D

The human eye still holds the edge in depth perception, but a host of machine vision technologies is improving 3D vision.

“It’s definitely a hot item nowadays,” Kurzynski says.

One form of 3D vision incorporates laser displacement with traditional machine vision.

“It’s a 2D laser that sends out a beam and profiles the target, the height,” he says. “We take that and we incorporate that into the vision. We take several profiles. We take line-by-line and build an image from those profiles. We build a height image, if you will.”
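The line-by-line stacking Kurzynski describes can be sketched in a few lines. The profile data below is synthetic and the helper name is illustrative:

```python
import numpy as np

# Minimal sketch of the laser-displacement approach described above:
# each scan yields one line of height values; stacking successive scans
# builds a 2D "height image". Profile data here is synthetic.

def build_height_image(profiles):
    """Stack 1D height profiles (one per scan line) into a 2D height map."""
    return np.vstack(profiles)

# Simulate 4 scans across a part with a raised ridge in the middle.
profiles = [np.array([0.0, 0.0, 2.5, 2.5, 0.0, 0.0]) for _ in range(4)]
height_img = build_height_image(profiles)

print(height_img.shape)    # (4, 6): 4 scan lines, 6 points per profile
print(height_img[2, 3])    # height at scan 2, point 3 -> 2.5
```

Once assembled, the height image can be processed with the same tools as a grayscale image, with pixel intensity standing in for height.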

Another technique lights the object with striped pattern lighting.

“Based off the response of that light, it creates a 3D image from the way the light bounces back,” he adds.

3D vision can be helpful in guiding a robot’s depth perception as it reaches into a bin for a product, or for capturing hard-to-read data like the black lettering on a black auto tire, he says.

Its applications extend beyond manufacturing to consumer electronics, like the Microsoft Kinect device for computer games, or Google’s self-driving car, which uses a technology called lidar, a combination of the words “light” and “radar,” West adds.

Similar to the expanded applications for 3D vision, the vision industry has begun finding increased applications outside of its initial manufacturing home.

“In the past, the term machine vision has really been defined as a system used in manufacturing in conjunction … with a number of different automation equipment,” says Automated Imaging Association Director of Market Analysis Alex Shikany. “The term machine vision is heavily tied to manufacturing. But in recent years, the way applications and other industries have opened up to these companies, we’re trying to move away from that, or at least the distinction that it’s manufacturing only. I think our members tell us that more than 50 percent of sales are outside of manufacturing. Pretty much name an industry and we’ve had interest expressed in how vision could be used.”
