A robot "sees" into a dunnage container, reaches in and grabs a box. The robot, with a single electronic eye, removes a raw casting for a car engine cylinder head from the box and loads it into a machine where it is drilled. Another cycloptic robot "sees" that the machine has completed its work, removes the cylinder head from the machine, loads it back into the box and moves it down the line to an engine, where it is installed by yet another robot.

This scenario is A) a scene from a Philip K. Dick science fiction novel, B) impossible, or C) a Ford Motor Co. facility that uses single-camera 3D robot guidance. If you answered C, you are correct.

In recent years, 3D machine vision has made manufacturing activities once thought impossible or impractical to automate, such as robot guidance, a reality. In industry after industry, increases in microprocessor computing power have sped up the most complex machine vision tasks, such as fast pattern matching, 3D triangulation and calibration. Today, new methods are putting this powerful tool into smaller, single-camera packages that provide similar 3D performance at a fraction of the price and computational requirement.

"[Single-camera 3D machine vision] is enabling technology for flexible manufacturing where you have a lot of variety among parts," says Babak Habibi, president and COO of Braintech Inc. (Vancouver, B.C., Canada). "It is particularly useful in the automotive industry, where, for example, you have V-6, V-8 and V-10 left and right engine heads arriving in mixed dunnages…you can't put precision fixtures in bins. Before…3D vision, robots would crash into parts, crash parts into engines, and things got damaged. Two-dimensional vision was not sufficient to calculate how objects move in 3D space – how they pitched, rocked, rolled."

Today, Habibi adds, with proper calibration and part training, single-camera 3D robot guidance can save companies money and time and increase flexibility on the production line. But, as with any job, the six "P's" apply: proper planning and preparation prevent poor performance.

Calibration and Part Training

The typical 3D coordinate determination process begins with a compact CCD camera integrated into the robot end-effector. The camera acquires an image of the part and passes it to a PC host or other image-processing engine. The image processor identifies patterns representing key features within the image and calculates the 3D position and orientation of the part from the spatial relationships of each imaged pattern against patterns trained at various known camera-to-part orientations, as well as the relationships among nearby patterns. The resulting offset is sent to the robot controller, guiding the robot to the part for pickup and handling. But before any of this can take place, the vision system, robot and real world all have to agree on where the part is.
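The acquire-locate-offset loop described above can be sketched in a few lines. This is a minimal illustration, not Braintech's actual API: the function names, the dummy feature dictionary, and the single fixed scale factor are all assumptions standing in for a real matching and calibration pipeline.

```python
# Minimal sketch of the guidance loop: grab an image, match trained
# features, convert the pixel offset to millimeters, send it to the robot.
# All names and values here are illustrative assumptions.

def acquire_image():
    # Stand-in for the CCD camera grab; returns dummy feature locations.
    return {"features": {"bolt_hole": (320, 240), "casting_edge": (400, 260)}}

def locate_part(image, trained_features):
    # Compare an imaged feature against its trained position and return an
    # (x, y, z) offset in millimeters. A real system matches many features
    # and recovers full orientation; here a single translation suffices.
    dx = image["features"]["bolt_hole"][0] - trained_features["bolt_hole"][0]
    dy = image["features"]["bolt_hole"][1] - trained_features["bolt_hole"][1]
    mm_per_pixel = 0.5  # assumed result of the calibration step
    return (dx * mm_per_pixel, dy * mm_per_pixel, 0.0)

def send_offset_to_robot(offset):
    # Stand-in for the message to the robot controller.
    return f"MOVE_OFFSET x={offset[0]:.1f} y={offset[1]:.1f} z={offset[2]:.1f}"

trained = {"bolt_hole": (300, 240), "casting_edge": (380, 260)}
offset = locate_part(acquire_image(), trained)
print(send_offset_to_robot(offset))  # MOVE_OFFSET x=10.0 y=0.0 z=0.0
```

The point of the sketch is the division of labor: the camera only reports pixels, the calibration supplies the pixels-to-millimeters conversion, and the robot controller only ever sees a metric offset.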

"One of the complaints about 3D vision guidance systems was the difficulty of calibration," admits Dion Spurr, engineering manager for system integrator ABB Systems (Norwalk, Connecticut, USA). "In multiple-camera systems, if you bump one camera, everything must be recalibrated. That's not the case with single-camera systems. Plus, multiple cameras mean multiple cables, and that's not good. More parts to break down; more ways for the system to fail."

Braintech's eVisionFactory software uses an image from a single camera to calculate six degrees of freedom – the x, y and z positions plus the roll, pitch and yaw angles – to locate a part's position in 3D space. Habibi says one of the highlights of single-camera 3D robot vision systems is that they can be calibrated automatically and with less complexity than multi-camera systems. In automated calibration, the robot captures 50 or more "snapshots," or views, of a part and feeds the images to the vision guidance system, which uses algorithms to perform the 3D calibration. The process compensates for lens distortion and physical pixel size, and transforms coordinates in the digital image into real-world millimeters and inches.
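A common way to represent the six degrees of freedom mentioned above is a 4x4 homogeneous transform: a rotation matrix built from the roll, pitch and yaw angles, paired with the x, y, z translation. The sketch below is a generic textbook construction (Z-Y-X Euler convention), not eVisionFactory's internal representation.

```python
import math

def pose_matrix(x, y, z, roll, pitch, yaw):
    """4x4 homogeneous transform for a 6-DOF pose (angles in radians;
    yaw about z, then pitch about y, then roll about x)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, y],
        [-sp,     cp * sr,                cp * cr,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

def apply(T, p):
    """Transform a 3D point p=(x, y, z) by the homogeneous matrix T."""
    ph = p + (1.0,)
    return tuple(sum(T[i][j] * ph[j] for j in range(4)) for i in range(3))

# A part shifted 100 mm in x and yawed 90 degrees moves a point on its
# x-axis onto the y direction:
T = pose_matrix(100.0, 0.0, 0.0, 0.0, 0.0, math.pi / 2)
print(apply(T, (10.0, 0.0, 0.0)))  # approximately (100.0, 10.0, 0.0)
```

Once a pose is in this form, "how the part pitched, rocked and rolled" is just the rotation block of the matrix, and the offset sent to the robot is the translation column.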

Adds Habibi, "System software memorizes the geometric relationship between the trained features." Once the system has been trained to recognize features of a part, the part is moved and rotated to ensure the system can accurately calculate its 3D position.

Once calibration is performed, the system is ready for part training. In order for single-camera 3D algorithms to provide information to the machine vision system, Habibi says a part must meet these criteria:

It must have unique features visible to the camera, and the features used to recognize the part must fit within a single camera field of view.

It must be stationary.

It cannot overlap another part.

If overlap is an issue, mounting the camera on the robot may be the answer. "This way, the camera is moved into a position where the chosen features can be viewed. An image is acquired and the model is trained from this image," explains Kevin Taylor, national sales manager at ISRA Vision Systems (Lansing, Michigan, USA).

ISRA Vision Systems, a developer of mono-3D machine vision systems for the automotive industry, typically mounts a machined calibration template in a known location within the robot's reach. The robot is programmed to position the camera in front of the calibration template, and the calibration procedure is performed through the vision software, tying the known location of the plate, the robot's movements to the plate, and the vision system's 3D coordinate space into a unified global coordinate system. ISRA combines feature-extraction techniques with structured light, typically provided by a lamp source projected through a grating, so that single-camera 3D vision systems can work on parts without the teeth, holes or other standard features that vision systems normally use to locate a part in an image. Using this technology, ISRA has installed numerous 3D vision systems in manufacturing processes involving hard-to-image parts such as sheet metal, car panels and glass windshields.
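The calibration chaining described above amounts to composing coordinate transforms: the template's known location, the robot's pose and the camera's measurement are linked into one global frame. The sketch below uses pure translations for clarity; a real cell chains full rotation-plus-translation transforms like the pose matrix above, and all frame names here are illustrative assumptions.

```python
# Composing frames into a unified global coordinate system.
# Translation-only transforms keep the arithmetic easy to follow.

def matmul4(A, B):
    # 4x4 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

world_T_robot  = translation(1000.0, 0.0, 0.0)  # robot base in the cell frame
robot_T_camera = translation(0.0, 0.0, 250.0)   # camera on the end-effector
camera_T_part  = translation(5.0, -3.0, 400.0)  # part as the camera sees it

# Chain the frames: where is the part in the cell's global frame?
world_T_part = matmul4(matmul4(world_T_robot, robot_T_camera), camera_T_part)
print([row[3] for row in world_T_part[:3]])  # [1005.0, -3.0, 650.0]
```

Calibrating against a template in a known location is what pins down the first link of this chain; after that, every camera measurement can be carried through to the global frame automatically.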

Integrators

Integrators like ABB Systems strive to incorporate new technology into vision systems as quickly, seamlessly and cost-effectively as possible, says Spurr. "We are constantly improving our component selection," he says, "and we have a 'standard' offering that includes our 6600 series robot and an industry-proven cable management system."

At times, even the best components "fail" to provide the desired result, he notes. "If the application is not suited to a single-camera 3D vision system, it doesn't matter whether the best software, camera, cables and robots – the pieces and parts – are all there. The system won't work for the application," says Spurr. Part of his job as an integrator is to advise customers about the best systems, and system parts, available to suit their needs.

Every end-user has unique challenges, he adds. At a company like Ford Motor Co., for example, the challenge is to give the system the same look and feel, the same user interface and functionality, for applications within the same plant and at different plants in other parts of the country.

And the biggest challenge overall, says Spurr, is that "customers don't divorce the software from a broken cable. Any breakdown becomes, 'I have a vision system that doesn't work.'"

Initially, any error was reported as a "vision fault." In an effort to categorize the way faults are grouped and to give customers more precise feedback, Spurr says ABB has expanded its list of faults to include "3D calculation error," "camera snap error," "unknown template error," "3D work object moved beyond deviation limits," and "3D anchor features not found."

Overall, Spurr says, "single-camera 3D systems make operations involving large quantities of parts – especially large quantities of heavy parts – more manageable from a technical and financial point of view."

Babak Habibi has his own take on the technology: "Single-camera 3D robot guidance is delivering science to the factory floor."
