A human eye transmits data to the brain at a rate of approximately 10 million bits a second, which is about the equivalent of the capacity of some Ethernet connections.

That was the finding of a study by researchers at the University of Pennsylvania School of Medicine. The figure may be debatable, and it perhaps doesn’t tell the whole story of the complexity of the human eye, but it’s widely accepted that our eyes collect and transmit more data than our other “sensors”, if they can be called that – the ones for sound, touch, smell and taste – which, together with sight, make up the five human senses.

Robots, of course, only have senses because their makers want to integrate sensors into them.

In the past, the makers and integrators didn’t always use vision sensors. In the traditional method of programming an industrial robot, the programmer would input co-ordinates in an abstract three-dimensional space inside the computer, which would then be applied to a real working environment, such as a robotic cell.

Sounds like an impossibly complex mathematical task, but that’s the way it was and is done in many instances, says Rick Weidinger, founder and chief executive officer of Robotic Vision Technologies.
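That traditional, coordinate-driven approach can be pictured with a minimal sketch in Python. Everything here – the pose fields, the units, the command names – is invented for illustration, not any vendor’s actual robot programming language:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A hand-entered target pose in the robot's base frame: mm and degrees."""
    x: float
    y: float
    z: float
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0

def build_program(waypoints):
    """Turn a list of manually measured poses into move commands."""
    return [
        f"MOVE_LINEAR {p.x:.1f} {p.y:.1f} {p.z:.1f} {p.rx:.1f} {p.ry:.1f} {p.rz:.1f}"
        for p in waypoints
    ]

# Every coordinate was measured and typed in by a human; if the real part
# shifts by a few millimetres, the program no longer matches the world.
pick_and_place = build_program([
    Pose(350.0, -120.0, 80.0),  # approach point above the part
    Pose(350.0, -120.0, 12.5),  # grip position
    Pose(350.0, -120.0, 80.0),  # retract
    Pose(100.0, 240.0, 55.0),   # drop-off location
])
```

The brittleness is the point: nothing in such a program can react if a part is not exactly where the programmer assumed it would be – which is the gap vision guidance fills.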

Rick Weidinger, CEO of RVT, pictured in his role with A1Team Motorsport. Picture courtesy of Motorsport.com

In an exclusive interview with Robotics and Automation News, Weidinger says the vision market is growing at an exponential rate.

“The vision market currently is divided into two segments,” explains Weidinger. “One is inspection, which is 80 per cent of the market, but we feel the inspection side is a commodity, with small margins. The other 20 per cent is guidance.

“But the market that’s growing is the guidance. That’s growing rapidly, judging from all of the statistics and numbers that we try to follow. And from a business owner’s perspective, that’s where the margins are.

“Guidance is what allows a robot to see, and then think, and then do. And the ‘do’ part of it is guide the robot.”

Robot see, robot do

First things first, an introduction to the company, Robotic Vision Technologies. “We established the company in May, 2010, and we acquired technology that had been developed probably about 8 to 10 years prior to that,” says Weidinger. “It’s the technology that we have today, which we built up and enhanced into the 3D vision guidance market.

“We have the first patents ever filed and granted on single-camera 3D – we have two of those. And we also have random bin-picking patents… We have a total of 20 patents or pending patents in our portfolio.

“But at the moment, we are centred on vision guidance, specifically in 3D – that’s kind of our niche.”

What’s interesting about RVT’s technology is that it can take what’s called a two-dimensional image, like a flat photograph, and convert that into a 3D image in the computer, or the company’s custom-built “image processor”.

This is a process which has impressed a lot of the manufacturing companies using the RVT system over the years.

Weidinger says: “A lot of people are surprised by that – that you can take a 2D image from a camera and produce a 3D image from that, and that’s what our algorithms do, it’s what our code does, and we do it with a single camera, and we do it within a tenth of a second from image capture to command.

“It’s very accurate, too. We’ve had testing done quite recently from a very large manufacturer in the auto sector, and our 3D vision guidance is as accurate as 20 microns.

“And 20 microns… if you take a strand of your hair and split it in four – one of those split strands is approximately 20 microns.”

A micron is a millionth of a metre, for those who have a memory like a sieve rather than a computer.
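RVT’s single-camera algorithms are proprietary, but one basic principle behind recovering 3D information from a flat image can be sketched with the pinhole camera model: a feature of known physical size W that appears w pixels wide, through a lens with focal length f (expressed in pixels), lies at depth z = f × W / w. The numbers below are made up purely for illustration:

```python
def depth_from_known_size(focal_px: float, real_width_mm: float, pixel_width: float) -> float:
    """Depth (mm) to a feature of known physical width, via the pinhole model."""
    return focal_px * real_width_mm / pixel_width

# Made-up numbers: a 2,000-pixel focal length and a hole the CAD model says
# is 40 mm across, measured at 100 px wide in the captured image.
z = depth_from_known_size(focal_px=2000.0, real_width_mm=40.0, pixel_width=100.0)
# z = 2000 * 40 / 100 = 800 mm from the camera
```

Full six-degree-of-freedom guidance from one image generalises this idea to several known model features at once – the classic perspective-n-point problem – but the known-geometry principle is the same.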

Such accuracy is achieved mostly through the software, although in practice the system can only be as accurate as the camera’s and the robot’s physical capabilities allow.

“That’s an interesting question,” says Weidinger. “Some of my background is in auto racing, in series similar to Formula 1. The question in those situations is often: ‘How much is it the driver, and how much is it the car?’

“Our accuracy is, in fact, limited by the hardware. We are limited by the accuracy of the robot and the accuracy of the camera, and we use principally gigabit-ethernet cameras, so our accuracy barrier, if you will, is the robot and the camera itself.”

One-eyed robot

Most of the solutions RVT offers to the market involve just a single camera, rather than two. You’d have thought a solution using two cameras would be preferred, but apparently not. So, maybe in the land of the blind robots, the one-eyed robot is king.

“We do offer two-camera solutions, but we like to present solutions to our customers with a single camera because obviously it’s less expensive and there are fewer possibilities of under-performance – fewer risks of smudges on the lens, of the camera moving out of position or being accidentally displaced – and there are fewer vibration effects with one camera than with two.

“So we like to find solutions for our customers where we apply a single camera, using our single-camera 3D vision guidance. But we do have solutions where we can use multiple cameras.”

As for the location of the camera, it is not always attached to the industrial robotic arm itself.

“The camera can be located on the arm or it could also be stationary, depending on what the application is and the specific part, and how you want to guide it.

“But let’s say that it’s on the robot. We connect the robot controller directly to our own image processing system, and it controls the robot through that. Basically, it sends commands to the robot.”
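The capture-to-command step can be sketched, very loosely, as the image processor encoding a corrected pose and pushing it to the controller over the network. The message format, port handling and function names below are hypothetical – real controllers each speak their own protocol:

```python
import socket

def pose_to_command(x, y, z, rx, ry, rz):
    """Encode a corrected target pose as one line-oriented text command."""
    return f"GUIDE {x:.3f} {y:.3f} {z:.3f} {rx:.3f} {ry:.3f} {rz:.3f}\n".encode()

def send_correction(host, port, pose):
    """Connect to the controller and push a single guidance command."""
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(pose_to_command(*pose))

# Encoding a pose correction (actually sending it would need a live controller):
cmd = pose_to_command(512.25, -80.1, 45.0, 0.0, 180.0, 0.0)
```

In RVT’s case the whole loop, from image capture to a command like this arriving at the controller, runs within a tenth of a second, according to Weidinger.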

Eyes on the prize

RVT counts all the major automakers as customers, but it has not yet gone into the autonomous car market, as the auto companies themselves are increasingly doing.

“Autonomous cars mostly use sensors, whereas we’re camera-based,” explains Weidinger. “We have over 300 installations in the industrial space, with industrial robots, principally in the three major automakers in America.

“But recently we’ve moved the company into the collaborative robot space because it’s a growing market. I’ve seen studies which say that about 20,000 collaborative robots were sold last year, but forecasts suggest that over the next four or five years more than 100,000 collaborative robots will be sold.

“And we’re very attracted to that market because that market is the small and medium size enterprise. In North America, and America specifically, 94 per cent of all manufacturing is done through the SME market, and the collaborative robot is a very useful and efficient machine for that market.

“It’s smaller, less expensive, and it can work side by side with a human. So we’re positioning our company also into that space and we’ve spent the last six to eight months integrating our software into the Universal Robots machine.

“Just recently, we were in New York City, at the Jacob Javits Centre, and we demonstrated our 3D vision guidance system on a Universal UR5 robot, and it got a really good response.

“So, although historically we’ve been on the industrial side, with large robots, principally ABB, Fanuc, Motoman and Kuka, we’re also now into the collaborative robot space, into the Universal robot and soon the Kuka collaborative robot. So it’s exciting.

“Currently our biggest market is the large industrial robots, but we don’t think it will stay that way. I think the mix is going to move more towards collaborative because, frankly, there are more of those smaller manufacturers.

“As a matter of fact, I moved the company from Michigan into northern Virginia, the Washington DC area. And you’d be very surprised at how much manufacturing there is just in the state of Virginia.

“Two weeks ago, we were at the Hewlett Packard Foxconn plant, outside of Richmond, Virginia, where they do all the assembly for the HP printer cartridges, and that’s in Virginia.

“And the collaborative robot is a perfect example of a robot that could really be utilised very well in a plant like that, because these cartridges don’t have heavy payloads, and there’s more plants like that.

“So we’re learning as fast as we can about this market, and we’re very bullish about it. It’s making a lot of sense for us.”

Reinventing the wheel

One of the challenges for computer programmers is often having to develop for a variety of systems which don’t talk to each other or share any compatibility.

This means having to essentially build the software from the ground up each time for each system. Does this cause a problem for RVT?

“Let me separate the answer to the same conceptual question into two parts, at a higher level,” says Weidinger.

“We’ve integrated our software, our 3D vision guidance, into the Universal Robots machine. Our next task – and we’ll probably have this done within four weeks – we will be integrating our system into the Kuka collaborative robot.

“And yes, we had to do different work to integrate it into each of those two different robots.

“However, on the industrial side, we’re already integrated into the Kuka large robots, so our integration into the Kuka collaborative robot was a much quicker, easier task for us because we’re already talking to the heavy-duty industrial versions of the machine.

“That was really good news for us.

“Secondly, we had to consider whether we go and develop a vision system that only really applies to what we might call a ‘one-off’, or whether we try to develop vision guidance solutions where, for example, we can sell 10 of them to a Ford Motor Company, or the same 10 things to an HP, or the same 10 applications to a Foxconn, and so on.

“We are trying to move into that market strategy, whereas previously, in the auto industry, it was basically a one-off.

“For example, 18 months ago, we did a large project for one of the large bathroom manufacturers, and we spent some substantial time with them on an industrial robot.

“It happened to be an ABB robot, and when we got done with it, they were able to reduce their defect rate. They were previously doing the application manually. We introduced vision guidance, and their defect rate went from 40 per cent to 2 per cent – and that’s their calculation – which was great for them.

“But the thing that we found out was that the whole application was proprietary to them, so it wasn’t an application of a vision system that we could take and apply to another customer.

“So what we’re trying to do is find applications in the collaborative robots space where we can sell five or 10 or 20 or a hundred of these applications, so we don’t have to recreate the wheel every time we’re looking at a customer, trying to solve their automation challenge.

“That solution was very particular to the bathroom manufacturing client, and we understood that, and those are what we call one-offs.

“And historically, we’ve had many of those, and they’ve been very big challenges that we’ve met. But now what we’re trying to do is make applications where we can sell many of them.”

Perhaps the open-source movement can help, with more industrial robots adopting the Robot Operating System and generally experimenting with modular systems, but the big companies are unlikely to change their strategies much, says Weidinger.

“My developers think it’s great. But I think our competitors keep their cards close to their chest, as we do – we protect our intellectual property,” he says. “In the collaborative space, they’re much more open.

“But with the ABBs, the Fanucs, the Motomans, they’re pretty good at changing their model numbers, their controller numbers and their controllers, and we always have to play catch-up, integrating our software with that robot and commanding it, and that’s a bit of a nuisance, but that’s the market.”

Think small

One of the most exciting trends in industrial robotics now is the growing popularity of small, lightweight, collaborative robots, which are increasingly appealing to small and medium sized enterprises.

Given that the majority of manufacturers – and other types of companies – in the US, as in other countries, are SMEs, the potential for growth is huge, and Weidinger is hoping RVT can be part of it.

“We’re in the process of integrating our interface into the Universal Robots teach pendant, and we’re probably going to release that in about four to six weeks through the UR+ app shop.

“That would allow the user to have just one interface. They can pick up the Universal teach pendant and see our software on that pendant, as opposed to a separate monitor or a wireless tablet, so it’s all in one.

“It’s ease of use – we’re really into making things easy. Especially in the collaborative robots space, everything has to be easy.

“We’ve really paid attention to that, and scaled down the app. We have about a million lines of code now, and we’ve scaled a lot of it down to fit the Universal robot, for ease of use.

“The typical user for a Universal robot is not going to be from these huge companies which have engineers that specifically manage the robots for a living. The SMEs are not likely to have someone dedicated to the robots, so you have to make it easy to use and be very efficient.”

Weidinger says some studies show that a human worker can improve their own efficiency by 85 per cent by working with a collaborative robot.

This also means that relatively small companies with few staff can be much more productive and grow fast, and scale as and when necessary – a strategy RVT itself follows.

“We’re a software company, and we’ve been able to keep the company small and nimble, and our customers in the collaborative space really appreciate that because they’re often just as small. We can all sit together at a conference and actually make decisions there and then about what solutions would be best for them, and that’s really important.

“So our staff is basically composed of software developers, robot support and installation engineers, and we’ve kept it small so we have a lot of operating leverage in our company.

“In order to scale our technology, we don’t have to add more people. We can scale it without adding a lot more personnel and a lot more overhead.

“We do have a partner in the intellectual property field, and we have several partners in the installation and support field, representing over 300 installation support personnel.

“And then we also have two software development groups that we manage, and they’re located outside the US.”

Pick and put together

Most of the hardware RVT uses to build a solution – the industrial robotic arm, the camera, lenses, computers, cables, and so on – is from other companies.

“But we have specifically designed an image processor that integrates optimally with our software, so we do have that,” says Weidinger.

“The customer can choose the robot, they can choose the camera, they can choose the lens, and the computer, and our licence controls multiple robots which enables us to do multiple jobs.

“So, for example, one of our manufacturing customers specifies a particular camera and a particular computer.

“And other customers ask, ‘What is your optimal vision system?’

“Then we decide the hardware that we use. But we’re flexible. We can design solutions based on whatever we think is best for them.”

And the company has updated its software to work on a variety of networks, enabling remote access. “You can look in on our platform remotely,” says Weidinger.

“The idea of connectivity and the industrial internet is something we like because our software already operates on networks.

“One of our customers, a large manufacturer, is controlling three or four robots in a single cell on one of our licences, and they’re connected, they talk to each other.

“So we like the connectivity, because we can control multiple robots doing multiple jobs, using multiple vision solutions. So for us, the more connected, the better.”

And as well as the fundamental change that the so-called “Industry 4.0” and its all-encompassing connectivity is bringing about, vision systems are also changing.

“We are seeing more and more 2.5D and 3D,” says Weidinger. “2.5D measures the depth, 3D measures not only the depth but also the tilt.

“We’re all going towards pure automation, but the enabler for pure automation is vision guidance.

“And although these statistics are incredibly hard to determine, we are seeing an increase right now in 3D vision, and that’s what in our opinion is going to drive pure automation.

“But with the increasing availability of processing power, and cheaper processing, better cameras and more knowledge of vision guidance, the market is now accepting it, we feel.

“In 2010, maybe we saw the guidance market at 10 per cent. Now – and this is only my guess – I would say it’s probably a little over 20 per cent.

“But we see it going up to 40 and 50 per cent in the next three to five years.

“The market has really come to accept 3D vision guidance.”
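The 2.5D-versus-3D distinction Weidinger draws can be pictured by what each class of system reports back about a located part. The field names below are illustrative, not RVT’s:

```python
from dataclasses import dataclass, fields

@dataclass
class Result2D:
    """Plain 2D vision: where the part sits in the image plane."""
    x: float
    y: float
    rotation: float  # in-plane rotation only

@dataclass
class Result2_5D:
    """2.5D vision: adds a depth measurement to the 2D position."""
    x: float
    y: float
    z: float         # the measured depth
    rotation: float

@dataclass
class Result3D:
    """Full 3D guidance: depth plus tilt -- all six degrees of freedom."""
    x: float
    y: float
    z: float
    rx: float        # tilt about x
    ry: float        # tilt about y
    rz: float        # in-plane rotation

assert len(fields(Result3D)) == 6  # a complete pose needs six numbers
```

Only the full six-number pose lets a robot approach a part that is not lying flat – which is why tilt is what separates 3D guidance from 2.5D.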

An impossible task

The idea of mapping co-ordinates in an abstract 3D space sounds impossibly complex, and it’s difficult to imagine anyone programming a robot that way instead of using vision systems.

“I think they did it manually,” says Weidinger. “Then they started 2D, and then 2.5D, and now I think the market is accepting, understanding 3D, because it’s being educated in it.

“A lot of this is education and training. As a matter of fact, we think our greatest challenge in the collaborative robots space is to educate people on 3D vision guidance, and then train them on it.

“And I think once you do those two things – education and training – the products are, you know… I mean, we have never had a warranty claim on our software.

“We had some guests in six months ago, and we went and saw a large cell that had been performing with our software for 12 years.

“So this stuff, once it’s designed properly, installed properly and tested properly, it runs and runs flawlessly.”

Weidinger says RVT tends to develop preferences for hardware based on ongoing assessment of how effective and capable they are.

“We have three standard cameras that we like to use, and we have two computer types we like to use. But we prefer to use ours – we’ve designed and built our own image processor.

“Our image processor looks like a little black box that’s self-cooled, fanless, and I can tell you that box is getting smaller and smaller. It contains two boards and plenty of inputs and outputs.

“We have two or three different lens manufacturers we like to use, two or three different cable manufacturers, and we like to stay consistent.

“But as I say, some of our customers dictate the camera and the PC, while others ask, ‘What is your optimal vision system – what do you suggest from top to bottom?’

“And that’s what they purchase and that’s what we install.”

A robot for all reasons

Currently, an industrial robotic arm is typically connected via cables to a computer, often a programmable logic controller, and it’s that computer that acts as its brain and essentially tells it what to do.

That computer is almost always separate because the size of the hardware needed to generate such processing power is too big to build into the robotic arm itself.

But a senior executive at Fanuc predicts that there will soon be a time when processors and computers are small enough to integrate into the robotic arm itself and there will be no need for a separate computer connected via cables.

“That’s the first I’ve heard of that,” says Weidinger. “That would be wonderful.

“From our perspective, these image processors, these PCs if you will, are getting smaller and smaller, and getting more and more capable.

“We recently ordered a computer that’s about 9.5 by 3.5 by 6 that we think fits nicely into the Universal robot controller box.

“So then we would put our piece of hardware inside their robot controller, so it’s one less piece of hardware outside of the robot and a lot less cords.

“It depends on what solution they’re running. With 3D vision, they do [need a large computer], because a lot of power is needed to push a 3D vision application, but Universal is working on that.”

The programming languages RVT’s coders generally use include C++, C#, Python, and several others, and the operating system is usually Windows.

“We have a developer scheme, so a customer can develop their own vision application on their plant floor.

“We are developing a software development kit for customers. So if you’re a robot maker, and you have 2D and 2.5D but don’t have 3D, then what you could do is come to us and buy our SDK, integrate it into your robot, and all of a sudden you’d have your 3D button.”

The company is currently looking to expand in China and says it’s close to finalising some aspects. “We’re very excited about the opportunity,” says Weidinger.

Character limit

Weidinger says he’s a big fan of President Donald Trump’s drive to bring back manufacturing to the US, one of the key issues which probably won him the White House.

“We love it,” says Weidinger. “We think Trump is spot on with manufacturing in the United States, because if you look at the manufacturing sector, there is what they call a multiplier effect to it.

“For every manufacturing job and every manufacturing solution or project, it multiplies – it leads to other wealth creation surrounding it.

“So we think bringing manufacturing back to the United States is a wonderful, excellent idea.

“The other thing is, robotics doesn’t necessarily displace humans. I know a lot’s been written about that – a lot’s been written that it does, and a lot’s been written that it doesn’t.

“We’re kind of in the camp, from what we’ve seen, that says it doesn’t. Sure, at these automakers it’s replaced positions, but those people have been repositioned and retrained.

“And there’s going to be a lot of jobs available for training and programming with these new robots – that’s going to be a whole other field, and that’s going to add workforce as opposed to decrease workforce.

“So we’re all for manufacturing coming back to the United States, and we salute President Trump on that.”