Flexibility, Collaboration Key When Building Robots That May Cure Cancer

For biopharmaceutical companies looking to discover the next big breakthrough in the fight against cancer, automating their processes can literally be a matter of life and death. With billions of dollars being spent on drug discovery research, finding ways to automate research more quickly is a high priority. HighRes Biosolutions aims to help pharmaceutical and biotech companies automate laboratory operations in areas that include drug discovery, high-throughput genotyping, sequencing sample preparation, and molecular diagnostics.

Founded in 2004, the Beverly, Mass.-based company offers a modular and flexible docking station and workstation “pod” that allows scientists to swap testing devices in and out in about 30 seconds. This lets researchers deploy automated systems in their labs for multiple use cases by reconfiguring them quickly as needed. It also allows for optimization of equipment and lab space.

Peter Harris, CEO of HighRes Biosolutions

HighRes Biosolutions recently launched a new liquid handling integrated system, called Prime, which provides liquid handling in a vertical orientation and labware storage in a compact space, as well as collaborative features and integration with its modular system.

Robotics Business Review recently spoke with Peter Harris, the CEO of HighRes Biosolutions, about the company, automation trends within the lab space, and his thoughts on artificial intelligence.

Why is it so important for biotech and pharmaceutical companies to automate their processes?

Harris: The first thing to understand here – discovering new therapies is 90% failure. You spend all this time working on things that turn out not to be valuable. So it’s a giant trial-and-error system, done by expensive people in expensive facilities. Before a company launches a new drug, there’s a trail of carnage of things they tried that didn’t work.

Today, when we say a new drug costs about $3 billion to develop, there’s a whole bunch of that money in clinical trials. But even at the research stage, there’s about $1.5 billion spent on all of the stuff that didn’t work in order to get to the one thing that did.

Let’s say there’s a disease or condition that we want to go after, and we identify something in the body – a protein, an enzyme, a type of cell, maybe even a strand of DNA – that’s related to the disease – known as a target.

We also have a giant library of materials — some were made by chemists in our labs, some we found in environments like the Amazon rainforest. We need to know which compounds react with the target.

What we need to do is a giant set of experiments to decide what stuff in our library has a relationship with the target. Once we figure that out, we’re going to run all kinds of experiments to see what that relationship does, what the side effects are, etc. If I have a compound library of 5 million compounds, I have a lot of experiments to run.
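To give a sense of the scale Harris is describing, here is some back-of-the-envelope arithmetic (not from the interview): screening a 5-million-compound library at the well densities of standard microplates still means handling thousands of plates.

```python
# Illustrative only: how many microplates a 5-million-compound screen
# requires at common plate densities (96-, 384-, and 1536-well formats).
LIBRARY_SIZE = 5_000_000

for wells_per_plate in (96, 384, 1536):
    plates = -(-LIBRARY_SIZE // wells_per_plate)  # ceiling division
    print(f"{wells_per_plate}-well plates needed: {plates:,}")
# 96-well plates needed: 52,084
# 384-well plates needed: 13,021
# 1536-well plates needed: 3,256
```

Even at the densest format, that is thousands of plate-handling operations per screen, which is the workload the robotic systems discussed below are built to absorb.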

“Before a company launches a new drug, there’s a trail of carnage of things they tried that didn’t work.”

— Peter Harris, CEO, HighRes Biosolutions

Historically, there was a scientist with a handheld pipette, standing at a lab bench, moving liquid from object to object, and looking at it. That’s slow, it’s totally repetitive, boring, laborious, and it’s pretty inaccurate.

The scientist has to be very precise about repeatability, or you risk propagating bad data down the road in a circumstance where failure is already your primary cost.

So back in the 1980s, high-throughput screening was developed, basically using automation to get a better handle on running many more experiments than scientists could run, using robots in labs.

What’s not well understood by the general public is that these pharmaceutical companies are some of the most sophisticated robotic users around these days. They’ve got giant, complex systems doing incredibly complicated science, all done by robots because they get the benefit of letting the scientists focus on science and not doing manual processes.

How does a robot achieve better precision and accuracy within a lab experiment?

Harris: Robots typically are not faster than people [in this space]; they’re just relentless. The robot does not pause to ask Suzy what she did this weekend. The robot never leaves the sample for 62 seconds in the incubator instead of 57 seconds because it was pausing by accident.
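Harris’s point about relentlessness is really a point about variance. A toy illustration (the numbers here are invented, not measured): a robot holds every incubation to the scheduled duration, while an interruptible human drifts around it.

```python
# Toy illustration of timing repeatability, in seconds.
# A robot repeats the scheduled 57-second incubation exactly;
# a human gets interrupted and drifts. Values are invented.
robot_runs = [57, 57, 57, 57, 57]
human_runs = [57, 62, 55, 60, 58]

def variance(runs):
    mean = sum(runs) / len(runs)
    return sum((r - mean) ** 2 for r in runs) / len(runs)

print(variance(robot_runs))  # 0.0 -- every run identical
print(variance(human_runs))  # nonzero drift around the protocol
```

Zero timing variance is what keeps one experimental run comparable to the next, which matters when, as Harris notes above, bad data propagates downstream.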

Where did the idea for HighRes Biosolutions’ modular systems come from?

Harris: The founder of the company came out of the lab automation space and had an idea to use docking systems to allow these larger, complex robotic systems to be modular.

If you envision a traditional automotive robotic assembly line, you picture a bunch of robots with a lot of hard tooling. Each robot is $60,000, and the tooling and guarding around it is $1 million. You have a monolithic thing that can’t be changed very well. When you decide that what you want to do is make a different car, you have a problem.

Science changes a lot. You think you’re going to do something, and then you end up wanting to do a different sequence of steps. With our systems, you can plug and unplug pieces: You can take one piece required for one set of tasks, unplug it and put in a different piece to run a different set of tasks, just like a modular set of Legos.
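The “Legos” idea can be sketched in code: if every instrument exposes the same interface, a workstation can swap one device for another without touching the rest of the workflow. This is a conceptual sketch only; all class and method names here are hypothetical, not HighRes Biosolutions’ actual software.

```python
# Conceptual sketch of the plug-and-unplug idea: devices share a common
# interface, so the workstation doesn't care which one is docked.
# Every name here is illustrative, not a real HighRes API.
from typing import Protocol


class LabDevice(Protocol):
    name: str
    def run(self, sample: str) -> str: ...


class LiquidHandler:
    name = "liquid-handler"
    def run(self, sample: str) -> str:
        return f"dispensed reagent into {sample}"


class PlateReader:
    name = "plate-reader"
    def run(self, sample: str) -> str:
        return f"read absorbance of {sample}"


class Workstation:
    def __init__(self):
        self.docked = None

    def dock(self, device: LabDevice):
        # Stands in for the ~30-second physical swap described above.
        self.docked = device

    def process(self, sample: str) -> str:
        if self.docked is None:
            raise RuntimeError("no device docked")
        return self.docked.run(sample)


station = Workstation()
station.dock(LiquidHandler())
print(station.process("plate-42"))  # dispensed reagent into plate-42
station.dock(PlateReader())         # reconfigure for a different assay
print(station.process("plate-42"))  # read absorbance of plate-42
```

The design point is that the workflow logic depends only on the shared interface, so changing the sequence of steps means re-docking hardware, not rebuilding the system.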

What challenges do you typically face from customers?

Harris: Most of the challenges lie in interfacing with scientists to work out a translation, from “I’m a scientific thinker” into “We have to have a practical, mechanical thing that does whatever we’re thinking.”

For example, if you were selling to an industrial customer, they’re going to say, “I have a drawing, I want to make this part or put these parts together, and I want to achieve this many per hour in this footprint – what’s that going to cost me?”

Our scientists don’t have a drawing. They have an idea for what they want to test, and they want to be able to test it. They sort of exist at this complex intersection where they want absolute certainty and reliability, but they also want absolute flexibility.

They want to come in and say, “I’ve changed my mind because I’m a scientist, and I want to do something else today.” So we have to build systems that are very robust, stable, and reliable – but also flexible, so if you want to change your workflow, you can get in there and tinker with it.

Are you seeing a shift toward collaborative robots in the lab?

Harris: We absolutely see this shift. We are pushing the shift, because the benefits of collaborative robots are amazingly large. We’ve put tons of systems in the field with [traditional] industrial robots, because frankly, until recently, there weren’t even collaborative robots that could do things with the precision we wanted.

Each square foot of horizontal lab space is extremely valuable to these scientists. So when we can take away the guarding, a few things happen. One is that the system gets smaller, which is great.

Second, the scientists can interact with it, so they can reach in, look at it while it’s running, and make observations. We want the robot to do what the robot does best, and the scientist to do what the scientist does best.

The third thing that happens is that now we can bring robots to places that historically would have been very complicated.

For example, take a cell culture process, where you’re growing certain types of cells for testing. Maybe you only do this now and again, so you don’t have a dedicated space where it takes place all the time – where a fixed, guarded robot system would live. Maybe you want to do a 10-hour cell culture run, but it’s still super boring and expensive for a person.

Now, we can put a robot on a cart, and that robot can live in the corner – you can pull it out and put it there, it can grow cells for you for a while, then it can stop. Collaborative robots are greatly expanding the range of things that can be automated in the lab space.

Where are you seeing AI in this market?

Harris: The world at large uses the terms “robot” and “artificial intelligence” frequently in places where we should just be using the term “software.” Software is replacing many more tasks than mechanical robots are replacing.

Today, there’s a lot of “A” and very little “I.” In order to get intelligence, you need to batter it with tons and tons of data, because it’s not very intelligent. In order to learn, it has to get completely immersed in giant data sets.

For example, let’s say we want to use AI to help us develop new CAR T-cell therapies to go after rare disease states. Well, there isn’t much data on that. To get that data, you need to start doing volumes of experiments to build the intelligence. The more we want software and AI to get more intelligent at analysis, the more data production we need to do, which means the more robotics and automation are in demand.

Prime, the new liquid handling robotic system from HighRes Biosolutions.

AI has reportedly been used to assist in drug-discovery processes. What are you seeing?

Harris: There’s a big debate that people in this area are having: Can we use computing power to model and/or predict and/or solve for what treatments and products we should use? There are a lot of people working on this.

I come from the school that says, “what happens inside a cell, let alone inside a body, is a pretty chaotic process.” But at the same time, there are places where [AI] has a presence. Digital pathology is making a lot of progress. [It’s] using actual machine learning, reading images to do a better job than a lot of pathologists at identifying things like lung cancer.

Finally, what excites you about the work HighRes Biosolutions is doing?

Harris: First and foremost, we make robots that are trying to cure cancer. We get to come to work every day thinking, when we deliver that project, it’s doing something amazing. It’s not like we’re just delivering a toothbrush in a hotel room.

Keith Shaw is the Editor-in-chief for Robotics Business Review. Prior to joining EH Media, he worked as an editor for Network World, Computerworld and various newspapers across Massachusetts, New York, and Florida. He holds a degree in journalism from Syracuse University.