Rolling Out the Robot

It’s a very busy time at my company: one of the kids is about to leave for college.

We’ve been planning this for a while, as you’d expect, but there’s always a last-minute flurry of activity before you get the youngster out the door. And you always worry about the kids doing well when they’re out in the world on their own, even if you’ve trained them the best you can.

OK, in this case the kid is a robot, so it’s not likely to get drunk or post embarrassing photos online. But it could definitely trash people’s experiments or break down just after we shipped it halfway around the world, and we’d all like to avoid that.

This is my second rollout of a new scientific instrument since I joined the company two years ago, and the first where I’m one of the primary developers. Other people in the company handle almost all of the machine design and manufacturing. I write a lot of the firmware – that’s the computer program that runs an electronic device – and I also supervise the biological testing and make a few contributions to the machine itself.

In some ways, the work reminds me very much of writing a major paper or getting a grant application out. Coming up with the idea is the fun part, but (just like in a scientific study!) there are a lot of design changes, unexpected surprises and hard-won victories on the way to the finished product. But there are differences, and one of them is that I’ve been working hard to make myself invisible.

Scientists are never truly invisible. In my career as a research biologist, my bench work and writing were meant to be objective and reproducible, but they still reflected my scientific approach and personality. And that was fine, because it was my name on the paper.

But now, I’m building scientific machines, and my job is to give the user a well-made tool so she can do her own science in her own way. As Kent L. Norman, a user interface pioneer, put it in 1991: “The user is either an extension of the system or the system is an extension of the user.” I’m building the system, so I should make it an extension of the scientist who uses it, not force her to conform to my own preferences and quirks. It turns out that it takes a lot of creativity to not get in the way of other people’s creativity.

The design process has been especially interesting for this product, which is an automated Western blot processor. Early on, we decided that we wanted the user to be able to use his own preferred protocol, rather than having to accept a generic method that we programmed in. It’s that whole “system as an extension of the user” thing: we wanted the machine to act like a robotic version of the experimenter.

I think we put more effort into that function than into all of the rest of the program combined. It is not simple to build a machine that clearly asks the user what he wants, and one of the trickiest parts is to anticipate all of the things the user might want. Especially when dealing with a protocol that is as much art as science. (To get an idea of the variety of Western protocols out there, take a look at https://promo.gelifesciences.com/gl/artofwesternblotting/tips-and-tricks.html.)

Another related problem we had to think about was making sure our machine is robust, and can’t be thrown out of whack if the user presses the wrong button or moves a part in a way we didn’t expect. A good machine is forgiving of the user’s learning curve, and it shouldn’t force the user to learn a bunch of special tricks and workarounds to get it to behave properly.

Unfortunately, none of us on the design team can do a thorough job of this kind of testing. We all know exactly how the device is supposed to work, and so we won’t make mistakes correctly. For example, when I’m setting up one of our machines, I can literally envision the code I am interacting with because I wrote it. I’m not going to do the setup wrong, and I’m definitely not going to be wrong in all of the ways it’s possible to be wrong.

So, over the last several months we have had a team of interns and volunteers try to make our robot unhappy. We introduce them to the machine, give them very minimal instruction (or no instruction at all!) and see what they do. For the blot processor, we’ve had two groups of people in: ones who know how to do a Western but have never seen the machine, and also people who don’t even know what a Western blot is.

The job of the first group is to make sure the processor can do what they think of as a “good Western blot.” The job of the second group is to see if they can break the machine. Can they make the software freeze, get the motors to make horrible grinding noises, or make the pumps spray buffer everywhere? In the early phases, yes, some of them could. (Not anymore, which is why we’re ready for rollout.) I like to think of this as a particularly robust form of peer review.

And here we are. I now feel very much like I did anytime I knew a paper was going to come out: hoping it does well, and looking forward to hearing everyone’s reactions. There will be more problems to fix as we move into production, but we’ve done this before and we know how to handle them.

And, of course, we’re already well into the work on the next product, just like I would have already started work on my next paper back during my research days. Some things never change.