Quantum logic could make better robot bartenders

Creating robots with multiple personalities may be one way to make them act more like us.

Multiple personalities may have a reputation as a psychological disorder, but we all have them. Controlled personality switching helps us juggle our daily roles – as workers, parents and spouses, for instance. Robots, in contrast, are usually programmed to respond to situations by following a single set of rules, which makes them inflexible. Quantum logic, with its wide range of apparently random outcomes, could provide the trigger for a personality switch in robots.

Quantum logic is a new type of robotic control, so roboticists needed a task to test its performance. They took the unusual step of including a science fiction story in the process of designing their quantum programming challenge. A story about a robot bartender provided the model of how to test quantum-driven programs with multiple personalities.


In the story, Jimmy the robot dutifully fills his creator’s endless drink orders, even though the customer leaves the glasses untouched. The robot does not think to question the situation until he is taught that he possesses different identities: his bartender persona is obliged to respond to drink orders, but he can also switch to a less constrained persona and ask why a customer would order the same drink, again and again, without taking a sip.

The story, written by Brian David Johnson of chip manufacturer Intel, is a “science fiction prototype”. It starts with current scientific thinking and uses the creative freedom of fiction to explore the implications of technology.

Upon reading the tale, Simon Egerton, a roboticist at Monash University’s Sunway campus in Bandar Sunway, Malaysia, realised that Jimmy’s personality change provided an ideal testing ground for quantum logic programs.

“That inspired me to take that scene in the story and feed it back into the science now,” Egerton says. His competition, set to begin in October, is designed to find a program that eventually questions the customer’s nonsensical drink orders.

Quantum personalities

Typical computer programs execute a sequence of simple yes-or-no decisions using classical logic. Quantum logic expands the number of possible outcomes and sprinkles a touch of randomness into the mix.

In a trick of quantum mysteriousness, a bit of information – called a “qubit” in quantum computing – can occupy several states at once. A qubit running through the personality selection program can hop between robot identities, activating a drink-making bartender, a polite waiter or a child-like, inquisitive persona.

Thus, the same drink order, or input, can result in different personality outputs, such as a question or a sassy refusal to fill the order.

Although each switch happens randomly – which could seem irrational – each personality occurs with a known probability. Over time, with successive interactions between the bartender and the customer, a pattern of personalities emerges. The outcome is “not irrationality, but a very structured way of jumping between different contexts”, says Diederik Aerts of Vrije Universiteit Brussel in Brussels, Belgium, who was not involved with the work.
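The structured randomness Aerts describes can be mimicked classically. Here is a minimal Python sketch in which each drink order “measures” a superposition of personas, collapsing it to one outcome with a fixed probability; the persona names and weights are invented for illustration, and a real quantum controller would use qubit measurements rather than a pseudorandom number generator:

```python
import random

# Hypothetical personas and measurement probabilities; these names and
# weights are illustrative, not taken from the competition's controllers.
PERSONAS = {"bartender": 0.6, "waiter": 0.3, "inquisitive": 0.1}

def select_persona(rng: random.Random) -> str:
    """Collapse the 'superposition' of personas to a single outcome,
    mimicking a quantum measurement with classical randomness."""
    roll = rng.random()
    cumulative = 0.0
    for persona, probability in PERSONAS.items():
        cumulative += probability
        if roll < cumulative:
            return persona
    return persona  # guard against floating-point rounding at the tail

# Each individual switch looks random, but over many drink orders
# a stable pattern of personalities emerges.
rng = random.Random(42)
counts = {p: 0 for p in PERSONAS}
for _ in range(10_000):
    counts[select_persona(rng)] += 1
```

Over 10,000 simulated orders the tallies settle close to the chosen weights, which is the sense in which the switching is “not irrationality, but a very structured way of jumping between different contexts”.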

Egerton and his colleagues plan to release a set of quantum logic controllers for programmers to incorporate into their own bartender programs for the competition. Anyone can log into a virtual bar, assume the character of the customer and evaluate the robot bartenders on their level of independence.

“I don’t know if it’s going to work, or what the results will be,” Egerton says. Perhaps the quantum controllers will match, or even outperform, personality switching programs using traditional logic, he adds.

Others disagree. “Quantum computing does not solve any problems that classical computing cannot tackle now,” says Martin Lukac of Tohoku University in Sendai, Japan. “It just does the calculations faster.”

Free will

Egerton says the quantum controllers could raise the question of the existence of robotic free will. However, even free will in humans is not yet fully understood, responds Aerts.

“I personally think that they are onto something, but I think it is much more complex than they imagine,” he says.

Still, Aerts finds the competition intriguing. “I think science advances with wild ideas,” he says. “And if science advances, many amazing things will be found, much more amazing than science fiction writers can imagine.”

Science fiction offers plenty of warnings about humans losing control of wilful robots. Could this experiment be dangerous?

Egerton suggests science fiction could offer the answer to this question, too. Writer Isaac Asimov’s three laws of robotics – a robot must not harm humans, must obey humans unless that conflicts with the first law, and must protect its own existence unless that conflicts with the other two – could rein in a robot’s free will, Egerton says. “I think that’s one of the many things [Asimov] got right.”