Neural networks don’t understand what optical illusions are

Machine-vision systems can match humans at recognizing faces and can even create realistic synthetic faces. But researchers have discovered that the same systems cannot recognize optical illusions, which means they also can’t create new ones.

Human vision is an extraordinary facility. Although it evolved in specific environments over many millions of years, it is capable of tasks that early visual systems never experienced. Reading is a good example, as is identifying artificial objects such as cars, planes, road signs, and so on.

But the visual system also has a well-known set of shortcomings that we experience as optical illusions. Indeed, researchers have identified many ways in which these illusions cause humans to misjudge color, size, alignment, and movement.

The illusions themselves are interesting because they provide insight into the nature of the visual system and perception. So a way of finding new illusions that probe these limits would be hugely useful.

Concentric circles?

Which is where deep learning comes in. In recent years, machines have learned to recognize objects and faces in images and then to create similar images themselves. So it’s easy to imagine that a machine-vision system ought to be able to learn to recognize illusions and then to create its own.

Enter Robert Williams and Roman Yampolskiy at the University of Louisville in Kentucky. These guys have attempted this feat but found that things aren’t so simple. Current machine-learning systems cannot generate their own optical illusions—at least not yet. Why not?

First some background. The recent progress in deep learning rests on two developments. The first is the availability of powerful neural networks and one or two programming tricks that make them good at learning.

The second is the creation of huge annotated databases that machines can learn from. Teaching a machine to recognize faces, for example, requires many tens of thousands of images containing faces that are clearly labeled. With that information, a neural net can learn to spot characteristic facial patterns—two eyes, a nose, and a mouth, for example. And even more impressive, a pair of them—called a generative adversarial network—can teach each other to create realistic, but totally synthetic, images of faces.
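The adversarial setup described above can be sketched in miniature. The toy below is a hedged illustration, not the researchers' method: it replaces images with 1-D numbers, the generator and discriminator with single affine maps, and all hyperparameters are illustrative choices. The generator learns to produce samples the discriminator cannot tell apart from "real" data drawn from a Gaussian.

```python
# Minimal 1-D GAN sketch (illustrative only; real GANs use deep
# networks on images, and all constants here are arbitrary choices).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator: logistic regression D(x) = sigmoid(dw*x + db),
# trained to output 1 for real samples and 0 for generated ones.
dw, db = 0.1, 0.0
# Generator: affine map G(z) = gw*z + gb from standard-normal noise.
gw, gb = 1.0, 0.0
lr, batch = 0.05, 64

for step in range(1500):
    x_real = rng.normal(4.0, 1.0, batch)   # "real" data: N(4, 1)
    z = rng.normal(size=batch)
    x_fake = gw * z + gb                   # generated data

    # Discriminator step: push D(real) toward 1, D(fake) toward 0.
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(dw * x + db)
        g = p - label                      # d(cross-entropy)/d(logit)
        dw -= lr * np.mean(g * x)
        db -= lr * np.mean(g)

    # Generator step: adjust G so the discriminator scores its
    # samples as real, backpropagating through the discriminator.
    z = rng.normal(size=batch)
    x_fake = gw * z + gb
    p = sigmoid(dw * x_fake + db)
    g_x = (p - 1.0) * dw                   # gradient w.r.t. each sample
    gw -= lr * np.mean(g_x * z)
    gb -= lr * np.mean(g_x)

# After training, generated samples should drift toward the real
# distribution's mean (4.0), even though the generator never sees
# real data directly -- only the discriminator's feedback.
samples = gw * rng.normal(size=1000) + gb
```

The point of the sketch is the feedback loop: each network improves only because the other one does. That is exactly the loop that breaks down for illusions, where the "discriminator" would need to perceive the illusion the way a human does.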

Williams and Yampolskiy set out to teach a neural network to identify optical illusions in the same way. The computing horsepower is easily available, but the necessary databases are not. So the researchers’ first task was to create a database of optical illusions for training.

That turns out to be hard. “The number of static optical illusion images is in the low thousands, and the number of unique kinds of illusions is certainly very low, perhaps only a few dozen,” they say.

That represents a challenge for current machine-learning systems. “Creating a model capable of learning from such a small and limited dataset would represent a huge leap in generative models and understanding of human vision,” they say.

So Williams and Yampolskiy compiled a database of over 6,000 images of optical illusions and then trained a neural network to recognize them. Then they built a generative adversarial network to create optical illusions for itself.

The results were disappointing. “Nothing of value was created after 7 hours of training on an Nvidia Tesla K80,” say the researchers, who have made their database available for others to use.

Nevertheless, this is an interesting result. “The only optical illusions known to humans have been created by evolution (for instance, eye patterns in butterfly wings) or by human artists,” they point out.

In both cases, humans play a crucial role by providing valuable feedback—humans can see the illusion.

But machine-vision systems cannot. “It seems unlikely that [a generative adversarial network] could learn to trick human vision without being able to understand the principles behind these illusions,” say Williams and Yampolskiy.

Machine-made illusions may not be easy to achieve, because there are crucial differences between machine-vision systems and the human visual system. Various researchers are developing neural networks that resemble the human visual system ever more closely. Perhaps an interesting test will be whether they can see illusions or not.

In the meantime, Williams and Yampolskiy are not optimistic. “It seems that a dataset of illusion images might not be sufficient to create new illusions,” they say. So for the moment, optical illusions are a bastion of human experience that machines cannot conquer.
