In the atrium of the JADS building in ’s-Hertogenbosch, the heart of Data Science research and education of Tilburg University and Eindhoven University of Technology, Eric Postma explains where his fascination for artificial intelligence (AI) stems from. Just how appropriate this location is quickly becomes apparent in the interview. There is no AI without data, that much is clear. And without humans, there is no AI. To Eric, humans are central, and that is at least remarkable for a professor of Artificial Intelligence. ‘But it fits Tilburg University’s unique identity and future perfectly.’

Where did you get your fascination for AI?

“I started out studying physics, but the program did not get me anywhere in my search for answers. I decided to switch to a new program that was being offered in psychology: Cognitive Science. And that turned out to be just right for me. Alongside this program, I took courses in computer science and in the neurosciences, and that is how I grew into AI, the field I got my PhD in.

For me, the driving force has always been the human brain. To be perfectly honest, a computer is to me an interesting instrument, not as a machine per se, but rather as a means to understand humans and the human brain. To arrive at a more complete understanding there, you also need to include how people fit into their environment. And that is where culture, where society comes in. To be able to teach computers certain human skills, you need to improve your understanding of humans. Many of the AI prophets of doom, like Elon Musk for instance, don’t have a clue how complex human intelligence actually is.”

There is also a downside to the new technologies. I am thinking of ‘fake news’ and all the reports in the media on Trump and Brexit. Do you leave ethical questions up to philosophers, or do you see a role for yourself there as well?

“I certainly consider it part of my responsibility to alert people to the impact AI is having on society. It has a lot to do with the way people interact with it. Fake news has been around for as long as humans have, but it has been tremendously amplified by machines. As a society, we need to do something about these excesses. The business model employed by Google and Facebook is focused entirely on getting us to behave in a certain way. Their self-learning algorithms try to influence our clicking behavior. In the near future, we may well be communicating with a friendly virtual human-like face on the screen. Because of our sensitivity to subtle social signals like smiles, we come to regard the face as a virtual friend, as someone you pose questions to. But as far as the big technology companies are concerned, this friendly face is there to push your behavior in a certain direction, to get you to buy products. This is something we are familiar with, because that is also what happens when you are served in a store, but the computer-generated face manipulates us on the basis of a self-learning algorithm that stimulates purchasing behavior and can even influence our opinions. People should be aware of this. I want to show them how the mechanism works.

Ironically, AI itself can provide answers to the downsides of AI. There are people in our group who are working on that: AI can be employed to protect privacy, for instance.”

Can you tell us what research you are currently engaged in?

“At the moment, I am mainly involved in the workings of the technology responsible for the current AI revolution: deep learning. This technique makes it possible to train computers to recognize objects and patterns, for example, or to predict whether or not people will get a loan. In the latter example, it is very difficult to explain exactly how the computer arrives at a prediction. But this really applies to people as well: they take decisions and are then asked to explain why. This explanation is often a reconstruction that manages to convince rather than an explanation that is factually correct. People often do not know why they take a particular decision. Deep learning involves a very complicated mathematical formula, which somehow needs to be translated into terms people can understand.
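The point about the “complicated mathematical formula” can be made concrete with a toy example. The sketch below is not from the interview: it uses made-up, untrained weights in a hypothetical two-layer network that scores a loan applicant, and it illustrates why even a tiny model’s prediction is a nested composition of weighted sums and nonlinearities rather than a human-readable rule.

```python
import math

# Illustrative, untrained weights for a hypothetical loan-scoring network.
W1 = [[0.8, -0.5],   # hidden unit 1: weights for (income, debt)
      [0.3,  0.9]]   # hidden unit 2
W2 = [0.6, -0.4]     # output weights over the two hidden units

def predict(features):
    """Return a score in (0, 1) for a feature vector (income, debt)."""
    # Hidden layer: weighted sums passed through a ReLU nonlinearity.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)))
              for row in W1]
    # Output layer: another weighted sum, squashed to (0, 1) by a sigmoid.
    score = sum(w * h for w, h in zip(W2, hidden))
    return 1.0 / (1.0 + math.exp(-score))

# Two hypothetical applicants: (normalized income, normalized debt).
print(predict([1.0, 0.2]))  # high income, low debt
print(predict([0.2, 1.0]))  # low income, high debt
```

Even in this two-unit toy, explaining why one applicant scores higher than the other means disentangling the contribution of every weight through the nonlinearity; a real deep network composes millions of such weights across many layers, which is exactly the transparency problem described above.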

The ways in which deep learning can be employed are limitless. We will be seeing all kinds of applications in the next ten years. Think of automatic voice recognition, recognition of emotions, text analysis, recognition of paintings, recognition of exoplanets, recognition of skin cancer or breast cancer. In all cases where large amounts of data are available, systems can be trained to recognize patterns.

I collaborate a lot with companies and with the public sector. Within the framework of Mind Labs, located in the Deprez building next to Tilburg Central Station, our department, under the supervision of Max Louwerse, is working together with national and international companies. Within JADS, I am collaborating with KPN. This latter collaboration mainly concerns the transparency of algorithms.”

You would expect to find a professor of AI at a university of technology. Do you nevertheless feel you are in the right place at Tilburg University?

“As a university of humanities and social sciences, Tilburg has a unique profile, but it does need to move with the times. Particularly in a rapidly digitalizing society, it can make a difference by strategically reinforcing its technical profile and bridging the gap between the humanities and social sciences on the one hand and the technical sciences on the other. It is for that reason that we have created the Cognitive Science & AI program and that we coordinate the Data Science for Society Master’s program. It is a good thing that Tilburg is participating and investing in initiatives such as JADS ’s-Hertogenbosch and Mind Labs. Seeing as I am into AI first and foremost to understand humans, this is the right place for me to be.”