The Dawn of Artificial Personality: Data Analytics and Election Engineering

Spring 2016 in Cambridge. I am sitting in the front row at an unusual lecture on Big Data analytics given by Dr. David Stillwell, a young lecturer at the Judge Business School, University of Cambridge. His voice is calm and plain, but what he is saying is fascinating. His research team has recently developed a model capable of scanning any social media user’s digital record, applying machine learning algorithms, and extracting valuable information about habits, behaviours, choices, and personality.

I walk down Trumpington Street, past the Pembroke College porter’s lodge, and turn left onto Mill Lane towards the river. The picturesque sunset and 800 years of history cast a unique charm on this illustrious English city. Over 90 Nobel Prize winners and some of the brightest human minds have walked the same path over the centuries, and I can feel that unique vibe. The magic of Cambridge is neatly captured by a few words welcoming travellers at Platform 1 of Cambridge railway station.

“Dear World, the people who arrive in this city change Cambridge. The ideas that leave this city change the world.”

As I thought about the great potential of Dr. Stillwell’s research and the impact it could have on the digital economy, I knew someday I would hear about it in the news.

Our digital footprint reveals who we are

Let’s go back to 2008. Michal Kosinski, a young Polish researcher, arrives at the University of Cambridge to pursue his Ph.D. at the Psychometrics Centre, joining David Stillwell’s team. The collaboration between the two would turn out to be one of the most fruitful and prolific, with astonishing results strangely overlooked by the public.

“70 Facebook likes were enough to outdo what a person’s friends knew about him, 150 what their parents knew, and 300 likes what their partner knew. More likes could even surpass what a person thought they knew about themselves.”

The accuracy of the predictions is mainly due to the amount of data collected and the advanced statistical analysis behind the model. Underpinning these predictions is the fact that the two Cambridge researchers owned the myPersonality database, which contains more than 6 million data entries and more than 4 million individual Facebook profiles drawn from a vast range of age groups, backgrounds, and cultures.
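The kind of model described here can be sketched in a few lines. The snippet below is a purely illustrative toy, not the researchers’ actual code: it fits a simple linear model mapping a binary user-by-likes matrix to self-reported trait scores, then predicts the trait for a new user from their likes alone. All data, dimensions, and numbers are invented.

```python
import numpy as np
from numpy.linalg import lstsq

# Toy data: 6 users x 5 Facebook pages (1 = user liked the page).
likes = np.array([
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0],
], dtype=float)

# Self-reported trait scores from a questionnaire (the "ground truth").
openness = np.array([0.9, 0.2, 0.7, 0.3, 0.8, 0.1])

# Fit a linear model: each liked page gets a weight.
# In the real studies, more likes per user meant sharper predictions.
X = np.hstack([likes, np.ones((likes.shape[0], 1))])  # add intercept column
weights, *_ = lstsq(X, openness, rcond=None)

# Predict the trait for an unseen user from their likes alone
# (last entry is the intercept slot, always 1).
new_user = np.array([1, 0, 1, 0, 0, 1], dtype=float)
predicted = new_user @ weights
print(round(float(predicted), 2))
```

At real scale the likes matrix is sparse with millions of rows and columns, and regularised models replace the plain least-squares fit, but the principle is the same: likes in, personality scores out.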

As Hannes Grassegger and Mikael Krogerus brilliantly reported in an article in late 2016 in the Zurich-based Das Magazin, “a few weeks after the publication of the research of Kosinski and Stillwell, they would receive two simultaneous phone calls: a threat of a lawsuit and a job offer. Both from Facebook.”

Within a few days, Facebook likes became private by default. Before the study went public, anyone could land on our Facebook page and inspect our likes. Of course, it was too late for the data already sitting in pre-existing databases – in fact, it was already being exploited by private marketing agencies.

Facebook likes are just one small example of the clues we leave behind as we surf and consume the Internet. Think of your search history, browsing habits and frequencies, calendar entries, geolocation, friend networks, purchasing habits. All of this data is a valuable resource in the pockets of those who own it.

The inner secrets of personality

Psychometric theories applied to the analysis of digital records allowed researchers to encode millions of personality nuances of real users (with their consent) into monster correlation matrices: who we are, what interests us, what makes us happy, what scares us, what we would say in certain circumstances, what we would keep secret, what we would buy, when and why.

And nothing prevents us from applying the algorithm the other way around: set up a search for specific profiles and personality traits, then address them with tailored marketing and political messages, amplifying persuasion and trust.

If you know everything about me, you likely know how to convince me.

Coincidences

In 2014, Michal Kosinski was approached by Aleksandr Kogan, an assistant professor at the Psychology Department of the University of Cambridge, who asked for access to his research database, claiming personal interest. Several months later, it became clear that Kogan had been asking on behalf of Strategic Communication Laboratories (SCL), an obscure company focused on “election influencing”. One year earlier, in 2013, SCL had spun off a startup named Cambridge Analytica, which would stay out of the press’s view for a couple more years.

Then, in November 2015, Nigel Farage announced that he had hired a big data analytics company to support the Leave.EU campaign online. That company was Cambridge Analytica, whose value proposition was “innovative political marketing – microtargeting – by measuring people’s personality from their digital footprints”. Do you see any analogies with the predictive model Stillwell and Kosinski had developed a few years earlier in Cambridge?

And there is more.

In October 2016, one month before the American presidential election, Donald Trump tweeted a cryptic post: “Soon you’ll be calling me Mr. Brexit”. Very few connected the dots and realised that his marketing team had just hired a small big data analytics company to support the presidential campaign. Guess who? Cambridge Analytica.

Once the secrets of personality are revealed, and a powerful data analytics model proves that behaviors can be steered and changed, what will be the next step?

We can already see that every new prototype of artificial intelligence focuses on human interfaces and communication. Gradually, machines will pervade all work environments and social spheres, and human-machine integration will only be possible by advancing interface technologies.

In Cambridge, David and Michal discovered an algorithm that could easily decode our personality. Future robots might also want to create their own artificial personality, capable of adapting to any human interlocutor and thus maximizing a specific result, be it a purchase decision, a political vote, or a generic behaviour.

Wouldn’t it be tempting to set up an automatic personality vending machine?

Future machines will be equipped with a customizable artificial personality, whose features will depend on context, users’ preferences, company culture or even specific social and moral principles. In fact, personality compatibility is considered one of the driving factors for productivity and job satisfaction.

Downloadable personality

In January 2015, Google filed a curious patent on the possibility of creating, and storing in the cloud, a customizable set of artificial personalities. The choice of one specific personality can be triggered either by a voluntary command or by checkpoints, or clues, that steer the decision to load one distinct personality interface: things like the time of day, the date, voice tone, the weather, and so on.
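The mechanism the patent describes could look roughly like this: a set of stored personality profiles plus a rule that picks one from context clues. Profile names, attributes, and trigger rules below are all invented for illustration; this is a sketch of the idea, not Google’s implementation.

```python
# A cloud-stored catalogue of selectable personality profiles (hypothetical).
PERSONALITIES = {
    "formal":   {"tone": "polite",   "verbosity": "concise"},
    "cheerful": {"tone": "upbeat",   "verbosity": "chatty"},
    "calm":     {"tone": "soothing", "verbosity": "measured"},
}

def select_personality(hour, voice_tone, weather):
    """Pick a stored personality profile from simple context clues
    (time of day, detected voice tone, weather), as the patent suggests."""
    if voice_tone == "stressed":
        return "calm"        # de-escalate before anything else
    if 9 <= hour < 18:
        return "formal"      # working hours
    if weather == "sunny":
        return "cheerful"
    return "calm"            # default fallback

profile = select_personality(hour=20, voice_tone="relaxed", weather="sunny")
print(profile, PERSONALITIES[profile])  # cheerful {'tone': 'upbeat', ...}
```

A real system would replace these hand-written rules with a learned policy, but the shape is the same: context in, personality interface out.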

An all-too-familiar AI routine: understand, emulate, improve.

Just like industrial automation, an artificial personality would be more effective and efficient than any human counterpart. And to stay competitive, the companies affected will need to adopt it.

And so another prerogative of good old humanity falls apart.

About the author

Ciro Borriello holds a BSc and MSc in aerospace engineering from the Politecnico di Torino and an MBA from the University of Cambridge. Previously an R&D and Innovation Project Manager at Airbus, he is now a Space Programme Management officer at EUMETSAT and a technology entrepreneur.
