Didn't Get the Job? The Robots May Not Have Liked Your Social Media Activity

2018 was a dark year for tech, marked by anxiety about surveillance, data collection and artificial intelligence. Picture: iStock

We may remember 2018 as the year when technology’s dystopian potential became clear, from Facebook’s role in enabling the harvesting of our personal data for election interference, to a seemingly unending series of revelations about the dark side of Silicon Valley’s ‘connect-everything’ ethos.

The list is long: Hi-tech tools for immigration crackdowns. Fears of smartphone addiction. YouTube algorithms that steer youths into extremism. An experiment in gene-edited babies.

Doorbells and concert venues that can pinpoint individual faces and alert police. Repurposing genealogy websites to hunt for crime suspects based on a relative’s DNA. Automated systems that keep tabs on workers’ movements and habits. Electric cars in Shanghai transmitting their every movement to the government.

There seems to be no end in sight for smartphone addiction. Picture: iStock

It’s been enough to exhaust even the most imaginative sci-fi visionaries. “It doesn’t so much feel like we’re living in the future now, as that we’re living in a retro-future,” novelist William Gibson wrote this month on Twitter. “A dark, goofy ‘90s retro-future.”

More awaits us in 2019, as surveillance and data-collection efforts ramp up and artificial intelligence systems start sounding more human, reading facial expressions and generating fake video so realistic that malicious distortions of the truth will be harder to detect.

“Something that was heartening this year was that accompanying this parade of scandals was a growing public awareness that there’s an accountability crisis in tech,” said Meredith Whittaker, a co-founder of New York University’s AI Now Institute for studying the social implications of artificial intelligence. The group has compiled a long list of what made 2018 so ominous, though many are examples of the public simply becoming newly aware of problems that have built up for years.

Political data-mining firm Cambridge Analytica gathered the information of millions of Facebook users for the purpose of controlling electoral outcomes. Picture: AP

Among the most troubling cases was the revelation in March that political data-mining firm Cambridge Analytica swept up personal information of millions of Facebook users for the purpose of manipulating national elections.

“It really helped wake up people to the fact that these systems are actually touching the core of our lives and shaping our social institutions,” Ms Whittaker said.

That was on top of other Facebook disasters, including its role in fomenting violence in Myanmar, major data breaches and ongoing concerns about its hosting of fake accounts for Russian propaganda.

It wasn’t just Facebook. Google attracted concern about its continuous surveillance of users after The Associated Press reported that it was tracking people’s movements whether they liked it or not.

It also faced internal dissent over its work with the U.S. military on “computer vision” for drones to help find battlefield targets, and over a secret proposal to launch a censored search engine in China. And it unveiled a remarkably human-like voice assistant that sounded so real that people on the other end of the phone didn’t know they were talking to a computer.

Those and other concerns bubbled up in December as politicians grilled Google CEO Sundar Pichai at a congressional hearing — a sequel to similar public reckonings this year with Facebook CEO Mark Zuckerberg and other tech executives, amid a widening gap of distrust between technology companies and the public.

Internet pioneer Vint Cerf said he and other engineers never imagined their vision of a worldwide network of connected computers would morph 45 years later into a surveillance system that collects personal information or a propaganda machine that could sway elections.

“We were just trying to get it to work,” recalled Mr Cerf, who is now Google’s chief internet evangelist. “But now that it’s in the hands of the general public, there are people who … want it to work in a way that obviously does harm, or benefits themselves, or disrupts the political system. So we are going to have to deal with that.”

Internet pioneers say they never intended connectivity to become a tool for surveillance, propaganda or data mining. Picture: Getty Images

Contrary to futuristic fears of “super-intelligent” robots taking control, the real dangers of our tech era have crept in more prosaically — often in the form of tech innovations we welcomed for making life more convenient.

Part of experts’ concern about the leap into connecting every home device to the internet and letting computers do our work is that the technology is still buggy and influenced by human errors and prejudices.

Uber and Tesla were investigated for fatal self-driving car crashes in March, IBM came under scrutiny for working with New York City police to build a facial recognition system that can detect ethnicity, and Amazon took heat for supplying its own flawed facial recognition service to law enforcement agencies.

“It became obvious to a lot of people that the rhetoric of doing good and benefiting society and ‘Don’t be evil’ was not what these companies were actually living up to,” said Ms Whittaker, who is also a research scientist at Google and founded its Open Research group.

At the same time, even some titans of technology have been sounding alarms. Prominent engineers and designers have increasingly spoken out about shielding children from the habit-forming tech products they helped create.

Microsoft President Brad Smith wants to limit facial recognition technology to stop the year 2024 turning into George Orwell’s 1984. Picture: AFP

And then there’s Microsoft President Brad Smith, who in December called for regulating facial recognition technology so that the “year 2024 doesn’t look like a page” from George Orwell’s 1984.

In a blog post and a Washington speech, Mr Smith painted a bleak vision of all-seeing government surveillance systems forcing dissidents to hide in darkened rooms “to tap in code with hand signals on each other’s arms.” To avoid such an Orwellian scenario, Mr Smith advocates regulating technology so that anyone about to be subjected to surveillance is properly notified. But privacy advocates argue that’s not enough.

Such debates are already happening in states like Illinois, where a strict facial recognition law has faced tech industry challenges, and California, which in 2018 passed the nation’s most far-reaching law to give consumers more control over their personal data. It takes effect in 2020.

“It’s funny in a way because this online environment was supposed to remove friction from our ability to transact,” said Mr Cerf. “If in our desire, if not zeal, to protect people’s privacy we throw sand in the gears of everything, we may end up with a very secure system that doesn’t work very well.”