AVATAR Robotic Kiosk Ready To Detect Lies In Travelers

Activist Post Editor’s Note: Despite the happy spin put on this research, this is one of the cornerstones of the new pre-crime world we are embarking upon. It’s an inversion of the American principle that you are innocent until proven guilty in a court of law. Now, an algorithmic overlord will decide who is presumed guilty and should have their travel restricted … or worse. Notice, too, that this system is not intended to stop only at border control — which might be an area that even the liberty-minded could accede to — additional uses mentioned include “law enforcement, job interviews and other human resources applications…” According to these researchers, the unit is ready to deploy; they are merely waiting for it to be accepted by the government. What could possibly go wrong?

“AVATAR has been tested in labs, in airports and at border crossing stations,” Elkins noted. “The system is fully ready for implementation to help stem the flow of contraband, thwart fleeing criminals, and detect potential terrorists and many other applications in the effort to secure international borders.”

The impending evisceration of the global job market by artificial intelligence and robotic automation is well-trodden territory. Various estimates suggest the American job market could shrink by 30% by the year 2025. The United Nations’ assessment is even grimmer: it projects that two-thirds of the human workforce will be replaced in the next decade. Usually, the major sectors included in these loss reports are manufacturing, retail, and blue-collar jobs. However, a new analysis suggests white-collar jobs are not immune, and now the world’s largest hedge fund is replacing its managers with artificial intelligence.

The firm Bridgewater Associates, which manages $160 billion worth of assets, tasked a team of its engineers with creating AI software that can automate decision-making and eliminate emotion from financial analysis. Leading the effort is David Ferrucci, the same man who helmed IBM’s Watson supercomputer, which became famous in 2011 for beating human champions at Jeopardy!

Now Ferrucci is developing the ambitious PriOS management software that Bridgewater anticipates will make three-quarters of its decisions within just five years.

Goodbye privacy, hello Alexa: here’s to Amazon Echo, the home robot who hears it all

Alexa is the name of Amazon’s Echo, a voice-controlled personal assistant. Unlike rivals such as Apple’s Siri, Microsoft’s Cortana and Google Now, it is a physical presence: a 20cm-tall black cylinder, about the size of two Coke cans, which contains Wi-Fi connectivity, two speakers and seven microphones, and connects to the cloud. Priced at $179.99, it sits in your home, plugged into the wall, awaiting commands.

Ellen Ullman, a writer and computer programmer in San Francisco, sounded much more worried. The more the internet penetrates your home, car or body, the greater the danger, she said. “The boundary between the outside world and the self is penetrated. And the boundary between your home and the outside world is penetrated.”

Ullman thinks people are mad to use email supplied by big corporations – “on the internet there is no place to hide and everything can be hacked” – and even madder to embrace something like Alexa.

Such devices exist to supply data to corporate masters: “It’s going to give you services, and whatever services you get will become data. It’s sucked up. It’s a huge new profession, data science. Machine learning. It seems benign. But if you add it all up to what they know about you … they know what you eat.”

Ullman, the author of Close to the Machine: Technophilia and Its Discontents, is no luddite. She writes code. But, she warned, every time we become attached to a device our sense of our lives is changed. “With every advance you have to look over your shoulder and know what you’re giving up – look over your shoulder and look at what falls away.”

Artificial Intelligence and Corporate Control

Amazon, Google and DeepMind, Facebook, IBM, and Microsoft have recently announced their establishment of the Partnership on AI. Given the belief that artificial intelligence technologies can raise “the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change”, the Partnership aims to support “best practices”, “advance understanding”, and “to create an open platform for discussion and engagement” on AI matters.

A Corporate Led Initiative

For those interested in, or concerned by, corporate influence, three aspects of the Partnership’s establishment stand out. First is the Partnership’s explicit suggestion that it “does not intend to lobby government or other policymaking bodies”. Second is the Partnership’s implied suggestion that AI systems currently cannot be understood, interpreted, or explained by the general public (and, in certain instances, by AI professionals). Third is the Partnership’s concern to “educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions”.

Taken together, these considerations suggest that it is the Partnership’s five corporate founders that will be primarily responsible for addressing and defining the AI concerns identified as being of global ethical importance. Indeed, as the five corporate partners are key players in an AI talent grab that some suggest is underpinned by monopolistic intent, it might be argued that there are very few other organizations that could readily identify, let alone comprehend, the full scope of human rights concerns, existential risks, and so on, that AI gives rise to.

Thus, whereas other prominent initiatives concerned with promoting corporate responsibility have generally been led by international organizations (e.g., the United Nations Global Compact), or by market and civil society actors in relative unison (e.g., the Forest Stewardship Council), the Partnership on AI is notable for being clearly led by its founding corporate members.

Some Important Considerations

Nevertheless, the Partnership is keeping up (multi-stakeholder) appearances by suggesting “it will share leadership with independent third-parties”, and by promising that its board will comprise an equal number of corporate and non-corporate members. Some important considerations here, then, are how non-corporate board members are to be selected (presumably by the five founding corporate partners), and who it is that ultimately gets selected.

Given the Partnership’s anti-lobbying intentions – and in light of ongoing revelations regarding state actors and Internet surveillance (think of Snowden’s National Security Agency revelations for example) – it seems unlikely that representatives of state bodies will be included amongst the non-corporate board members. On the other hand, and as the Partnership’s website and initial press release suggest, it does appear likely that non-corporate board members will include academics, along with representatives from relevant professional and scientific organizations.

Whether or not these actors will be meaningfully independent, however, is less than certain. The likes of Google, for example, are not just building up significant AI research teams internally, but are continuing to maintain extensive links with the academic community externally (e.g., through Google’s Faculty Research Awards). As a result, and as leading AI scholar Yoshua Bengio has previously suggested, there is a significant risk that corporate tendencies to short-term thinking and secrecy will saturate the Partnership’s intentions from the start.

Knowledge is Power

Given that concerns regarding corporate overlords are longstanding, it is important to note where the current developments differ: the Partnership’s five founding corporations control so much of the relevant (human) intelligence. Citizens more generally, by contrast, do not possess the basic knowledge required to make any real sense of developments in AI, and are thus at a significant disadvantage when it comes to engaging with and debating the global ethical concerns that AI raises. At the current juncture, then, it appears that most of us can do little more than hope that those with great power in AI matters use it wisely, and that they choose to help educate rather than program us.