Even if we don’t yet know how to align Artificial General Intelligences with our goals, we do have experience aligning organizations with our goals. Some argue that corporations are in fact Artificial Intelligences – legally, at least, we already treat them as persons.

The Foresight Institute, along with the Internet Archive, invites you to spend an afternoon examining AI alignment – in particular, whether our interactions with different types of organizations, e.g. our treatment of corporations as persons, offer insights into how to align AI goals with human goals.

While this meeting focuses on AI safety, it draws on philosophy, computer security, and law as well, and should be highly relevant to anyone working in or interested in any of those areas.

Why this is really, really important:

As we learned during last year’s Ethical Algorithms panel, there are many different ways that unchecked black box algorithms are being used against citizens daily.

This kind of software can literally ruin a person’s life through no fault of their own – especially if they are already being discriminated against or profiled unfairly in some way in real life. This is because algorithms tend to amplify and exaggerate any biases already present in the data fed into the system (the data it “learns” from).
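To make that concrete, here is a minimal, hypothetical sketch – all the numbers are made up for illustration – of how even a trivial model trained on biased historical data can amplify that bias rather than merely reproduce it:

```python
from collections import Counter

# Toy "historical" training data: (group, approved) pairs.
# Group A was approved 7 out of 10 times, group B only 4 out of 10 --
# a 30-point gap baked into the data (an assumed, illustrative bias).
train = [("A", 1)] * 7 + [("A", 0)] * 3 + [("B", 1)] * 4 + [("B", 0)] * 6

def fit_majority(data):
    """A naive model: predict the majority historical outcome per group."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority(train)

# The 30-point gap in the data becomes a 100-point gap in predictions:
# every member of A is approved, every member of B is denied.
print(model)  # {'A': 1, 'B': 0}
```

Real systems are far more complex, but the underlying failure mode is the same: the model doesn’t just learn the bias in its training data, it can sharpen it into an absolute rule.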

Algorithms are just one of many tools that an AGI (Artificial General Intelligence) might use in the course of its daily activities on behalf of whatever corporation it operates for.

The danger lies in the potential for these AGIs to make decisions based on faulty interpretations of unchecked black-box algorithmic calculations. For this reason, preserving, and providing public access to, the original data sets used to train these algorithms is of paramount importance. Currently, that just isn’t the case.

The promise of AGIs is downright exciting, but how do we ensure that corporate-driven AGIs do not gain undue control over public systems?

Arguably, corporations have already been given too many rights – rights that rival or surpass those of actual humans at this point.

What happens when these corporate “persons” have AGIs out in the world, interacting with live humans and other AGIs, on a constant basis? (AGIs never sleep.) How many tasks could your AGI do for you while you sleep at night? What instructions would you give your AGI? And whose “fault” is it when the goals of an AGI conflict with those of a living person?

Joi Ito, the Director of the MIT Media Lab, wrote a piece for the ACLU this week concluding that AI Engineers Must Open Their Designs to Democratic Control: “The internet, artificial intelligence, genetic engineering, crypto-currencies, and other technologies are providing us with ever more tools to change the world around us. But there is a cost. We’re now awakening to the implications that many of these technologies have for individuals and society…

“AI is now making decisions for judges about the risks that someone accused of a crime will violate the terms of his pretrial probation, even though a growing body of research has shown flaws in such decisions made by machines,” he writes. “A significant problem is that any biases or errors in the data the engineers used to teach the machine will result in outcomes that reflect those biases…

Researchers at the MIT Media Lab have started to refer to these technologies as “extended intelligence” rather than “artificial intelligence.” “The term ‘extended intelligence’ better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop,” Joi explains.

Sunday’s seminar will discuss all of these ideas and more, working towards a concept called “AI Alignment” – in which corporate-controlled AGIs and humans work toward shared goals.

The problem is that almost all of the AGIs being developed are, in fact, some form of corporate AGI.

That’s why a group of AGI scientists founded OpenCog, to provide a framework that anyone can use.

Aaron Swartz Day is working with OpenCog on building an in-world robot concierge for our VR Destination, and we will be discussing and teaching about the privacy and security considerations of AGI and VR in an educational area within the museum – and of course on this website :-). Also #AGIEthics will be a hackathon track this year, along with #EthicalAlgorithms :-)

So! If this is all interesting to you – PLEASE come on Sunday :-) !

There will also be an Aaron Swartz Day planning meeting – way early this year, because really we never stopped working on the projects from last November – you are gonna love it! The meeting is at the Internet Archive on May 23, 2018 at 6pm. There will be an RSVP soon – but save the date! :-)