Our laws lag, but corporations are not the biggest threat

I'm writing primarily to praise the "view" by Vivek Wadhwa ["Our Lagging Laws", for some reason not available online in its hardcopy July/Aug form, even to subscribers]. In fact, I hope you develop this problem into a full article by one of your investigative reporters in the near future. However, I think his comment about the threat posed by the NSA is misleading. The NSA has (or at least until recently had) full access to the databases of Google, Apple, Facebook and more, so it is inaccurate to say that companies are a greater threat or hold more information.

Also, that suggestion misses an important point that any future article should promote. We are never going to go back to not sharing data, just as our governments are never going to be too weak to break into our homes. But we do not expect our governments (or anyone else) to break into our homes without extraordinarily good reason. Neither should we expect anyone to misuse the data we entrust to them. The law should ensure that all data is obfuscated and anonymised in a way that makes it unusable for anything but its intended purpose, and that no organisation beyond the one we entrust it to ever obtains access to it, even in cases such as bankruptcy, when ordinary assets would pass to new owners. Data (like much commercial software) should be viewed as leased to, not owned by, those we entrust it to.
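To make the idea of purpose-limited obfuscation concrete, here is a minimal sketch in Python of keyed pseudonymisation: each processing purpose gets its own secret key, so identifiers released for one purpose cannot be linked to records released for another. The purpose names and key handling below are illustrative assumptions on my part, not a specific mechanism the law would have to mandate.

```python
# Sketch of purpose-bound pseudonymisation (illustrative only).
# Each purpose has its own secret key held by the data steward, so the
# same person maps to unlinkable tokens under different purposes.
import hmac
import hashlib
import secrets

# Hypothetical per-purpose keys; in practice these would never leave
# the data steward's control.
PURPOSE_KEYS = {
    "billing": secrets.token_bytes(32),
    "analytics": secrets.token_bytes(32),
}

def pseudonymise(user_id: str, purpose: str) -> str:
    """Return a stable pseudonym for user_id, specific to one purpose."""
    key = PURPOSE_KEYS[purpose]
    return hmac.new(key, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    uid = "alice@example.com"
    # Same person, different purposes: the two tokens cannot be joined
    # without access to both secret keys.
    print(pseudonymise(uid, "billing"))
    print(pseudonymise(uid, "analytics"))
```

The point of the sketch is only that linkage, not collection, is the thing to control: whoever holds the keys decides which joins are possible, which is exactly the kind of stewardship obligation the law could attach to leased data.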

Data and AI are growing at such a rate that we are already able to make unprecedentedly good predictions about our own and others' behaviour. This will fundamentally change governance as well as commerce. We need to be moving forward now on accelerating legislation, enforcing current privacy laws, and educating the public.
