New data analysis competitions

Privacy

Law
enforcement agencies have embraced
facial recognition. And contractors have returned the embrace, offering
up a variety of "solutions" that are long on promise, but short
on accuracy. That hasn't stopped the mutual attraction, as government
agencies are apparently willing to sacrifice people's lives and freedom
during these extended beta tests.

The latest example of widespread failure comes from the UK, where the
government's embrace of surveillance equipment far exceeds that of the
United States. Matt Burgess of Wired obtained
documents detailing the South Wales Police's deployment of automated
facial recognition software. What's shown in the FOI docs should worry
everyone who isn't part of UK law enforcement. (It should worry law
enforcement as well, but strangely does not seem to bother them.)

Do you use Verizon, AT&T, Sprint, or T-Mobile?
If so, your real-time cell phone location data may have been shared
with law enforcement without your knowledge or consent.

How could this happen? Well, a company that provides phone services to
jails and prisons has been collecting location information on all
Americans and sharing it with law enforcement—with little more than a
“pinky promise” from the police that they’ve obtained proper legal process.

Tech

Today we announce Google Duplex, a new technology for conducting natural
conversations to carry out “real world” tasks over the phone. The technology
is directed towards completing specific tasks, such as scheduling certain
types of appointments. For such tasks, the system makes the conversational
experience as natural as possible, allowing people to speak normally, like
they would to another person, without having to adapt to a machine.

The concept comes from researchers at OpenAI, a nonprofit founded by several
Silicon Valley luminaries, including Y Combinator partner Sam Altman,
LinkedIn chair Reid Hoffman, Facebook board member and Palantir founder
Peter Thiel, and Tesla and SpaceX head Elon Musk.

The OpenAI researchers have previously shown that AI systems that train
themselves can sometimes develop unexpected and unwanted habits. For
example, in a computer game, an agent may figure out how to “glitch” its way to a
higher score. In some cases it may be possible for a person to supervise
the training process. But if the AI program is doing something
impossibly complex, this might not be feasible. So the researchers suggest
having two systems discuss a particular objective instead.