from the don't-be-evil,-but-AI-first? dept

The present global verve about artificial intelligence (AI) and machine learning technologies has resonated in China as much as anywhere on earth. With the State Council’s issuance of the "New Generation Artificial Intelligence Development Plan" on July 20 [2017], China's government set out an ambitious roadmap including targets through 2030. Meanwhile, in China's leading cities, flashy conferences on AI have become commonplace. It seems every mid-sized tech company wants to show off its self-driving car efforts, while numerous financial tech start-ups tout an AI-driven approach. Chatbot startups clog investors' date books, and Shanghai metro ads pitch AI-taught English language learning.

That's from a detailed analysis of China's new AI strategy document, produced by New America, which includes a full translation of the development plan. Part of AI's hotness is driven by all the usual Internet giants piling in with lots of money to attract the best researchers from around the world. One of the companies that is betting on AI in a big way is Google. Here's what Sundar Pichai wrote in his 2016 Founders' Letter:

Looking to the future, the next big step will be for the very concept of the "device" to fade away. Over time, the computer itself -- whatever its form factor -- will be an intelligent assistant helping you through your day. We will move from mobile first to an AI first world.

In line with that, Google has now announced that it is opening a new AI research center in Beijing. Here is how the company's announcement describes it:

This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China's strong engineering teams.

So far, so obvious. But an interesting article on the Macro Polo site points out that there's a problem with AI research in China. It flows from the continuing roll-out of intrusive surveillance technologies there, as Techdirt has discussed in numerous posts. The issue is this:

Many, though not all, of these new surveillance technologies are powered by AI. Recent advances in AI have given computers superhuman pattern-recognition skills: the ability to spot correlations within oceans of digital data, and make predictions based on those correlations. It's a highly versatile skill that can be put to use diagnosing diseases, driving cars, predicting consumer behavior, or recognizing the face of a dissident captured by a city's omnipresent surveillance cameras. The Chinese government is going for all of the above, making AI core to its mission of upgrading the economy, broadening access to public goods, and maintaining political control.

As the Macro Polo article notes, Google is unlikely to allow any of its AI products or technologies to be sold directly to the authorities for surveillance purposes. But there are plenty of other ways in which advances in AI produced at Google's new lab could end up making life for Chinese dissidents, and for ordinary citizens in Xinjiang and Tibet, much, much worse. For example, the fierce competition for AI experts is likely to see Google's Beijing engineers headhunted by local Chinese companies, where knowledge can and will flow unimpeded to government departments. Although arguably Chinese researchers elsewhere -- in the US or Europe, for example -- might also return home, taking their expertise with them, there's no doubt that the barriers to doing so are higher in that case.

So does that mean that Google is wrong to open up a lab in Beijing, when it could simply have expanded its existing AI teams elsewhere? Is this another step toward re-entering China after it shut down operations there in 2010 over the authorities' insistence that it should censor its search results -- which, to its credit, Google refused to do? "AI first" is all very well, but where does "Don't be evil" fit into that?

from the perhaps-AI-can-help-us-deal-with-AI dept

Most people don't understand the nuances of artificial intelligence (AI), but at some level they comprehend that it'll be big and transformative, and that it will cause disruptions across multiple sectors. And even if AI proliferation won't lead to a robot uprising, Americans are worried about how AI and automation will affect their livelihoods.

Recognizing this anxiety, our policymakers have increasingly turned their attention to the subject. In the 115th Congress, there have already been more mentions of “artificial intelligence” in proposed legislation and in the Congressional Record than ever before.

While not everyone agrees on how we should approach AI regulation, one approach that has gained considerable interest is augmenting the federal government's expertise and capacity to tackle the issue. In particular, Sen. Brian Schatz has called for a new commission on AI, and Sen. Maria Cantwell has introduced legislation setting up a new committee within the Department of Commerce to study and report on the policy implications of AI.

This latter bill, the “FUTURE of Artificial Intelligence Act” (S. 2217/H.R. 4625), sets forth a bipartisan proposal that seems to be gaining some traction. While the bill's sponsors should be commended for taking a moderate approach in the face of growing populist anxiety, it's not clear that the proposed advisory committee would be particularly effective at all it sets out to do.

One problem with the bill is how it sets the definition of AI as a regulatory subject. For most of us, it's hard to articulate precisely what we mean when we talk about AI. The term “AI” can describe a sophisticated program like Apple's Siri, but it can also refer to Microsoft's Clippy, or pretty much any kind of computer software.

It turns out that AI is a difficult thing to define, even for experts. Some even argue that it's a meaningless buzzword. While this is a fine debate to have in the academy, prematurely enshrining a definition in a statute – as this bill does – is likely to be the basis for future policy (indeed, another recent bill offers a totally different definition). Down the road, this could lead to confusion and misapplication of AI regulations. This provision also seems unnecessary, since the committee is empowered to change the definition for its own use.

The committee's stated goals are also overly ambitious. In the course of a year and a half, it would set out to “study and assess” over a dozen different technical issues, from economic investment, to worker displacement, to privacy, to government use and adoption of AI (although, notably, not defense or cyber issues). These are all important issues. However, the expertise required to deal adequately with these subjects is likely beyond the capabilities of the committee's 19 voting members, only five of whom are academics. While the committee could theoretically choose to focus on a narrower set of topics in its final report, this structure is fundamentally not geared towards producing the sort of deep analysis that would advance the debate.

Instead of trying to address every AI-related policy issue with one entity, a better approach might be to build separate, specialized advisory committees based in different agencies. For instance, the Department of Justice might have a committee on using AI for risk assessment, the General Services Administration might have a committee on using AI to streamline government services and IT infrastructure, and the Department of Labor might have a committee on worker displacement caused by AI and automation or on using AI in employment decisions. While this approach risks some duplicative work, it would also be much more likely to produce deep, focused analysis relevant to specific areas of oversight.

Of course, even the best public advisory committees have limitations, including politicization, resource constraints and compliance with the Federal Advisory Committee Act. However, not all advisory bodies have to be within (or funded by) government. Outside research groups, policy forums and advisory committees exist within the private sector and can operate beyond the limitations of government bureaucracy while still effectively informing policymakers. Particularly for those issues not directly tied to government use of AI, academic centers, philanthropies and other groups could step in to fill this gap without any need for new public expenditures or enabling legislation.

If Sen. Cantwell's advisory committee-focused proposal lacks robustness, Sen. Schatz's call for creating a new “independent federal commission” with a mission to “ensure that AI is adopted in the best interests of the public” could go beyond the bounds of political possibility. To his credit, Sen. Schatz identifies real challenges with government use of AI, such as those posed by criminal justice applications and by the need to coordinate between different agencies. These are real issues that warrant thoughtful solutions. Nonetheless, the creation of a new agency for AI is likely to run into a great deal of pushback from industry groups and the political right (like similar proposals in the past), making it a difficult proposal to move forward.

Beyond creating a new commission or advisory committees, the challenge of federal expertise in AI could also be substantially addressed by reviving Congress' Office of Technology Assessment (which I discuss in a recent paper with Kevin Kosar). Reviving OTA has a number of advantages: OTA ran effectively for years and still exists in statute, it isn't a regulatory body, it is structurally bipartisan and it would have the capacity to produce deep-dive analysis in a technology-neutral manner. Indeed, there's good reason to strengthen the First Branch first, since Congress is ultimately responsible for making the legal frameworks governing AI as well as overseeing government usage.

Lawmakers are right to characterize AI as a big deal. Indeed, there are trillions of dollars in potential economic benefits at stake. While the instincts to build expertise and understanding first make for a commendable approach, policymakers will need to do it the right way – across multiple facets of government – to successfully shape the future of AI without hindering its transformative potential.

from the alexa,-subscribe-and-share-too dept

Always-on, voice-operated assistants are on the rise, and most of the industry seems to have agreed that Amazon's Alexa is at the top of the pack. Podcast host Dennis Yang was and is an early adopter of these devices, so this week he's brought along Alexa, Google Now and Siri as guests for a discussion about the future of this technology.

from the urls-we-dig-up dept

We have computers that can beat us at games like chess and Go (and Jeopardy!), but we haven't seen too many robots that can beat humans at more physical sports like soccer or tennis. We've seen some air hockey robots that are nearly unbeatable, so it's really only a matter of time before robots learn how to play sports with a few more dimensions. Here are some badminton robots that are inching toward playing better than some of us.

Badminton robots are slowly getting better. This robot has binocular vision from two cameras and was built by students at the University of Electronic Science and Technology of China. However, it cheats a little bit by using two rackets....
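
The "binocular vision" bit is just classic stereo geometry: once the shuttlecock is spotted in both camera images, its distance falls out of the pixel offset between the two views. Here's a minimal sketch of that calculation in Python; the focal length, baseline and function name are hypothetical stand-ins, not anything from the students' actual system.

```python
def stereo_depth(x_left, x_right, focal_px=1200.0, baseline_m=0.3):
    """Depth from a rectified stereo pair: Z = f * B / disparity."""
    disparity = x_left - x_right          # pixel offset between the two views
    if disparity <= 0:
        raise ValueError("target must appear further left in the left image")
    return focal_px * baseline_m / disparity

# Hypothetical example: shuttlecock at column 640 in the left image, 600 in the right
print(round(stereo_depth(640, 600), 2), "meters")  # -> 9.0 meters away
```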

from the urls-we-dig-up dept

Robots are getting better at performing complex tasks all the time. It won't be too long before they can drive cars and deliver packages (and replace about a quarter of a million human workers who drive for UPS/FedEx/USPS/etc). The technology isn't quite there yet, but it doesn't seem to be too far off in the future. We're still nowhere near the Rosie the Robot servant predicted in the 1960s, but we're getting closer. Check out these marginally helpful robots for the home that could beat flying cars and pneumatic tube transportation to becoming a reality.

from the urls-we-dig-up dept

The old Garbage In, Garbage Out (GIGO) principle originated in the early days of computing, but it may be even more applicable today. With the explosion of data that can now be collected, there's a temptation to assume that analyses and meta-analyses can make sense of it all and produce incredible insights. However, we should probably have some skepticism before we jump into the deep end of data and expect miraculous results.
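
To make the GIGO point concrete, here's a minimal sketch (my own illustration, not from any of the linked posts) of how purely random data will still cough up impressive-looking correlations once you collect enough variables -- exactly the kind of "insight" that a little skepticism should catch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 50, 1000

X = rng.normal(size=(n_samples, n_features))   # pure-noise "measurements"
y = rng.normal(size=n_samples)                 # pure-noise "outcome"

# Correlation of each noise feature with the noise outcome
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
print("strongest spurious correlation:", round(float(np.abs(corrs).max()), 2))
# Typically prints a value around 0.4-0.5: impressive-looking, and meaningless.
```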

from the urls-we-dug-up dept

It's a source of wonder and excitement for some, panic and concern for others, and a whole lot of cutting edge work for the people actually making it happen: artificial intelligence, the end-game for computing (and, as some would have you believe, humanity). But when you set aside the sci-fi predictions, doomsday warnings and hypothetical extremes, AI is a real thing happening all around us right now — and achieving some pretty impressive feats:

from the urls-we-dig-up dept

The accomplishments of artificial intelligence are making it a popular topic in the news again, both for its wins and its (apparent) failures. General artificial intelligence hasn't quite lived up to its full potential yet, but more open source AI projects could help speed up development. Here are just a few reminders that open source AI projects are making progress -- hopefully towards a more 'John Henry' type of AI and less of a scary Skynet program.

from the urls-we-dig-up dept

In case you missed it, humanity has been dealt a decisive intellectual blow by a Go-playing computer program called AlphaGo. We mentioned AlphaGo back in January when Google announced that it had defeated European Go champion Fan Hui and was challenging Lee Sedol next. So now that the results are in, AlphaGo has shown the world that artificial intelligence can best the best of humanity at our most difficult games. We've seen this already with chess, and if you don't remember, people tried to make a variant of chess called Arimaa that humans could hold up as a game where people could still beat computers (ahem, that didn't work). We still have Calvinball, Diplomacy and certain forms of poker....

from the urls-we-dig-up dept

The rise of fantasy sports and realistic video games for every major sport has enormously expanded audiences and engagement. Even if you can't throw a spiral, you can still manage a fantasy football team. Sabermetrics changed baseball, and deep learning algorithms are about to change how a lot of other sports are played. Computers aren't just going to beat people at chess and Go. They might become better talent scouts and strategists for every major sport.