from the I,-for-one,-welcome-our-new-AI-chatbot-overlords dept

As artificial intelligence (AI) finally begins to deliver on promises the field has repeatedly broken over the last forty years, there's been some high-profile hand-wringing about the risks from the likes of Stephen Hawking and Elon Musk. It's always wise to be cautious, but surely even AI's fiercest critics would find it hard not to like the following small-scale application of the technology to tackle the problem of phishing scams. Instead of simply deleting the phishing email, you forward it to a new service called Re:Scam, and the AI takes over. The aim is to waste the time of scammers by engaging them with AI chatbots, so as to reduce the volume of phishing emails they can send and follow up on:

When you forward an email you believe to be a scam to me@rescam.org, a check is done to make sure it is a scam attempt, and then a proxy email address is used to engage the scammer. This will flood their inboxes with responses without any way for them to tell who is a chat-bot, and who is a real vulnerable target. Once you've forwarded an email nothing more is required on your part, but the more you send through, the more effective it will be.

Here's how the AI is applied:

Re:scam can take on multiple personas, imitating real human tendencies with humour and grammatical errors, and can engage with infinite scammers at once, meaning it can continue an email conversation for as long as possible. Re:scam will turn the tables on scammers by wasting their time, and ultimately damage scammers' profits.
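Netsafe hasn't published Re:Scam's internals, but the behavior described above -- verify the email is a scam, adopt a persona, and stall indefinitely -- is easy to sketch. Here's a toy Python illustration; every persona, check, and canned reply below is invented, not the actual service:

    import random

    # Invented personas with deliberate typos, per the "humour and
    # grammatical errors" described above.
    PERSONAS = {
        "retiree": [
            "Sorry dear, which bank did you say this was agian?",
            "My grandson normally helps me with the computer, can you wait?",
        ],
        "busy manager": [
            "Your attachment didnt open on my phone. Resend?",
            "Is there a number I can reach you on? Email is slow for me.",
        ],
    }

    def looks_like_scam(email_text):
        # Stand-in for the service's check that the forwarded email
        # really is a scam attempt; a real classifier would live here.
        return "verify your account" in email_text.lower()

    def engage(email_text):
        if not looks_like_scam(email_text):
            return None
        persona = random.choice(list(PERSONAS))
        # Every reply is a stall or a question, never a useful detail,
        # so each one costs the scammer another round-trip.
        return random.choice(PERSONAS[persona])

    print(engage("URGENT: please verify your account details"))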

When you send emails to Re:Scam, it not only ties up the scammers in fruitless conversations, it also helps to train the underlying AI system. The service doesn't require any sign-up -- you just forward the phishing email to me@rescam.org -- and there's no charge. Re:Scam comes from Netsafe, a well-established non-profit online safety organization based in New Zealand, which is supported by government bodies there. It's a nice idea, and it would be interesting to see it applied in other situations. That way we could enjoy the benefits of AI for a while, before it decides to kill us all.

Some have worried about very broad patents being issued in the AI space. For example, Google has a patent on a common machine learning technique called dropout. This means that Google could insist that no one else use this technique until 2032. Meanwhile, Microsoft has a patent application with some very broad claims on active machine learning (the Patent Office recently issued a non-final rejection, though the application remains pending and Microsoft will have the opportunity to argue why it should still be granted a patent). Patents on fundamental machine learning techniques have the potential to fragment development and hold up advances in AI.

As a subset of software development, AI patents are likely to raise many of the same problems as software patents generally. We've noted, for example, that many software patents take the form: apply well-known technique X in domain Y. Our Stupid Patent of the Month from January 2015 applied the years-old practice of remotely updating software to sports video games (the patent was later found invalid). Other patents have computers do incredibly simple things like counting votes or counting calories. We can expect the Patent Office to hand out similar patents on using machine learning techniques in obvious and expected ways.

Indeed, this has already happened. Take U.S. Patent No. 5,944,839, for a "system and method for automatically maintaining a computer system." This patent includes very broad claims applying AI to diagnosing problems with computer systems. Claim 6 of this patent states:

A method of optimizing a computer system, the method comprising the steps of:

detecting a problem in the computer system;

activating an AI engine in response to the problem detection;

utilizing, by the AI engine, selected ones of a plurality of sensors to gather information about the computer system;

determining, by the AI engine, a likely solution to the problem from the gathered information; and

when a likely solution cannot be determined, saving a state of the computer system.

Other than the final step of saving the state of the computer where a solution cannot be found, this claim essentially covers using AI to diagnose computer problems. (The claim survived a challenge before the Patent Trial and Appeal Board, but the Federal Circuit recently ordered [PDF] that the Board reconsider whether prior art, in combination, rendered the claim obvious.)
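To see just how broad that is, note that a couple dozen lines of Python arguably walk through every step of the claim. The following is an invented, deliberately trivial illustration, not anyone's actual product:

    # Each function tracks a step of Claim 6 one-for-one.
    import json
    import shutil

    def problem_detected():
        # "detecting a problem in the computer system"
        return shutil.disk_usage("/").free < 10**9

    def gather_information():
        # "utilizing ... a plurality of sensors to gather information"
        usage = shutil.disk_usage("/")
        return {"free_bytes": usage.free, "total_bytes": usage.total}

    def ai_engine(readings):
        # "determining ... a likely solution" -- a single rule stands in
        # for the claimed "AI engine"
        if readings["free_bytes"] < 10**9:
            return "delete temporary files"
        return None

    if problem_detected():
        info = gather_information()
        solution = ai_engine(info)
        if solution is None:
            # "when a likely solution cannot be determined, saving a
            # state of the computer system"
            with open("state.json", "w") as f:
                json.dump(info, f)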

A more recent patent raises similar concerns. U.S. Patent No. 9,760,834 (the '834 patent), owned by Hampton Creek, Inc., relates to using machine learning techniques to create models that can be used to analyze proteins. This patent is quite long, and its claims are also quite long (which makes infringement easier to avoid, since every limitation of a claim must be met for the claim to be infringed). But the patent still reflects a worrying trend. In essence, Claim 1 of the patent amounts to 'do machine learning on this particular type of application.' Indeed, during the prosecution of the patent application, Hampton Creek argued [PDF] that prior art could be distinguished because it merely described applying machine learning to "assay data" rather than explicitly applying the techniques to protein fragments.

More specifically, the patent follows Claim 1 with a variety of subsequent claims that amount to 'when you're doing that machine learning from Claim 1, use this particular well-known, pre-existing machine learning algorithm.' Indeed, in our opinion the patent reads like the table of contents of an intro to AI textbook. It covers using just about every standard machine learning technique you'd expect to learn in an intro to AI class -- including linear and nonlinear regression, k-nearest neighbor, clustering, support vector machines, principal component analysis, feature selection using lasso or elastic net, Gaussian processes, and even decision trees -- but applied to the specific example of proteins and data you can measure about them. Certainly, applying these techniques to proteins may be a worthwhile and time-consuming enterprise. But that does not mean it deserves a patent. A company should not get a multi-year monopoly on applying well-known techniques in a particular domain where there was no reason to think they couldn't be used, even if it was the first to apply them there. A patent like this doesn't really bring any new technology to the table; it simply limits the areas in which an existing tool can be used. For this reason, we are declaring the '834 patent our latest Stupid Patent of the Month.
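To underline how off-the-shelf these techniques are, each one is a one-liner in a standard library like scikit-learn. The sketch below is purely illustrative, with random numbers standing in for protein measurements:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.linear_model import ElasticNet, Lasso, LinearRegression
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.svm import SVR
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))  # pretend: measured protein features
    y = rng.normal(size=100)       # pretend: a property to predict

    # Every technique recited in the claims, straight off the shelf:
    for model in [
        LinearRegression(),          # linear regression
        KNeighborsRegressor(),       # k-nearest neighbor
        SVR(),                       # support vector machine (nonlinear)
        Lasso(), ElasticNet(),       # feature selection via lasso/elastic net
        GaussianProcessRegressor(),  # Gaussian processes
        DecisionTreeRegressor(),     # decision trees
    ]:
        model.fit(X, y)

    KMeans(n_clusters=3, n_init=10).fit(X)  # clustering
    PCA(n_components=2).fit(X)              # principal component analysis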

In fairness, the '834 patent is not as egregious as some of the other patents we have selected for this dubious 'honor.' But we still think the patent is worth highlighting in this series because the problems similar patents could create for innovation and economic progress might be much more serious. Handing out patents on using well-known machine learning techniques but limited to a particular field merely encourages an arms race where everyone, even companies doing routine development, attempts to patent their work. The end result is a minefield of low-quality machine learning patents, each applying the entire field of machine learning to a niche sub-problem. Such an environment will fuel patent trolling and hurt startups that want to use machine learning as a small part of the larger novel technologies they want to bring to market.

We recently launched a major project monitoring advances in artificial intelligence and machine learning. As we pursue this project, we'll also monitor patenting in AI and try to gauge its impact on progress.

from the promote-the-progress dept

As the march of robotics and artificial intelligence continues, it seems that questions about the effects of this progress will only increase in number and intensity. Some of these questions are very good. What effect will AI have on employment? What safeguards should be put in place to neuter AI and robotics and keep humankind the masters in this relationship? These are questions soon to break through the topsoil of science fiction and into the sunlight of reality, and we should all be prepared with answers to them.

Other questions are less useful and, honestly, far easier to answer. One that continues to pop up every now and again is whether machines and AI that manage some simulacrum of creativity should be afforded copyright rights. It's a question we've answered before, but which keeps being asked aloud with far too much sincerity.

This isn't just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author. This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn't protected by law and can be used without payment by anyone in the world.

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

Let's get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been understood as the impulse to create in the first place. This isn't a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.

That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those that create art, rather than what it was actually meant to be: a boon granted to an artist as an incentive for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. And the future actions of the artist are the only item on the agenda for copyright's purpose. If receiving a copyright wouldn't spur an AI to create more art beneficial to the public, then copyright ought not to be granted.

To be fair to the Phys.org link above, it ultimately reaches the same conclusion.

The most sensible move seems to follow those countries that grant copyright to the person who made the AI's operation possible, with the UK's model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.

Except for two things. First, seriously debating the rights of computers compared with people is exactly what the post is doing by giving oxygen to the question of whether computers ought to get one of those rights in the form of copyright. Second, the UK's method isn't without flaw, either. Again, we're talking about the purpose being the ultimate benefit to the public in the form of more artistic output, but the UK's way of doing things divorces artistic creation from copyright. Instead, it awards copyright to the creator of the creator, which might spur more output of more AI creators, but how diverse an artistic output is the public going to receive from an army of AI? We might be able to have a legitimate argument here, but there is a far simpler solution.

Machines don't get copyright, nor do their creators. Art made by enslaved AI is art to be enjoyed by all.

from the and-others'-jobs-as-well dept

There's been an awful lot of talk these days about how the machines (and "AI") are coming to take all of our jobs. While I'm definitely of the opinion that the coming changes are likely to be quite disruptive, many of the doom and gloom scenarios are overblown, in that they focus solely on what may be going away, rather than what may be gained. If there's anyone out there who might be forgiven for worrying the most about computers "taking over," it would be Garry Kasparov, the famed chess champion who took on the Deep Blue chess-playing computer and lost back in 1997. However, in a new (possibly paywalled) WSJ piece, Kasparov more or less explains how, even now as AI is moving into all sorts of fields previously thought safe from automation, he's come to embrace the possibilities, rather than fear the losses:

It is no secret that I hate losing, and I did not take [losing to Deep Blue] well. But losing to a computer wasn’t as harsh a blow to me as many at the time thought it was for humanity as a whole. The cover of Newsweek called the match “The Brain’s Last Stand.” Those six games in 1997 gave a dark cast to the narrative of “man versus machine” in the digital age, much as the legend of John Henry did for the era of steam and steel.

But it’s possible to draw a very different lesson from my encounter with Deep Blue. Twenty years later, after learning much more about the subject, I am convinced that we must stop seeing intelligent machines as our rivals. Disruptive as they may be, they are not a threat to humankind but a great boon, providing us with endless opportunities to extend our capabilities and improve our lives.

There's a lot more in the essay, but basically Kasparov recognizes that there's tremendous opportunity in looking at what smarter machines can actually do to help more and more people:

What a luxury to sit in a climate-controlled room with access to the sum of human knowledge on a device in your pocket and lament that we don’t work with our hands anymore! There are still plenty of places in the world where people work with their hands all day, and also live without clean water and modern medicine. They are literally dying from a lack of technology.

And, towards the end, he notes that while there may not be easy answers, there are plenty of opportunities. While many people today insist that since they cannot think of what the new jobs will be, there can't possibly be any, the reality is that just a few decades ago, you would probably not have been able to predict many of today's internet/tech related jobs. And Kasparov is optimistic that freeing us up from more menial jobs may open up much greater opportunities for people to put their minds to work:

Compare what a child can do with an iPad in a few minutes to the knowledge and time it took to do basic tasks with a PC just a decade ago. These advances in digital tools mean that less training and retraining are required for those whose jobs are taken by robots. It is a virtuous cycle, freeing us from routine work and empowering us to use new technology productively and creatively.

Machines that replace physical labor have allowed us to focus more on what makes us human: our minds. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer—or even playing chess.

I am sure that some will dismiss this as a retread of techno-utopianism, but I think it's important for people to focus more broadly on understanding these changes. That doesn't mean ignoring or downplaying the disruption for those whose lives it will certainly impact, but so much of the discussion has felt like people throwing up their hands helplessly. There will be opportunities for new types of work, but part of that is having more people thinking through these possibilities and building new companies and services that recognize this future. Even if you can't predict exactly what kinds of new jobs there will be (or even if you're convinced that no new jobs will be coming), it's at the very least a useful thought exercise to start thinking through some possibilities to better reflect where things are going. Kasparov's essay is a good start.

from the small-solutions-for-larger-problems dept

Last year, 19-year-old UK student Josh Browder released a chatbot called "DoNotPay" that assisted drivers in challenging parking tickets. It was a small program with a huge upside. The bot's legal guidance -- in the form of yes/no questions -- resulted in more than $4 million in tickets being dismissed.

Chatbots are no replacement for lawyers, but almost no one seeks legal help when dealing with parking tickets. That's probably why law/traffic enforcement agencies feel comfortable issuing so many bogus ones. DoNotPay not only saved UK residents millions of dollars, it also proved the ticketing system was fundamentally broken. More than 64% of the 250,000 tickets challenged were overturned.

Browder was hoping to apply his chatbot AI to other legal issues -- narrowly-focused areas where legal help might be appreciated, but without the chance of severely screwing up someone's life if the chatbot led someone down the wrong path.

Browder’s next challenge for the AI lawyer is helping people with flight delay compensation, as well as helping the HIV positive understand their rights and acting as a guide for refugees navigating foreign legal systems.

It's the last one on the list receiving attention this year. Immigration law is an incredibly dense legal thicket where wrong moves can mean finding yourself stranded in a country that doesn't want you, or forcibly returned to the country you've been trying to leave. Browder's chatbot -- running through Facebook Messenger this time -- isn't going to put immigrants in awkward positions, though. Instead, it's much more in line with DoNotPay: something that provides helpful assistance to make an often-confusing experience easier to tackle, but without the potential downside of someone wishing they'd spoken to an actual lawyer instead.

The chatbot works by asking the user a series of questions, in order to determine which application the refugee needs to fill out and whether a refugee is eligible for asylum protection under international law.
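Mechanically, that kind of flow is just a fixed decision tree of yes/no questions. Here's a toy Python sketch of the idea; the questions and outcomes are invented and are not DoNotPay's actual logic:

    def ask(question):
        return input(question + " (y/n) ").strip().lower().startswith("y")

    def triage():
        # Walk a fixed tree of yes/no questions to pick the right paperwork.
        if not ask("Have you left your home country?"):
            return "Guidance on applying for protection before travelling."
        if ask("Do you fear persecution if you are returned home?"):
            if ask("Are you applying from inside the destination country?"):
                return "Route to the in-country asylum application form."
            return "Route to the at-the-border asylum process."
        return "Asylum may not apply; route to general immigration help."

    print(triage())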

If the program fails, nothing is made worse. The person seeking asylum is still stuck in the country they're trying to leave, but they're not sitting in a customs holding cell awaiting deportation. (At least, one hopes not. The best case scenario would be to apply for asylum before arriving, rather than after.)

Browder is also doing everything he can to protect users. The information used to autofill applications is stored only long enough to be transferred, and is deleted within 10 minutes of the app's use. The chatbot can also put users in touch with legal representation if requested.

This bot's success will be much more difficult to quantify, but it builds on Browder's past successes. Since the debut of DoNotPay, Browder's legal assistance bots have helped UK citizens obtain reimbursement for delayed planes and trains and helped homeless individuals seek emergency housing.

from the beep-boop dept

Questions about how we approach our new robotic friends once the artificial intelligence revolution really kicks off are not new, nor are calls for developing some sort of legal framework that will govern how humanity and robots ought to interact with one another. For the better part of this decade, in fact, some have advocated that robots and AI be granted certain rights along the lines of those humanity, or at least animals, enjoy. And, while some of its ideas haven't been stellar, such as a call for robots to be afforded copyright for anything they might create, the EU has been talking for some time about developing policy around the rights and obligations of artificial intelligence and its creators.

In a new report, members of the European Parliament have made it clear they think it’s essential that we establish comprehensive rules around artificial intelligence and robots in preparation for a “new industrial revolution.” According to the report, we are on the threshold of an era filled with sophisticated robots and intelligent machines “which is likely to leave no stratum of society untouched.” As a result, the need for legislation is greater than ever to ensure societal stability as well as the digital and physical safety of humans.

The report looks into the need to create a legal status just for robots which would see them dubbed “electronic persons.” Having their own legal status would mean robots would have their own legal rights and obligations, including taking responsibility for autonomous decisions or independent interactions.

It's quite easy to make offhand remarks about all of this being science fiction, but this isn't without sense. Something like the artificial intelligence humanity has imagined for a century is going to exist at some point, and with recent advances suggesting that point may come sooner rather than later, it only makes sense that we discuss how we're going to handle its implications. After all, technology like this is likely to impact our lives in significant and varied ways, from our jobs and employment, to our interactions with our electronic devices, not to mention warfare.

I think the most interesting philosophical and moral questions surround these MEPs' call to grant robots and AI the designation of "electronic persons." The call has largely focused on saddling robotic "life" with many of the obligations humanity endures, such as tax obligations and being under the jurisdiction of humanity's legal system. But personhood can't come only with obligations; it must come with rights, too. And there would be something strange in recognizing a robot's "personhood" while at the same time making use of its output or labor. The specter of slavery begins to rear its head at this point, brought on only by that very designation. Were they electronic "beasts," for instance, the question of slavery wouldn't arise outside of the fringe.

The MEPs' report does also deal with the potential danger from AI and robots in its call for designers to "respect human frailty" when developing and programming these machine-lives. And here the report truly does delve into science fiction, but only out of deference to great literature.

Things descend slightly into the realms of science fiction when the report discusses the possibility of the machines we build becoming more intelligent than us posing “a challenge to humanity's capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny.”

However, to stop us getting to this point the MEPs cite the importance of rules like those written by author Isaac Asimov for designers, producers, and operators of robots which state that: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”; “A robot must obey the orders given by human beings except where such orders would conflict with the first law” and “A robot must protect its own existence as long as such protection does not conflict with the first or second laws.”

While some might laugh this off, this too is sensible. There is simply no reason to refuse to have a discussion about how a life, or a simulacrum of life, that is created by humanity, might pose a danger to that humanity, either at the level of the individual or the community.

But what strikes me most about all of this is how the EU seems to be the ones out in front of this, while any discussion in the Americas has been either muted or occurring behind closed doors. If this is a public discussion worth having in the EU, it is certainly one too worth having here.

from the are-you-next? dept

Stories about robots and their impressive capabilities are starting to crop up fairly often these days. It's no secret that they will soon be capable of replacing humans for many manual jobs, as they already do in some manufacturing industries. But so far, artificial intelligence (AI) has been viewed as more of a blue-sky area -- fascinating and exciting, but still the realm of research rather than the real world. Although AI certainly raises important questions for the future, not least philosophical and ethical ones, its impact on job security has not been at the forefront of concerns. But a recent decision by a Japanese insurance company to replace several dozen of its employees with an AI system suggests maybe it should be:

Fukoku Mutual Life Insurance believes [its move] will increase productivity by 30% and see a return on its investment in less than two years. The firm said it would save about 140m yen (£1m) a year after the 200m yen (£1.4m) AI system is installed this month. Maintaining it will cost about 15m yen (£100k) a year.

The system is based on IBM's Watson Explorer, which, according to the tech firm, possesses "cognitive technology that can think like a human", enabling it to "analyse and interpret all of your data, including unstructured text, images, audio and video".

The technology will be able to read tens of thousands of medical certificates and factor in the length of hospital stays, medical histories and any surgical procedures before calculating payouts.
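Fukoku and IBM haven't published how the system works, but the task described -- pull structured facts out of certificates, then apply payout rules -- has a familiar shape. A toy sketch, with every field, rate, and rule invented:

    import re

    def parse_certificate(text):
        # Stand-in for Watson-style extraction from unstructured text.
        days = int(re.search(r"hospital stay:\s*(\d+)\s*days", text).group(1))
        surgery = "surgery: yes" in text
        return {"days": days, "surgery": surgery}

    def payout(facts):
        amount = facts["days"] * 10000  # hypothetical per-day benefit (yen)
        if facts["surgery"]:
            amount += 100000            # hypothetical surgery benefit
        return amount

    certificate = "hospital stay: 12 days\nsurgery: yes"
    print(payout(parse_certificate(certificate)))  # -> 220000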

It's noteworthy that IBM's Watson Explorer is being used by the insurance company in this way barely a year after the head of the Watson project stated flatly that his system wouldn't be replacing humans any time soon. That's a reflection of just how fast this sector is moving. Now would be a good time to check whether your job might be next.

from the not-so-easy dept

I saw a lot of excitement and happiness a week or so ago around reports that the EU's new General Data Protection Regulation (GDPR) might include a "right to an explanation" for algorithmic decisions. It's not clear if this is absolutely true, but it's based on a reading of the agreed-upon text of the GDPR, which is scheduled to go into effect in two years.

Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them.

Lots of people on Twitter seemed to be cheering this on. And, indeed, at first glance it sounds like a decent idea. As we've just discussed recently, there has been a growing awareness of the power and faith placed in algorithms to make important decisions, and sometimes those algorithms are dangerously biased in ways that can have real consequences. Given that, it seems like a good idea to have a right to find out the details of why an algorithm decided the way it did.

But it also could get rather tricky and problematic. One of the hallmarks of machine learning and artificial intelligence these days is that we no longer fully understand why algorithms decide things the way they do. While this applies to lots of different areas of AI and machine learning, you can see it in the way that AlphaGo beat Lee Sedol in Go earlier this year. It made decisions that seemed to make no sense at all, but worked out in the end. The more machine learning "learns," the less possible it is for people to directly understand why it's making those decisions. And while that may be scary to some, it's also how the technology advances.
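The interpretability gap is easy to demonstrate. In the scikit-learn sketch below (toy data only), a linear model's "explanation" is a handful of coefficients, while a random forest's prediction is an average over hundreds of trees with no comparably compact story:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

    linear = LinearRegression().fit(X, y)
    print("linear 'explanation':", linear.coef_)  # one weight per feature

    forest = RandomForestRegressor(n_estimators=300).fit(X, y)
    # Any "explanation" here has to summarize hundreds of trees:
    nodes = sum(tree.tree_.node_count for tree in forest.estimators_)
    print("forest size:", nodes, "decision nodes across 300 trees")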

So, yes, there are lots of concerns about algorithmic decision making -- especially when it can have a huge impact on people's lives -- but a strict "right to an explanation" seems like it may actually create limits on machine learning and AI in Europe, potentially hamstringing projects by requiring that they stay within the bounds of human understanding. The full paper on this does more or less admit this possibility, but suggests that it's okay in the long run, because the transparency aspect will be more important.

There is of course a tradeoff between the representational capacity of a model and its interpretability, ranging from linear models (which can only represent simple relationships but are easy to interpret) to nonparametric methods like support vector machines and Gaussian processes (which can represent a rich class of functions but are hard to interpret). Ensemble methods like random forests pose a particular challenge, as predictions result from an aggregation or averaging procedure. Neural networks, especially with the rise of deep learning, pose perhaps the biggest challenge—what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture?

In the end, though, the authors think these challenges can be overcome.

While the GDPR presents a number of problems for current applications in machine learning they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair.

I do think greater transparency is good, but I worry about rules that might hold back useful innovations. Prescribing exactly how machine learning and AI need to work too early in the process may be a problem as well. I don't think there are necessarily easy answers here -- in fact, this is definitely a thorny problem -- so it will be interesting to see how this plays out in practice once the GDPR goes into effect.

from the I'm-sorry-I-can't-do-that,-Dave dept

As self-driving cars have quickly shifted from the realm of science fiction to the real world, a common debate has surfaced: should your car be programmed to kill you if it means saving the lives of dozens of other people? Should your automated vehicle, for example, take your life when its onboard computers realize the alternative is the death of dozens of bus-riding school children? Of course the debate technically isn't new; researchers at places like the University of Alabama at Birmingham have been contemplating "the trolley problem" for some time:

"Imagine you are in charge of the switch on a trolley track. The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a bus-full of others, but not both. What do you do?"

It's not an easy question to answer, and obviously becomes more thorny once you begin pondering what regulations are needed to govern the interconnected smart cars and smart cities of tomorrow. Should regulations focus on a utilitarian model where the vehicle is programmed to prioritize the good of the overall public above the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self protective" model)? Would companies like Google, Volvo and others be more or less likely to support the former or the latter for liability reasons?
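Stripped to its core, the policy question is which objective the vehicle's planner minimizes. Here's a toy Python sketch of the distinction, with invented maneuvers and casualty estimates (real AV planners are nothing like this simple):

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        total_casualties: float     # expected harm to everyone
        occupant_casualties: float  # expected harm to the car's occupants

    def choose(maneuvers, policy):
        if policy == "utilitarian":
            # Minimize harm overall, even at the occupant's expense.
            return min(maneuvers, key=lambda m: m.total_casualties)
        # Self-protective: minimize harm to the car's own occupants.
        return min(maneuvers, key=lambda m: m.occupant_casualties)

    options = [
        Maneuver("swerve into barrier", total_casualties=1, occupant_casualties=1),
        Maneuver("stay on course", total_casualties=20, occupant_casualties=0),
    ]
    print(choose(options, "utilitarian").name)      # swerve into barrier
    print(choose(options, "self-protective").name)  # stay on course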

"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote...The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."

To further clarify, the surveys found that if both types of vehicles were on the market, most people surveyed would prefer that you drive the utilitarian vehicle while they continue driving self-protective models, suggesting the latter might sell better:

"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."

This social dilemma sits at the root of designing and programming ethical autonomous machines. And while companies like Google are also weighing these considerations, if utilitarian regulations mean lower profits and flat sales, it seems obvious which path the AV industry will prefer. That said, once you begin building smart cities where automation is embedded in every process from parking to routine delivery, would maximizing the safety of the greatest number of human lives take regulatory priority anyway? What would be the human cost of prioritizing one model over the other?