Computational Complexity and other fun stuff in math and computer science from Lance Fortnow and Bill Gasarch

Wednesday, December 13, 2017

Our AI future: The Good and the Ugly

I don’t directly work in machine learning, but one cannot deny the progress it has made and the effect it has on society. Who would have thought even a few years ago that ML would have essentially solved face and voice recognition and would translate nearly as well as humans?

AlphaZero, an offshoot of Google’s Go programs, learned chess given only the rules in just four hours (on 5000 tensor processing units) and easily beats the best human-designed chess programs. Check out this match against Stockfish.

It's part of a trend: machine learning often works better when humans just get out of the way.

The advances in machine learning and automation have a dark side. Earlier this week I attended the CRA Summit on Technology and Jobs, one of a series of meetings organized by Moshe Vardi on how AI and other computing technology will affect the future job market. When we talk about ethics in computer science we usually talk about freedom of information, privacy and fairness but this may be the biggest challenge of them all.

The changes have hit hardest for white, middle-class, less-educated males. While this group usually doesn’t get much attention from academics, they have been hit hard, often taking less rewarding jobs or dropping out of the job market entirely. We're seeing many young people living with their parents, spending their days playing video games, and we're seeing a spike in suicides and drug use. Drug overdose is now the leading cause of death of men under 50.

There are no easy solutions. Universal basic income won’t fill the psychological need a job serves in making one part of something bigger than oneself. In the end we'll need to rethink the educate-work-retire cycle towards more life-long learning, and find rewarding jobs that automation cannot displace. This all starts by having a government that recognizes these real challenges.

17 comments:

Where do you get that drug overdose claim? I couldn’t find any substantiation for that. Looks like unintentional injuries for under 40 and then cancer and heart disease 40-50. Suicide is in second place for some of the age ranges, but that’s as close as I could find. Would appreciate a link, thanks.

There are economic reasons for the decline of manufacturing jobs in the U.S. A good way to understand what is going on (short of reading a textbook) is to read Dean Baker's blog: http://cepr.net/blogs/beat-the-press/

The good part about defeating the best human-designed game engines with a reinforcement learning algorithm might seem less of a good thing if this implies that human interest in board games precipitously declines. Could it be that the world championships sponsored by FIDE and such would now no longer attract the attention and analysis that they used to? There would be a tipping point at which the best human players would just be seen as making stupid (i.e., less-than-optimal) moves. Perhaps soon to come in other fields of human intellectual endeavor - programmers shown to be writing less-than-optimal algorithms, theorists shown to be proving less-than-optimal theorems, doctors shown to be prescribing less-than-optimal treatments, politicians shown to be legislating less-than-optimal laws and so on :-).

I think the advantage of AI and ML is biggest for hard-to-formalize, "messy" problems that come with a massive amount of data, such as face or voice recognition. On the other hand, I'm not so sure that AI could beat humans at solving simpler tasks intelligently and elegantly. For example, while AlphaZero may beat the best human chess player, it is less clear that it could also beat, say, Dijkstra at developing a better, more elegant shortest path algorithm. Or, AI may learn how to find a maximum flow in a network from a large amount of training data, but it is harder to imagine it inventing something as elegant as the Ford-Fulkerson algorithm. I guess these tricky humans know something that is still elusive for machines...
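For reference, the shortest-path algorithm the comment alludes to is compact enough to sketch in full. A minimal Python version of Dijkstra's algorithm follows; the graph representation and the small example are my own illustration, not from the post:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a graph with
    non-negative edge weights. `graph` maps node -> {neighbor: weight}."""
    dist = {source: 0}
    heap = [(0, source)]          # (distance, node) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue              # stale heap entry, already improved
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd      # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist

graph = {'a': {'b': 1, 'c': 4}, 'b': {'c': 2}, 'c': {}}
# dijkstra(graph, 'a') -> {'a': 0, 'b': 1, 'c': 3}  (a->b->c beats a->c)
```

The elegance the commenter points at is visible here: a single greedy invariant (pop the closest unsettled node) provably yields all shortest paths, with no training data in sight.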

As I see it (perhaps mistakenly), the low-hanging algorithmic fruits (à la Dijkstra) have mostly been harvested. What remains to extract are the highly intricate bits buried in vast amounts of data, beyond human capabilities. Even if it turns out that deep-learnt models are in some sense over-fitting (not sure that's true), these formulations are still far better equipped than humans to extract intricate bits of "knowledge".

Essentially, the means-to-an-end striving of human thinking has been turned topsy-turvy into whatever it takes to reach an end. We will not be asking (and perhaps should not be asking) a deep-learnt model to explain itself, or whether its solution is, in some human sense, elegant. I suppose this distinction does not matter if all we need are things that work very reliably (though not always with algorithmic guarantees). Elegance also seems a naturalistic feature that could some day be incorporated into deep-learnt models by searching for minimal-parameter models.

The Max-Flow Min-Cut Theorem is a beautiful by-product of the Ford-Fulkerson algorithm. Furthermore, it also serves as a guarantee and explanation of optimality. It does not appear very likely to me that an ML system which learns from a massive amount of data how to find a maximum flow would also propose the Max-Flow Min-Cut Theorem. After all, the latter was not even requested, only the maximum flow, without "explanation." Due to this mechanical focusing on the original question only, ML can miss certain insights that humans still find very useful and attractive. With more complex algorithmic problems this gap may repeat itself, just perhaps in less obvious ways.
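To make the point concrete, here is a toy Edmonds-Karp sketch (the BFS variant of Ford-Fulkerson) that returns both the max-flow value and the source side of a min cut certifying it; the tiny example network and all names are my own illustration, not from the comment:

```python
from collections import deque

def max_flow(cap, s, t):
    """`cap` maps (u, v) -> capacity. Returns (flow value, source side
    of a min cut): when no augmenting path remains, the nodes reachable
    from s in the residual graph form a cut whose capacity equals the flow."""
    res = dict(cap)                       # residual capacities
    for (u, v) in cap:
        res.setdefault((v, u), 0)         # reverse edges for "undo"
    nodes = {u for u, _ in res} | {v for _, v in res}
    flow = 0
    while True:
        parent = {s: None}                # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and res.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:               # no path: flow is maximum
            return flow, set(parent)      # reachable set = min cut (s-side)
        path, v = [], t                   # walk back from t to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[(u, v)] for u, v in path)   # bottleneck capacity
        for u, v in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug

cap = {('s','a'): 4, ('s','b'): 2, ('a','b'): 1, ('a','t'): 2, ('b','t'): 3}
value, cut = max_flow(cap, 's', 't')
# value == 5; cut == {'s', 'a'}, and edges leaving the cut
# (s->b: 2, a->t: 2, a->b: 1) sum to exactly 5
```

Note the commenter's point in miniature: the min-cut certificate falls out of the algorithm's termination condition for free, but a learned black box trained only to output flow values would have no reason to surface it.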

"Due to this mechanical focusing on the original question only, ML can miss certain insights that humans still find very useful and attractive."

Exactly, but humans too are missing some insights; academics are rewarded for being clever, i.e. producing "smart" results within the currently accepted framework, not for digging into major new perspectives, which is always a risky business and doesn't help much in the pursuit of a career, like, for instance, Klaus Grue's Map Theory. I am not implying that Klaus Grue's theory is of any value whatsoever, but that such research is far too rare relative to trendy topics like P=NP, which IMHO is a total waste of time. (If you end up with a "complexity zoo", are you sure that you didn't screw up something along the way?)

Vivienne Ming's talk at the CRA summit was perhaps the one that most grabbed my attention. She is certain that ML will displace a vast number of high-paying jobs among the professional class. As an expert who is using the technology, she is not hamstrung by the bag of tools that most economists are able to bring to bear on the problem. Certainly Andrew McAfee's talk also pointed this way, a point made more explicitly in his book with Erik Brynjolfsson. But Vivienne Ming wondered out loud, in an admittedly rambling talk, what the US will become when a vast number of professional positions, the ones your mother hoped you'd get, are eliminated. I wouldn't bet against it.