Posted
by
msmash
on Tuesday April 11, 2017 @10:20AM
from the problems-loom dept.

Last year, an experimental vehicle developed by researchers at the chip maker Nvidia was unlike anything demonstrated by Google, Tesla, or General Motors. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions, argues an article on MIT Technology Review. From the article: The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen -- or shouldn't happen -- unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur -- and it's inevitable they will. That's one reason Nvidia's car is still experimental.

It's really hard to predict what a deep learning system is in fact learning. It may often be useful on the training data, but that very much does not mean it will do the expected when faced with the unexpected, and not, for example, decide that it should go through an intersection because the person next to it is wearing a green hat that looks more like a green light than the red light looks like a red light.

Humans are not immune to this problem though. One big difference is that our visual system is trained on 3D images, which allow a lot more useful information to be extracted. With 2D images, we also have funny failures.

For instance, how long does it take you to see something funny with this wall?

One thing I see often overlooked in the discussion is that a car can have vastly better vision than a human. Its view is not obstructed by the increasingly thick pillars inside a car, and furthermore a car can see in 3D because it can have cameras placed at every corner.

If it does, a car has even better 3D vision than a human, because the camera spacing is so much wider, which leads to much more accurate depth perception.

This is ignoring the fact that a car can have real 3D vision without even relying on light, if it

I see a lot of things in that wall, from shapes in the brick (a smile) to what looks like a lizard head sticking out. I don't see anything non-obvious, or anything obviously unusual; further, I see nothing that would break in a 2D/3D transition.

That was a very interesting article showing real problems with current CNNs. But it doesn't appear that the problem it identifies is that monumental. It seems more likely these problems just aren't a high priority right now.

A multi-step CNN which identifies not just an end result (leopard) but also expected components (head, tail, eyes, etc.) could conceptually solve this problem. Suddenly, if the image looks like a large cat but has no head, tail, paws or eyes, then it rules out all classifications in whi
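For illustration, the parent's part-based veto idea could be sketched like this; the label, part names, and every score below are invented stand-ins for outputs of real detectors, not an actual CNN:

```python
# Hypothetical sketch: veto a classification when expected parts are missing.
# EXPECTED_PARTS and all scores are made-up placeholders for illustration.

EXPECTED_PARTS = {"leopard": {"head", "tail", "paws", "eyes"}}

def plausible(label, whole_image_score, part_scores, threshold=0.5):
    """Accept a label only if the whole-image classifier is confident AND
    every part a real instance should have was also detected."""
    if whole_image_score < threshold:
        return False
    required = EXPECTED_PARTS.get(label, set())
    return all(part_scores.get(p, 0.0) >= threshold for p in required)

# A leopard-print sofa might score high overall but have no detected head or tail:
print(plausible("leopard", 0.92, {"head": 0.1, "tail": 0.05}))  # False
print(plausible("leopard", 0.92,
                {"head": 0.8, "tail": 0.7, "paws": 0.6, "eyes": 0.9}))  # True
```

The design choice is simply conjunction: the end-result score is necessary but no longer sufficient on its own.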

It's a poor workman who blames his tools. Your example has nothing to do with AI and everything to do with someone who wrote control software (or maybe just hardware logic gates) without writing out a decision diagram. A system that can't tell the difference between inflow and outflow current and just keeps whacking the voltage is beyond stupid. Even by Texas criteria.

I just don't have any faith in a system that is not fully understood. Just like back in college, you would create some kludge code without a proper understanding of the underlying concepts, and sometimes it would work. However, this would never produce a robust system.

But intelligence and consciousness are not fully understood, and may not even be understandable. And I say that not to invoke some kind of mysticism, but because our decision making processes are lots of overlapping heuristics that are selected by yet other fuzzy heuristics. We have this expectation from sci-fi that a general purpose AI is going to be just like us except way faster and always right, but an awful lot of our intelligent behavior relies on making the best guess at the time with incomplete information. Rustling in bushes -> maybe a tiger -> run -> oh it was just a rabbit. Heuristics work until they don't.

It may be that an AI must be fallible, because to err is (to think like a) human. But forgiveness only extends to humans. When the human account representative at your bank mishears you, you politely repeat yourself. When the automated system mishears you, you curse all machines and demand to speak to a "real person." The real person may not be much better, but it doesn't make you as angry when they mishear you. With automobile pilots we tolerate faulty humans whose decision-making processes we absolutely don't understand, such that car crashes don't even make the news, but every fender bender involving an AI pilot will "raise deep questions about the suitability of robots to drive cars."

You will be hard-pressed to make a case that human intelligence is anything but a catastrophic failure and/or malfunctioning system by any rational standard. Insofar as applying this to driving - it is very easy to demonstrate that it is fault-prone, suboptimal even when functional, and full of glitches. If anything, such comparison supports my point.

Do you fully understand how biological intelligence works? No? Then by your own logic, you don't have faith in your own intellect. And the line of reasoning your brain just conjectured is not produced by "a robust system" and thus cannot be trusted.

This is the big mismatch I've noticed between how scientists and engineers think. Scientists refuse to believe something works unless they can understand it. Engineers just accept (take it on faith if you will) that there are things out there which work e

That's only a problem up to a certain point; when (if ever) the self learning algo has learned enough and has logged a couple billion safe kilometers with a much better track record than the average human, then no one will care that they (or real scientists) do not understand exactly how the thing makes its decisions.

So we are making progress. Reverse engineering the human brain has proven extremely difficult. An intelligent program so complex that it's almost impossible to explain or understand is, in my view, the correct path, just as the human mind is too complex to fully understand or explain. And even better if it's fuzzy intelligence: you have no certainty it's going to make consistently good choices, just like any human.

That won't work. You can't talk to it to be sure it actually understands what it's doing and why. You can't talk to it and be sure it understands the value of human life, and why ramming itself into a telephone pole is a better choice than ramming itself into a crowd of pedestrians. You can't spend six months driving with it and talking with it while it's only got a learner's permit, getting a sense of whether or not it's actually going to be a competent, reliable, and trustworthy driver. It's just a m

With a machine you can do so much more than that. Not only can you ask why it made a decision, you can replay the same conditions, and check detailed logs to figure out exactly where the problem is, fix the problem, and send the fix to all other cars. And instead of driving 6 months on a learner's permit, you can test drive 10000 cars at the same time, for 24 hours per day if you want to.

Yet so many of you are willing to put your life in its hands. Personally, I think you're all insane.

If it can be demonstrated that the machine makes fewer mistakes than human drivers, it makes perfect sense to trust it.

[...] had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat.

When my mother was a teenager, on her first attempt to learn how to drive she managed to plow her daddy's Caddy into a telephone pole. She never learned how to drive after that. If we're going to teach AIs to drive, my mother wouldn't be a good example to follow.

But the local net at the High Lab had transcended—almost without the humans realizing. The processes that circulated through its nodes were complex, beyond anything that could live on the computers the humans had brought. Those feeble devices were now simply front ends to the devices the recipes suggested. The processes had the potential for self-awareness and occasionally the need.

Exactly this. Generally speaking, software developers no longer understand what they write. Whether it's a simple program to pop up a dialog window or a self-driving car, 99% of the time the developer has no idea how things are really working. They know how they set the initial parameters, and maybe they can speak at a high level about the stuff under the hood, but really they have no more understanding of what they are doing than a typical driver has of how the car moves when they press the gas pedal.

The article is too negative. If you listen to the AlphaGo programmers, they have logs explaining why certain moves were made or not made at each step. They look through the logs and try to understand. The real problem isn't "we don't understand," it's that the logs have mountains and mountains of data. Figuring out why one move was chosen over another when the computer performed a billion operations is hard. That's a lot of logs to look through, a lot of connections to consider.

You seem to not slow down and actually read things, so here, let me help you:
We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery. Anyone who tells you different is either lying to you, or is a fool who believes the hype.

We have no idea yet how human sentience works, therefore it is impossible to emulate it with machinery

You are merely repeating the same bullshit, without adding any argument.

What if I study the brain, and make a complete functional copy of all the little details, without understanding what it actually does on a higher level. The copy behaves exactly the same. Mission accomplished.

Or, I make a genetic programming environment, and let algorithms evolve until they've reached sentience. Just like humans evolved. Mission accomplished.

What if I study the brain, and make a complete functional copy of all the little details, without understanding what it actually does on a higher level. The copy behaves exactly the same. Mission accomplished.

You CAN'T. THEY can't. If they could they'd do that already. No one has ANY IDEA HOW THE HUMAN BRAIN ACTUALLY WORKS AND NEITHER DO YOU.

I think that'd depend on the fidelity. Does the guy making a prosthetic leg know how muscles work on a biochemical level, or does he just have to get things close enough? An AI doesn't have to appreciate the Muppets on as deep a level as you to drive the car around just as well.

I've tried to learn some AI techniques, but I run into the following issues:
1. I never took linear algebra in school.
2. I never took advanced statistics in school
3. Everything I have read on the topic of AI requires a fluent knowledge of 1 and 2. I know basic statistics, and I can do differential equations (with some difficulty). However, you have to think entirely in terms of linear algebra and advanced statistics to have a basic understanding of what's going on. Very few people are taught those subjects.

I am specifically interested in anomaly detection. I've seen some companies successfully implement AI as a new technique to predict when complex mechanical systems will fail. I think this may turn the field of mechanical engineering on its head.

"Anomaly Detection" is still fairly vague, and a large number of techniques could be used, depending on the details. In the worst case, statistics is just a semester-long class in college, and so is linear algebra. If you apply yourself, then within four months you could be quite good at both of those topics.
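As a concrete starting point that needs neither linear algebra nor a neural net, here is a minimal statistical anomaly detector; the sensor readings and the 3-sigma threshold are arbitrary illustrations, not a recommendation for any particular system:

```python
# Minimal statistical baseline for anomaly detection: flag any reading more
# than k standard deviations from the mean of normal training data.
# The readings below are invented for illustration.
import statistics

def fit(train):
    """Estimate the normal operating band from healthy readings."""
    return statistics.mean(train), statistics.stdev(train)

def is_anomaly(x, mean, sd, k=3.0):
    """True if x falls outside mean +/- k standard deviations."""
    return abs(x - mean) > k * sd

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]   # healthy sensor history
mean, sd = fit(readings)
print(is_anomaly(10.1, mean, sd))  # inside the band
print(is_anomaly(25.0, mean, sd))  # far outside the band
```

Real predictive-maintenance systems are far more sophisticated, but this is the kind of baseline worth beating before reaching for deep learning.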

The statistical and neural network approaches to AI use crushing amounts of computation. Other approaches use less, but don't scale as well to more complicated problems.

Whatever your approach you will need a very good computer, but with the statistical or neural net approach you will be restricted to toy problems unless you invest heavily in a fancy multiprocessing computer system. Possibly several of them. And that gets expensive.

If you want to learn AI, read the literature, build the examples, and then

Perhaps the biggest problem with understanding neural networks is that we don't have a way to describe their behavior. Since they work in such an asynchronous and sometimes nonlinear fashion, I think we need to develop the algorithms needed to turn plain code (e.g. C) into neural networks. With these algorithms, we can then begin to decode the neural networks that we have created through training and thus be able to predict their behaviors. It will also allow us to perfect and optimize networks so that f

Humans make these decisions now and you can't provide the complete logical flow that produces them. Additionally, programs that we know all the steps for contain flaws. Before someone chimes in that software can be proven to be bug-free mathematically: this is a false sense of security, because software can only be proven to be free of the bugs you knew to check for. I remember an MIT professor drawing a pie chart once; they drew a tiny line and indicated "this is what we know". Then a somewhat thicker swath next

Today's well designed neural networks and other machine learning systems can certainly be fully understood and debugged.

What ARE you talking about? Sure, the underlying neural network architecture can be understood and perhaps even debugged (depending on what exactly you mean by "debugged"). But AI learning systems frequently go through many, many generations of creating their own adaptive solutions to problems, which often only exist as huge collections of numbers that are basically empirically derived weightings from the interactions with the dataset.

>There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen -- or shouldn't happen -- unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.

While I care about understanding the system so it can be improved (hopefully before a problem occurs), ultimately all that matters is that it produces statistically better results than a human.

If a machine kills someone (and we don't even know why) 1% of the time, but a human doing the same job would mess up and kill 3% of people (but we'd understand why)... I'll take ignorance.

If a machine kills someone (and we don't even know why) 1% of the time, but a human doing the same job would mess up and kill 3% of people (but we'd understand why)... I'll take ignorance.

A couple problems with this argument:

(1) Is the 1% part of the 3% that would likely have been killed by the human, or is the 1% a novel subset? If you yourself were part of that 1% that is now more likely to be killed, you might care about this choice.

(2) Unpredictable failures often mean that you can't ever get good stats like you have there until you actually deploy a system. Which means you're basically taking a leap of faith that the system will only kill 1% and not 5% or 20% when put into practic

It may be relatively complex, but neural networks aren't all THAT complex. Usually there are a few hundred nodes; facial recognition can be done with a few dozen or so (fewer if you only want to recognize one feature). The nice thing about "AI" is that you can halt the program and inspect its state, then step through the program. Sure it's difficult, and at first glance you may not be able to infer input from output, but it's not impossible.

The problem with true "intelligence", besides the lack of definition, is

Absolutely correct, single-task algorithms are NOT AI.
The ability to apply what you've learned from one task to come up with a novel solution to an unrelated task is intelligence, the "I" part of AI. That is decades away. It doesn't mean computers aren't really good at single tasks; they're just that: single tasks.
Secondly, something bad eventually will happen, but something bad ALWAYS happens when people do it. There are always accidents; there are always doctors making bad

Funnily enough, I submitted another story about how vulnerable these algorithms are to attacks if you have access to the code: squiggly lines the computer interprets as a gun, a sticker on a stop sign that makes the algorithm ignore it.

It's been the case for years -- the first time I saw one posted here I thought it was a trash site co-opting the MIT name.

I thought it was more like the Stanford School of Business that graduates students who are more interested in writing the next billion dollar app than changing the world of business. Having a Stanford MBA is a good reason for hiring managers to pass over a resume.

How would this change with Clinton as POTUS? America would still be "dumbing down" because MIT Technology Review would still be technologically stupid, using buzzwords. Thanks for injecting politics where it has no place.

Good gravy, get over it already. Politics does not have to be a part of every conversation. It gets old in topics that have nothing to do with politics. This is coming from someone who loves political conversations (yes, I am a masochist, leave me alone).

I work for a small contracting agency which works for a larger contracting agency that has a government contract. Hence, I'm in government IT as a contractor. This is as specific as I can be about my current job. Otherwise, I might get contacted by whistleblowers (which did happen), news media or right-wing political extremists.

Then you are aware that current intelligence capabilities mean the unelected individuals in the intelligence community have more blackmail material on any given politician than Hoover could ever have dreamed of, which means they are in charge, not the politicians.

Based on this statement I'm guessing you've never worked with statistically based machine learning. Take a "simple" artificial neural network trained to do classification. The person who wrote the algorithm knows how samples from the training set are presented to the network, i.e. what features hit the first layer. The author also knows how data propagates through the network (i.e. a value is propagated to the next layer along the edges connected to a previous layer's node) and even how the weightings on the different edges connecting the nodes are updated based on classification failures.

Once that network is trained it may spit out correct answers time and time again, but the author who knows the algorithms inside and out doesn't know exactly how the network decides that it's looking at a lunar crater and not a volcano. Not knowing those details means that it is incredibly hard to define how the trained AI will fail when faced with an unexpected input.
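To make the point concrete, here is a toy network (nothing like Nvidia's system; just a minimal NumPy sketch with arbitrarily chosen size, seed, and learning rate) trained on XOR. Everything the trained network "knows" lives in its weight matrices, which are just grids of floats:

```python
# Toy 2-3-1 sigmoid network trained on XOR by plain gradient descent.
# The interesting part is what training leaves behind: raw weight matrices
# that encode the learned behavior without explaining it.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels

W1 = rng.normal(size=(2, 3)); b1 = np.zeros(3)    # input -> hidden
W2 = rng.normal(size=(3, 1)); b2 = np.zeros(1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backprop: output layer
    d_h = (d_out @ W2.T) * h * (1 - h)        # backprop: hidden layer
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(out.ravel())   # the four XOR predictions after training
print(W1)            # the network's "reasoning": a grid of floats
```

Even at this scale, nothing in `W1` or `W2` announces "this is XOR"; scale that opacity up to millions of weights and the parent's crater-vs-volcano point follows.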

There's the problem: if you have a trained AI and not some sort of expert system based on a collection of human knowledge it's nearly impossible to say how it will handle the unexpected near-garbage input.

It is difficult to predict how a person will react, too. Because, well, we don't exactly know how we work either. The solution has always been simulation and training. Plenty of instruction for plane pilots, but -- tragically -- hardly any for cars. IMHO even the pseudo-AIs we have now will do better in most situations than the majority of poorly-trained, distracted, intoxicated, hung-over people currently at the wheel. Nearly 30K dead every year. I want you all in robo cars now. But I'll keep my Land Cruiser

I completely agree that simulation and training are the solution and that the bar to beat humans at driving is pretty low. That doesn't make it any less of a nasty task to figure out WTF the neural net is actually basing decisions on or make it any more understandable to the programmer who wrote it. I'd gladly give up my vehicle for a well tested self driving car. I'd still like the option to drive sometimes, but the normal day-to-day is just a dangerous waste of time.

Uh, it's simple. Freeze it (disable the feedback loop that lets it modify itself) and test it on a bunch of new data, a bunch of garbage data, etc., and watch it. If you want to methodically define its behavior you just need to look at the damn thing. Getting any useful info out of that will be an issue though. You may find out that somewhere deep in your neural net it's looking for a seemingly random pattern of contrast or checking against some strange distance/angle. Without tracing its entire training history you won't know why. But you can see that it's checking for that shit, and then test it by giving it data that varies a lot on the things it checks, and try to suss out what impact that has in real-world use. No, it's not easy. But it's absolutely knowable and testable.
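The probing step described above might look something like this in miniature; the "frozen model" here is a made-up stand-in with known behavior, used only to show the sweep-one-input-and-watch technique:

```python
# Black-box probing of a frozen model: vary one input feature at a time and
# measure how much the output moves. A large spread means the model actually
# reacts to that feature; a near-zero spread means it ignores it.
import numpy as np

def frozen_model(x):
    # Stand-in for a frozen net: reacts to feature 0, ignores feature 1.
    return 1.0 / (1.0 + np.exp(-(3.0 * x[0] + 0.01 * x[1])))

def sensitivity(model, baseline, feature, values):
    """Sweep one feature over a range of values; report the output spread."""
    outs = []
    for v in values:
        x = baseline.copy()
        x[feature] = v
        outs.append(model(x))
    return max(outs) - min(outs)

base = np.zeros(2)
sweep = np.linspace(-2, 2, 21)
print(sensitivity(frozen_model, base, 0, sweep))  # large: the model checks this
print(sensitivity(frozen_model, base, 1, sweep))  # tiny: the model ignores this
```

Real interpretability probes are fancier (occlusion maps, saliency, and so on), but they share this shape: perturb inputs, watch outputs, infer what the frozen net is keying on.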

I agree that it's completely doable, but the poster I replied to was stating that the programmer who wrote the algorithm must understand how it's making decisions and that only the less skilled maintenance coders would be confused. That's simply not true. I know people who could write a neural net from a reasonable spec but doing the steps you described above would blow their minds. I'd also argue that a NN with even a few layers of nodes can get complex fast enough that what you're proposing would result in a document the size of a novel and still not capture all the nuances.

I really appreciate your point that

Getting any useful info out of that will be an issue though. You may find out that somewhere deep in your neural net it's looking for a seemingly random pattern of contrast or checking against some strange distance/angle.

If the net is using some seemingly random pattern that's where you can get some bizarre (to human thinking) failures. We tend to understand when something goes wrong in a way we can comprehend. If the seemingly random pattern the computer finds happens to call a slightly obscured "stop sign" a "no u-turn" sign that would be incomprehensible to a human, but might make perfect sense to the NN.

This all isn't to say that you can't reduce the odds of this sort of problem to a number so small that it's meaningless, especially in comparison to human error. Still, when crap like this happens it makes the news and gets blown all out of proportion, so expect "the sky is falling" stories to follow any uncertainty in AI behavior.

No, there is an actual disconnect. The algorithms set up a pathway for a learning system but the algorithm does not define the discrete path of logic which determines what decisions and choices the AI will make. The design decisions have more to do with providing the right balances in scoring good results, statistical patterns of AI design that provide better results based on the complexity of the type of decisions being made, that sort of thing. Some pieces are more like figuring out the ideal depths of pi

The algorithm that exists is this: given a set of input data and a set of output data, we ask the computer to create a function that maps input to output, according to how we label the data (input-1 goes to output-42, etc). What this algorithm produces is a function that performs this kind of mapping on the sample data, within some acceptable error. Then we feed it data it has not seen and look at the output.

The function it produces would in general not be comprehensible to a human.
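In miniature, that mapping-from-examples setup could look like this, with an ordinary least-squares polynomial standing in for the learned function (the data and the degree are invented for illustration):

```python
# Learning a function from labeled examples, in miniature: fit a polynomial
# to (input, output) pairs, then evaluate it on data it has never seen.
# Its entire "knowledge" is a coefficient vector.
import numpy as np

x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = x_train ** 2          # the hidden rule the labels happen to follow

coeffs = np.polyfit(x_train, y_train, deg=2)   # "training"
learned = np.poly1d(coeffs)

print(learned(5.0))   # unseen input; near 25 because the rule generalizes here
print(coeffs)         # everything the model "knows", as raw numbers
```

Here the coefficients happen to be readable because the model has three numbers; a deep net does the same job with millions, which is where the comprehensibility goes.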

In "Two Faces of Tomorrow" by James P. Hogan (republished in "Cyber Rogues" [amzn.to]), the AI worked and everyone was happy. In fact, the AI worked too well. The AI started taking shortcuts that was efficient from its point-of-view but endangered human lives. If the AI became aware, could the plug still be pulled? The latest AI tech got installed on a space station habitat and humans went to war to push the AI to the limit. Of course, that's science fiction. But it might help to understand what's going on with an AI

The car mimics a car driven by a human. But the data set that controls it is not understood by the people who produced that data set through training.

Perhaps they want money to teach it objectives? Or prioritizations? I mean, if you just ran a dataset to teach it to drive (presumably footage of people driving, plus logs of the input values while they are doing so, so that, for example, in a curve it would try to keep the car on the road by turning right/left), you would still need to teach

You entirely missed the point. These systems are essentially programs written by machines (that's the learning process); they are not written to be understandable by people. With your debugger you will see that a variable x1267321467321587 is sometimes set to 1.0123 and other times 34243.11111. You will have no idea what that means.