But the machine also made some mistakes. A 36-year-old man who suffered bilateral brainstem damage after a stroke was given low scores by both doctors and AI. He recovered fully in less than a year.

Not a true AI if it makes human errors; sounds more like an expert system. People love to slap "AI" on everything these days. "Uh, the computer does it by itself"? Yeah, according to the code it follows, which is human-made... Pretty sure if they actually let this system machine-learn things on its own, it would go: beep-boop, place brain in vat, next!

What this probably is, is a neural network that looks at MRI images and was trained on MRI images where the eventual outcome was already known. You train it on, say, 10k MRI images, have it guess the outcome, then backpropagate through the network, adjusting the connection strengths between the neurons to reduce its error, over and over. Do this enough times and it becomes a very good predictor for whatever you trained it on. But it's never error-free. No real-world information system ever can be.
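Roughly, the train-and-backpropagate loop described above looks like this. This is a toy sketch with made-up synthetic data standing in for "scans" (not the actual study's model): a tiny one-hidden-layer network whose connection strengths get nudged each step to shrink its prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for training data: 100 "scans" of 5 features each,
# with outcome 1 when the features sum above zero.
X = rng.normal(size=(100, 5))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Randomly initialized connection strengths (weights) for one hidden layer.
W1 = rng.normal(scale=0.5, size=(5, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass: the network guesses the outcome.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)
    # Backward pass: nudge every weight to reduce the error of the guess.
    grad_logits = (pred - y) / len(X)          # cross-entropy gradient at output
    grad_h = (grad_logits @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_logits
    W1 -= lr * X.T @ grad_h

# After training, "lock" the weights and measure how good the predictor got.
h = sigmoid(X @ W1)
pred = sigmoid(h @ W2)
accuracy = float(((pred > 0.5) == (y > 0.5)).mean())
```

After enough passes the weights settle and the net predicts its training pattern well, which is exactly the "very good predictor machine" effect, and also why it's only ever approximately right.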

Would upvote this more if I could. Great explanation; I didn't think of MRI photos or anything like that, I was thinking more of medical textbooks fed into an advanced, complex if...then... system.
Explains why we don't know the reason the AI "thought" that way (in both the good and the bad cases).

Thanks, I worked on this type of research for several years so I'm very familiar with how it works.

You're exactly right: the crazy thing with neural nets is that we cannot know why they do anything. They can't be debugged or solved. They're just a black box that tweaks itself through feedback until you get it where you want it. Once training is done, you lock down the connection strengths between the neurons, and you're left with a black box that spits out accurate predictions.

That's also how self-driving cars work. So many computer functions are now becoming these trained neural nets. It's awesome but also weird, because sometimes the neural net can get into a state where it does something unexpected. Most of the self-driving car crashes are caused by this.

And I expect we'll probably begin to see something similar in the medical field as machine learning becomes used more to make predictions about patient outcomes. The overall accuracy will improve, but there will be these fluke predictions that no one will really understand...flukes that may not be detectable until it's too late.

Definitely a double-edged sword. We're creating technology that is "evolved" through feedback like a living being, rather than programmed. It's a wild time to be alive.

From unreadable undocumented/uncommented code spaghetti nightmares to "This works, but might someday destroy the earth if some unknown and very specific parameters are reached."

I never liked neural networks because of the maths (not a fan) and the limited reasoning you get about the outcome. Though I could see a future where the "reasoning" is output in a human-comprehensible manner: not for the black box itself (that might be a whole library), but for the individual decisions. This would probably mean nodes (or clusters of them) get labeled by means of the examples the net is learning on. That way you could ask questions like "Why not this?" and it could go: "I dismissed that diagnosis because of [detail], and in 68% of cases that detail had nothing to do with the affected area." So people could find out what went wrong, if something went wrong.
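One crude way to approximate that kind of "why" question today, without labeled nodes, is occlusion: knock out each input in turn and see how much the frozen black box's answer moves. A toy sketch (the weights and inputs here are made up for illustration):

```python
import numpy as np

# A frozen "black box": weights are fixed, and we can only query input -> output.
W = np.array([0.1, 2.5, -0.2, 0.05])           # hypothetical trained weights

def black_box(x):
    return 1.0 / (1.0 + np.exp(-(x @ W)))       # frozen model, no introspection

def explain(x):
    """Occlusion: zero out each input in turn and measure how the answer moves."""
    base = black_box(x)
    impact = []
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0
        impact.append(abs(base - black_box(occluded)))
    return int(np.argmax(impact))               # index of the most influential input

x = np.array([1.0, 1.0, 1.0, 1.0])
most_important = explain(x)                     # feature 1 dominates (weight 2.5)
```

It's nowhere near "I dismissed that diagnosis because of [detail]", but it does turn a black box into something you can at least interrogate feature by feature.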

I can't find the study, but as an example: there was a system trained to spot penguins in photos of their natural habitat. Once it was done and working very well on images outside its training examples, it was shown random pictures. It did well, with a few misses, until they got to lion pictures, where it detected penguins almost every time. I was waiting for the explanation, but they didn't have one, just some hypotheses that it might be the mountains in the background or the lions' noses...

Speaking as a programmer, neural networks will never be suitable for most purposes. For example, it is impossible to make a neural network that gives perfectly accurate results for simple integer addition or subtraction; it can only calculate close approximations. For certain fields, though, such as image recognition, neural networks are the only way to do it well.
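You can see the approximation point directly by training the simplest possible "net" (one linear neuron) to add two numbers. It converges to weights very close to 1 and 1, so its answers land near the true sum but are not exact integer arithmetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Train a single linear neuron y = w1*a + w2*b on the addition task.
X = rng.uniform(0, 10, size=(200, 2))
y = X.sum(axis=1)

w = rng.normal(size=2)
lr = 0.001
for _ in range(5000):
    pred = X @ w
    grad = X.T @ (pred - y) / len(X)   # mean-squared-error gradient
    w -= lr * grad

# The learned weights approach (1, 1), so 3 + 4 comes out close to 7,
# but only as a floating-point approximation, never as exact arithmetic.
approx = np.array([3.0, 4.0]) @ w
```

A hand-written `a + b` is exact by construction; the trained version only ever gets arbitrarily close.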

Edit: "This works, but might someday destroy the earth if some unknown and very specific parameters are reached." won't ever be a thing, because we can use maths to determine whether certain outputs are even possible. Once frozen, neural networks merely take inputs and produce corresponding outputs.

That's a very interesting take. I have recently been saying something related about the direction the AI field is going to take. Right now one of the biggest problems keeping it from reaching human-level intelligence is the lack of compartmentalization. It's always trying to solve everything simultaneously. Neural nets simply lack the ability to focus on perfecting one specific sub-task, the way you might master throwing a ball before attempting to play baseball. The computer just tries to learn baseball, and walking, and throwing, all at once, as if it's one big problem.

Any smart human would see "I do not understand how this works, so let's break the task into smaller pieces, work on those skills individually, then come back to the full task." Neural networks cannot do this, and I think it's the key thing holding AI back right now.

So my proposed solution is to work on developing compartmentalized tasks. If you can solve the "throw a ball" algorithm, you can lock it in place and then access that locked-in neural net whenever you need to throw a ball while playing baseball. So the computer must develop 3 skills:

1. Recognizing when it doesn't know something

2. Breaking that task into smaller sub-tasks

3. Practicing each sub-task until it can assemble them together to complete the larger task

If neural nets could learn to do these 3 things, I think AI tech would jump forward 20 years. This is very similar to what you're saying, in a way. By breaking each task into its own compartmentalized neural net, then having a meta-neural net that controls how those smaller ones connect to each other, the intelligence would not only perform better, but, like you said, it would become possible to "query" parts of it. You could ask "Why did you do this?" and actually investigate the question, because sub-questions about individual choices could be answered, instead of the whole thing being essentially one giant thousand-part equation that we cannot really predict or understand.

The tough question is how exactly to do this... I think you could do 1 and 2 by hand at the start. Skill 3 is where the next advancement should be. Making a neural net of neural nets, perhaps...
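Doing skills 1 and 2 by hand at the start might look something like this toy sketch: two frozen "skill" modules (plain functions standing in for separately trained, locked-down sub-nets) and a hand-written meta-controller that decides which one to run. Everything here, the skill names, the gating rule, the state fields, is made up for illustration:

```python
# Frozen skill modules: stand-ins for separately trained, locked neural nets.
def skill_throw(state):
    return {"action": "throw", "power": state["distance"] * 0.8}

def skill_run(state):
    return {"action": "run", "speed": min(state["distance"], 5.0)}

SKILLS = {"throw": skill_throw, "run": skill_run}

def meta_controller(state):
    """Hypothetical hand-written gating rule. In the proposal above, this
    would eventually itself be a trained net choosing among frozen sub-nets."""
    return "throw" if state.get("holding_ball") else "run"

def act(state):
    chosen = meta_controller(state)
    return chosen, SKILLS[chosen](state)

name, result = act({"holding_ball": True, "distance": 10.0})
# Unlike one monolithic net, the choice is inspectable: we can ask exactly
# which skill fired and under what condition.
```

The payoff is the queryability discussed above: "Why did you throw?" has a concrete answer (the gating condition fired), which a single end-to-end net can't give you.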

That's where (in step 3) you might run into really complex problems. Throwing while running might be too different to assemble from "throwing" plus "running", so you'd get absurd behavior like running, dropping like a brick, standing up, and then throwing a perfect ball (kind of like children do).

Some human interaction within the learning process might not only speed things up but also prevent some comically disastrous solutions. But that won't be cheap, and there are the "out of the box" solutions the system might come up with that are never reached because the humans interrupt it whenever it wants to do a flip (sticking with the baseball example).

Learning among humans isn't that well understood either, so hopefully these two problems end up helping to solve each other.

And if it all works like we want it to, in a time far, far away, some ***bag will go: "Can we MKUltra this system?"