Posted
by
msmash
on Monday June 12, 2017 @11:20AM
from the AI-for-good dept.

An anonymous reader writes: Colin Walsh, data scientist at Vanderbilt University Medical Center, and his colleagues have created machine-learning algorithms that predict, with unnerving accuracy, the likelihood that a patient will attempt suicide. In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week. The prediction is based on data that's widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center that had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts. This set of more than 5,000 cases was used to train the machine to identify those at risk of attempted suicide compared to those who committed self-harm but showed no evidence of suicidal intent.
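Some back-of-the-envelope context for those percentages, using only the cohort numbers quoted in the summary; the "flag everyone" baseline below is this poster's framing, not anything from the paper.

```python
# Context for the quoted accuracy, using only the cohort numbers from the
# summary (5,167 admitted patients, 3,250 of whom attempted suicide).
total, attempts = 5167, 3250

# Trivially flagging every patient as at-risk already scores the base rate:
baseline_acc = attempts / total
print(f"majority-class baseline: {baseline_acc:.1%}")  # 62.9%
```

So 80-90% clearly beats the base rate for this high-risk cohort, but it says nothing about screening the general population, where the base rate is far lower.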

Of course, for a sufficiently vague definition of an algorithm, and for a sufficiently vague outcome requested, you could probably formalize brains as algorithms - although no known ANN comes close to how the human brain actually works (mostly because we still don't know how the human brain actually works). But that's still not what the word "algorithm" (e.g., "Euclid's algorithm") means in common parlance.

Your argument that BNNs don't use algorithms can be equally applied to ANNs. If you use a vague definition of algorithm, then it can apply to both. If you use a strict definition, it will apply to neither.

Not according to my profs in school. A key part of the definition of an algorithm was that it is guaranteed to terminate. It may take a long time, but it is guaranteed to return an answer someday. A heuristic doesn't have a guaranteed stopping condition, just a time limit the caller is willing to wait for the best solution found so far.
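For what it's worth, that distinction can be shown in a few lines of Python; the random-search "heuristic" below is a generic stand-in of my own, not anything from TFA.

```python
import random

def gcd(a, b):
    """Euclid's algorithm: guaranteed to terminate, because b is a
    non-negative integer that strictly decreases on every iteration."""
    while b:
        a, b = b, a % b
    return a

def heuristic_min(f, lo, hi, budget=1000):
    """Random-search heuristic: no stopping condition of its own, so the
    caller supplies a budget and accepts the best answer found so far."""
    best = None
    for _ in range(budget):
        x = random.uniform(lo, hi)
        if best is None or f(x) < f(best):
            best = x
    return best

print(gcd(1071, 462))  # 21, in finitely many steps, always
print(heuristic_min(lambda x: (x - 3) ** 2, 0, 10))  # roughly 3, budget permitting
```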

I believe this to be the typical definition of algorithm, not just a specialization for computer science. Note that the Merriam-Webster definition [merriam-webster.com] includes a particularly key

That AC you're responding to is far from being a 'fuckwit'; they actually understand what's going on. You, on the other hand, keep sipping the media hype-supplied Kool-Aid and don't know the difference between the ersatz and the real thing when it comes to so-called 'AI'. Go educate yourself.

If I knew that, I'd be richer than Bill Gates right now from the patents I'd own, and I wouldn't have to spend any time at all arguing with fools like you; I'd have my fully conscious, sentient, human-level AI do it for me, LOL. You don't have to know how something DOES work in order to identify that something else isn't equivalent to it. Now STFU and actually go educate yourself on the subject, using sources other than the media.

I -- and everyone else -- don't know how HUMAN cognition/consciousness/self-awareness/actual THOUGHT/creativity works

If you don't know how it works, then you can't claim that someone didn't capture the essential elements in an algorithm. The only meaningful thing you can do is look at the output, and the output is pretty good.

And now we go back to the top of the thread: You could create a questionnaire that does just as well at predicting who will try to kill themselves; are you going to call ink on a piece of paper an 'AI', too?

Until you can show me a so-called AI that is at least everything that defines us as human beings, then all you've got is a piss-poor imitation that doesn't deserve to be called 'artificial intelligence'. 'Machine learning' and 'algorithms' aren't even as smart as a dog-brain and don't qualify.

You are absolutely correct about the high school thing, but I think you need to learn more about algorithms and realize how broad a term it is. It's math on paper, it's used to solve Rubik's Cubes, for example, and it's used in computing. From all the responses above, I doubt anyone really bothered to do any research before commenting and just wanted to be "on screen." So, I gave a few links below to hopefully lessen the burden.

This story is about machine learning. Whether you consider machine learning to be "artificial intelligence" probably says more about your definition of "artificial intelligence" than it does about machine learning.

Machine learning definitely replaces human judgment at certain tasks -- in this case classifying a thing by its attributes -- however it does it in ways that an unaided human brain cannot duplicate. For example it might examine the goodness of fit of a large number of alternative (although structurally similar) algorithms against a vast body of training data.
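A toy sketch of that "goodness of fit over many structurally similar candidates" idea, on data invented for illustration:

```python
# Search a family of structurally similar threshold rules and keep the one
# that fits the training labels best -- the kind of brute-force model
# search a human couldn't do unaided. Data is made up.
data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]  # (feature, label)

def fit_threshold(data):
    best_t, best_acc = None, -1.0
    for t in [x for x, _ in data]:  # every observed value is a candidate cut
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = fit_threshold(data)
print(t, acc)  # 4 1.0 -- the threshold at 4 separates the toy data perfectly
```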

Many years ago, when I was a college student, AI enthusiasts used to say things like, "The best way to understand the human mind is to duplicate its functions." I believe that after three decades that has proven to be true, but not in the way people thought it would be true. It turns out the human way of doing things is just one possible way.

I think that's a pretty significant discovery. But is it "AI"? It's certainly not what people are expecting. On the plus side, methods like classification and regression trees produce algorithms that can be examined and critiqued analytically.
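To illustrate the "can be examined and critiqued" point, here is a minimal stump learner (a one-split tree chosen by Gini impurity) on made-up numbers. This is a sketch, not CART proper, but it shows the output being a human-readable rule rather than a black box.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows):
    """rows: list of (feature_value, label). Returns the cut point with
    the lowest weighted child impurity."""
    best = (None, float("inf"))
    for t in sorted({x for x, _ in rows}):
        left = [y for x, y in rows if x < t]
        right = [y for x, y in rows if x >= t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        if score < best[1]:
            best = (t, score)
    return best[0]

rows = [(22, 0), (25, 0), (31, 0), (47, 1), (52, 1), (60, 1)]  # invented
cut = best_split(rows)
print(f"rule: predict 1 if feature >= {cut}, else 0")  # an auditable rule
```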

Expert systems work in a completely different way than machine learning approaches. Expert systems do indeed require the analysis of human knowledge as a starting point. Machine learning approaches do not; they just need data.

You also cannot duplicate my human method without aid (and pretty sure not even WITH aid)

My point is that duplicating the way you think isn't really necessary. You can in many cases be replaced by something that works in a completely different way.

Expert systems work in a completely different way than machine learning approaches.

Proving that you don't know what you are talking about.

Converting your shit into a car analogy: "Bicycles work in a completely different way to tractor trailers"

You are just proving that you don't know anything about at least one of the two things you are trying to talk about. Didn't you know bikes are ridden? Didn't you know tractor trailers haul cargo? You think the difference is how they 'work'? Really?

Converting your shit into a car analogy: "Bicycles work in a completely different way to tractor trailers"

Exactly. I don't see why you think that's ridiculous. Bikes and tractor trailers have some broad similarities, but they're built to accomplish different things so analogies between them aren't particularly useful.

There is AI and there is AI. This program is almost certainly AI in some sense of the term.

On the one hand we have what people sometimes call general intelligence or "true AI". That means capable of independent and original thought, and possibly passing the Turing Test someday. (I'm not convinced that even a true AI will pass the Turing Test because its life experiences will be so different from those of a human, or at least won't pass until it becomes enough smarter than humans to be able to fake out the t

Technology is wonderful, but it has a dark side for the society to which it brings so much convenience: it requires conformity. As individuals put their lives online, those who disagree with the group-think and propaganda are easier to detect and punish, essentially criminalizing all deviation from normality. This is the very reason we don't want people with guns, often known as 'the government', watching everything we do.

Obedience to social conventions is required everywhere: At work, in town, in other perso

You have been deemed to be suicidal. Please check in to your nearest healthcare location. Refusal to do so will result in your being placed imminently into level two treatment, which may result in loss of job, loss of family, and the loss of your pet named Spot.

It's probably mostly meaningless. I mean, they scanned for features of people who are suicidal. They were in the hospital because they inflicted self-harm, and were on medications specifically prescribed to make people not do that. So as far as I can tell, this doesn't predict anything; it just measures that "80-90% of the time, doctors do the same thing for folks who would hurt themselves."

It's not like they randomly picked a bunch of people off the street and determined it from THAT. Like basically every other artificial intelligence or machine learning story, it's a bunch of dumb hype, eventually meant to get folks investing in stupid startups.

I expect that an 80-90% accuracy means that in a group of X people it correctly identifies 80-90% of the people who later go on to attempt suicide. However, if you ignore the false positive rate, then I can make an even simpler algorithm that is 100% accurate: simply tag everyone as a suicide risk.
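The "tag everyone" point can be made concrete in a few lines; the population and case counts below are invented for illustration.

```python
# A rule that flags EVERYONE catches 100% of true cases (perfect recall)
# while telling you nothing, because its precision collapses to the base
# rate. Hypothetical numbers.
def recall(tp, fn):
    return tp / (tp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

population, true_cases = 10_000, 50          # invented 0.5% base rate

# "Tag everyone" rule:
tp, fp, fn = true_cases, population - true_cases, 0
print(recall(tp, fn))     # 1.0 -- catches every case
print(precision(tp, fp))  # 0.005 -- 99.5% of flags are false alarms
```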

I wish that those reporting on medicine had a basic grasp of science and simple statistics, so that they could ask the relevant questions, such as: what is the false positive rate? Does 80-90% mean that your stat

The group that was being analyzed was already considered "high risk", out of 5,167 cases there were 3,250 attempted suicides. So even if those were false positives, it wasn't an amount that dwarfs the actual predictions. Now if they expand this to a larger, less risky group, who knows, but at this stage the false positive rate seems more than acceptable.

So who can do the calculations for the false positives and the false negatives? Because I am sure that it will calculate that I am willing to kill myself even when I have no desire to do so, and will tell me that I won't when I actually am.

Dystopian prediction: life insurance payout denied. Despite a clean tox screen, your relative was suicidal (according to our algorithm) and was intentionally driving at a time of night when she knew a lot of drunk drivers would be on the road.

Because I am sure that it will calculate that I am willing to kill myself even when I have no desire to do so, and will tell me that I won't when I actually am.

I'd like to take that test . . . just to see if I can avoid any long-term planning issues. So when the bank invites me to come around, so they can turn my worthless surplus cash in my bank account into their juicy sales commissions for dubious financial "products", I can tell them with a good conscience, "No, thanks, I'm probably going to commit suicide within the next two years anyway. AI said I would."

In the actual paper, they report precision = 0.79 and recall = 0.95, which means that they predicted nearly all of the attempts (very few false negatives) and most of what they predicted were actual suicide attempts (few false positives). They report the actual numbers, too, but that table is a pain to copy and paste.
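A back-of-the-envelope reconstruction of the confusion matrix those two figures imply, using the 3,250 attempts from the summary; rough estimates only, not the paper's actual table.

```python
# Confusion-matrix counts implied by the reported precision (0.79) and
# recall (0.95), given 3,250 attempts. Rough estimates, not exact figures.
attempts = 3250
prec, rec = 0.79, 0.95

tp = rec * attempts            # attempts the model correctly flagged
predicted_pos = tp / prec      # everything the model flagged
fp = predicted_pos - tp        # flags that were not actual attempts
fn = attempts - tp             # attempts the model missed

print(round(tp, 1), round(fp), round(fn, 1))  # 3087.5 821 162.5
```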

I don't need a computer to tell me that there is a good chance some of these people will attempt suicide again.

Yes, but which ones? That's the whole point, surely? You'd want to use this as a diagnostic tool, in cases where you're dealing with a lot of depressed people and you need to know which ones you particularly need to watch out for in terms of suicide risk. Mental health clinics would find this invaluable, wouldn't they?

It's pretty much the same thing as being able to tell a cardiac clinic which of their heart-disease patients are most at risk of having a heart attack soon. Obviously everyone who is a patient

This sounds way too unrealistic, even before analysing the methodology (how are they training the algorithm? By letting people die over the course of several years?!). I am not familiar with suicide-prone personalities, but "AI" certainly cannot understand them better than humans can. So, having an algorithm delivering 92% accuracy would imply that people could detect these situations even more accurately than that(?!)

It seems like yet another sample of AI-labelled-really-meaning-nothing hype (or dishonestly/ignorantly over-fitted, bl

It looks like that here, or that they were mostly dealing with not-committing-suicide-at-all people. You can get something like 92% either by having an almost perfect understanding of the given situation (an extremely unlikely scenario here) or by playing around with numbers and showing whatever you want to show.

(Clueless-CEO impression) Good work, houghi! We are very happy with you! But some of our clients aren't completely on board with this ±3%, because they think it might provoke cancer. Could you work this bit out, by next month perhaps? Ask for whatever you need. LOL.

I wrote a generic statement intended to provide a clear enough overall picture. As happens with most generic statements, proving its absolute validity/falsehood is virtually impossible. So, I am not sure why you are saying such a clear "no" followed by a (logically) pretty imprecise justification for it. Shall I understand this as a more-or-less-blind criticism (an attack on me?!), not exactly aiming to have a constructive discussion? Or am I misunderstanding your intention? OK, I will bite...

The different patterns of behavior could be so complicated and subtle that people can't pick them up, especially in an area where people tend to have biases.

But the question is: how are you expecting an algorithm, developed precisely by a person, to succeed where people fail? It doesn't seem too logical, right?

Teach the algorithm by providing it with a list of properties from patients in the past, together with the patient outcome (suicide after N days, or no suicide). The algorithm then searches for patterns in the properties that have a high chance of resulting in suicide.

The developer doesn't even need to be educated in the field of psychology.
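A minimal sketch of that "just feed it labeled data" loop: plain logistic regression trained by gradient descent. The two features and the outcomes are fabricated purely for illustration; the point is that no psychological knowledge is encoded anywhere, only (features, outcome) pairs.

```python
import math

# Fabricated records: ([feature1, feature2], outcome). In the real study
# these would be admission attributes and the recorded outcome.
records = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.15, 0.3], 0),
           ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.85, 0.75], 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):                          # repeated passes over the data
    for x, y in records:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))             # predicted probability
        for i in range(len(w)):                # gradient step on log-loss
            w[i] -= lr * (p - y) * x[i]
        b -= lr * (p - y)

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

print(predict([0.1, 0.1]) < 0.5, predict([0.9, 0.9]) > 0.5)  # True True
```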

The developer doesn't even need to be educated in the field of psychology.

You are again misinterpreting my point. A human understanding of the actions to be performed (= accurate prediction of suicides) is a basic requirement. It doesn't matter if this understanding comes from a group of people (which, at some point, will have to transmit the required knowledge to the given programmer), from a trial-and-error analysis, or from a bunch of random guesses. The algorithm can only output what its authors can understand, and its whole point is to speed up/ease the analysis of big amounts

A very quick one, the last one, I promise!! I will not continue answering what seem to be random ideas from a person without the required knowledge, completely unwilling to understand, and seriously expecting what seem to be random guesses to be true no matter what.

If a chess engine developer can be outperformed by his own algorithm, then a suicide predictor developer can also be outperformed by his own algorithm. It's the same concept.

You misunderstood the idea (again). With enough time and resources (manuals, advice from knowledgeable people, previous games, etc.), a person will always beat or draw with a chess program. The time and the management of the huge amount of information involved

A human understanding of the actions to be performed (= accurate prediction of suicides) is a basic requirement.

This is wrong. The basic requirements are a set of data on each individual case, including the desired final outcome. We enter data for patient 1 and whether patient 1 attempted suicide. We do the same for all the other patients in the "training" process. The "required knowledge" is objectively recorded, including whether the patient attempted suicide. The "training" is a mechanical process

Pfff.... Note that I have made quite a big effort to continue reading your comment after that opening sentence (after all the previous comments), but here I go once again...

We enter data for patient 1 and whether patient 1 attempted suicide. We do the same for all the other patients in the "training" process. The "required knowledge" is objectively recorded, including whether the patient attempted suicide. The "training" is a mechanical process, producing a set of arbitrary-looking parameters that have no obvious meaning. This is not an attempt to codify human understanding (which an expert system would do), but to create a program that will yield a certain output given certain input.

This is either false or representative of a seriously-flawed system. Blindly analysing random sets of data is the perfect recipe for disaster. Even by creating an algorithm very concerned about over-fitting aspects, over-fitting (or other kind of data misinterpretation) is very likely to occur. I don't think that any (serious enough) sy

This is either false or representative of a seriously-flawed system. Blindly analysing random sets of data is the perfect recipe for disaster.

Except when it works, and it often works much better than you appear to think. What matters is not what you think of the process, but how well the end product works. If the end product does a better job than human judgment, then it is a success.

I don't think that any (serious enough) system aiming to understand any situation has ever been developed by facing the a

??!! What was that?! Projection? Extreme irony? The most inoffensive, naive and pointless attack ever?! Don't you get it? Here you have a clearer version:
- Person 1 thinks that a deeper (expert) knowledge about the given conditions is a basic requisite to ever reach a good enough understanding about any situation.
- Person 2 blindly defends a

Statistically speaking, is there a reliable way to win the lottery? Statistically speaking, does whatever Charlie's mom does (I haven't watched the video) work? I'm an empiricist. Give me some evidence, such as a comparatively better success rate.

Let's see.
- Person one thinks that a deeper expert understanding is a basic requisite.
- Person two intelligently defends other approaches by pointing to evidence that they sometimes work. Person two has also mentioned that the approach used doesn't always

Long answer: [please, put the short answer here] because statistics/maths (science, engineering, etc.) are just ways to allow our limited understanding to somehow get more insights into too complex-for-our-immediate-grasp realities. They are basically tools, enhancements, extensions which only can complement our much more comprehensive remaining knowledge. Blindly believing in the first misinterpreted (because even the tools are

Simple accuracy percentages are misleading when applied to low-probability events. An "AI" that always returned "No" to the query "Will this person commit suicide within the next two years?" would be 97.2% accurate (and 99.975% accurate for the next-week variant). And yet, that "AI" would be absolutely useless for any practical purpose.
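Spelling out the parent's arithmetic; the prevalence figures below are inferred back from the quoted accuracies, not taken from the paper.

```python
# With a rare outcome, the constant "No" answer is right almost every time.
two_year_prevalence = 0.028     # implies 97.2% accuracy for always-"No"
one_week_prevalence = 0.00025   # implies 99.975% accuracy

print(f"always-'No', two-year window: {1 - two_year_prevalence:.1%}")  # 97.2%
print(f"always-'No', one-week window: {1 - one_week_prevalence:.3%}")  # 99.975%
```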

Not to mention, with suicides, access to means has been a better statistical predictor than anything else, even mental illness. A person with no personal or family history of mental illness, but with a gun and a gas oven in their house, is at higher risk of killing themselves than a bipolar alcoholic with neither.

Not that I think this is a particularly useful bit of research, but the study's patients' pretest probability of suicide was much higher than the general population's. These are people who are ADMITTED TO A HOSPITAL with concerns of self-harm. They've already passed a bunch of screens separating them from everybody else.

So you are talking about a group of people that the current system thinks is at some non-trivial risk of suicide, and trying to figure out which ones are at the highest risk.

So it's quite a bit more useful than some of the posters have been assuming. Still not sure how generalizable this will be, but give the researchers a bit of a break.

Suicide is a low-probability event in the general population, but their initial data set was not random; it was 5,000 patients already exhibiting symptoms of self-harm. Picking out the people in that group likely to kill themselves is a pretty impressive feat.

I agree. It isn't a surprise that modern machine learning can recognize patterns. I don't see how this is even close to innovative. Now, if it resulted in changing the treatment offered to patients such that outcomes were improved relative to current human doctors' recommendations, then that would be interesting.

As someone who's been down that road (but never gone through with an attempt), I automatically hate this invention. When depressed to that point, emotions tend to swing so hard and so fast that any mention of predictions during this state of mind is utmost bullshit.

The very slightest of triggers can either send you overboard or keep you in one piece, depending on how your inner conversation with yourself is going. This can be anything... a faint sound from a car passing by not too far away, or perhaps a song that reminds you of good/shitty times.

I consider myself lucky to be both scared of the afterlife enough to have thoughts force second-guessings into me (although the older I grow the less I care), and have enough positive triggers to bring myself back. Nobody, not even myself, could predict if these will always work for me as well as they have however.

Suicidal/depressive folks definitely need help, but not from the machines of this day and age. A positive trigger could well be overridden by a "fuck it", and it only takes a split second to follow through the act. You can't predict that kind of stuff with a high degree of accuracy, at least not yet.

Disclaimer: I did not RTFA. I find stuff like this appalling, as it hits me right in the feels, and I would be deeply insulted if a machine tried to guess whether I was going to kill myself or not. There's much more to it than some algorithms a team of engineers wrote.

You mentioned being in the oscillating state where anything can push you over. That's likely the state the machine is detecting. It isn't detecting exactly whether you'll do it or not, just whether your oscillation is high enough where the risk is sufficient that your environment is likely to present you with a situation.

So while I'll grant that it is improbable that the machine could predict *what* will push you too far, I suspect that it is far better than the average human at identifying whether you'r

When depressed to that point, emotions tend to swing so hard and so fast that any mention of predictions during this state of mind is utmost bullshit.

It doesn't try to predict if a person will try to commit suicide this second. Rather, I assume it tries to predict when a person will get "depressed to that point". So yes, emotions are unpredictable, but if you are sufficiently depressed, at some point you are likely to consider or attempt suicide.

It's like saying "winter is cold" even though you might have a couple 60 degree days in December - true enough in the big picture.

Of course, the software could be worthless, but I think such software *could* work.

In fact we are quite lucky to even be having this conversation, you and I, Anonymous Coward. Astronomically so. I am, however, far from wrong. These "statistics" are hogwash. Place them back in your ass where they came from.

This reminded me of a sci-fi novel in which an AI arranges for people to die in bizarre and apparently accidental ways by interfering with other automated systems.

As mentioned in other comments, this is just an algorithm, but maybe it's not a huge leap to a more complex system doing the same thing, and given the goal of improving the accuracy percentage... well, there's one option that would work: just kill off individuals who have already been flagged as at risk.

So, once the computer diagnoses someone as highly likely to kill themselves in the next week, then does it (or the user) call the men in white coats to give the subject the coat with the funny sleeves? Therapists frequently have a statutory or license requirement to report potential suicides.
We don't know what the rate of false positives is, but with our current state of health insurance, getting locked up for a week and then getting a $50k bill would probably drive most people to suicide.

And can they be sued for false negatives? If someone commits suicide but the family finds out that the system didn't flag them as a risk, then are they at risk for a lawsuit? I'm sure that someone will sue, but what the courts decide their responsibility was is a different matter.

I doubt the person would get locked away for the week, but I'm sure a visit from a social worker, or someone with some training in spotting the signs of someone who might commit suicide soon, would be arranged. Which then leads into w

The 80s called, Comrade! They want their Soviet meme back. In the meantime, Cuban, North Korean & Venezuelan comrades are up in arms at a non-Communist entity like Russia still keeping the 'comrade' moniker