Tweet-analyzing algorithm can detect depression sooner than a human doctor

As well as being an instantaneous method for sharing news and views with millions of people around the world, could Twitter also be a way of helping predict mental health issues such as clinical depression? Absolutely, claim researchers from Harvard University, Stanford University, and the University of Vermont. With that goal in mind, they analyzed data from hundreds of Twitter users to see if they could detect changes in language that may correlate with depression or post-traumatic stress disorder (PTSD). Their newly developed algorithm suggests that not only is such a thing possible, but that it could actually highlight telltale signs 100 to 200 days before clinical diagnosis — thereby making it a valuable predictive tool.

“We recruited individuals who were active on social media, and had been diagnosed by a psychiatrist as having depression or PTSD,” Chris Danforth, a researcher on the project from the University of Vermont, told Digital Trends. “Using a subset of their tweets, we trained an algorithm to identify differences between their behavior and that of a control population on Twitter who had not been diagnosed. Our main finding is that there are predictive markers distinguishing the two groups, and this is often true well before individuals are first diagnosed with these mental health problems.”

In the case of clinical depression, the linguistic markers included a tendency for sufferers to use negative words such as “no,” “never,” and “death” more often, and positive ones such as “happy” and “photo” less often. It may sound obvious that an unhappy person will use unhappy words, but analysis of the posts people make on social media over time could turn out to be far more revealing in this capacity than the words a person uses in a short appointment with a doctor. That is, if a person is even willing to seek the help of a medical professional.
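To make the idea concrete, here is a minimal sketch of how word-frequency markers like these could be computed over a user's tweet history. This is not the researchers' actual model; the word lists contain only the illustrative examples mentioned in the article, and the threshold is an arbitrary placeholder.

```python
import re

# Illustrative marker lists taken from the article; a real system would use
# a much larger, validated lexicon.
NEGATIVE_MARKERS = {"no", "never", "death"}
POSITIVE_MARKERS = {"happy", "photo"}


def marker_rates(tweets):
    """Return (negative_rate, positive_rate): marker words per total words."""
    words = [w for t in tweets for w in re.findall(r"[a-z']+", t.lower())]
    if not words:
        return 0.0, 0.0
    neg = sum(w in NEGATIVE_MARKERS for w in words)
    pos = sum(w in POSITIVE_MARKERS for w in words)
    return neg / len(words), pos / len(words)


def flag_for_screening(tweets, threshold=0.02):
    """Crude heuristic (hypothetical): flag a history for follow-up when
    negative markers outpace positive ones by more than the threshold."""
    neg_rate, pos_rate = marker_rates(tweets)
    return neg_rate - pos_rate > threshold
```

In practice a classifier would be trained on many such features rather than hand-set thresholds, but the sketch captures the core intuition: relative word frequencies over a long posting history, not any single message, carry the signal.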

“We hope that our research will eventually help improve mental health care, for example in preventive screening,” Katharina Lix, another researcher on the project from Stanford, told us. “We could imagine clinicians using this technology as a supporting tool during a patient’s initial assessment, provided that the patient has agreed to have their social media data used in this way. However, before we get to that point, the technology needs to be validated using a larger sample of people that’s representative of the general population. We want to emphasize that any real-world application of this technology must carefully take into account ethical and privacy concerns.”