By Paul Biegler

10 November 2017 — 6:52pm

Two recent events caused me to dwell on the observation, sometimes attributed to Joseph Stalin, that "one death is a tragedy; one million deaths is a statistic".

The first was the release of Australian Bureau of Statistics data in September affirming that suicide remains the leading cause of death for Australians aged 15 to 44, something many will read without turning a hair. The second was a chance encounter at the park with my friend and his son, who had been joined by another schoolboy as they walked their dog.

Sensors in smartphones and wearable devices could provide a key to preventing suicides. Credit: Richard Giliberto

But this was no ordinary play date.

The boy's mother had taken her own life precisely five days earlier, after a long struggle with mental illness. Those black-and-white statistics, which recorded 2866 deaths by suicide in Australia in 2016, were made suddenly and sickeningly real.


They also drive home the urgency of a quiet revolution that is upending the way psychological illness is being diagnosed and treated, through the use of sensors in smartphones and wearables such as Fitbits and smart watches, which can reach into people's lives at critical moments in ways not previously possible.

"For most people the peak period of risk for suicide is only about 10 or 15 minutes," says Professor Nick Allen, who heads up the freshly minted Centre for Digital Mental Health at the University of Oregon, which launched in September.

"People make plans to attempt suicide often months earlier … but the actual high risk of intention to act, for many people, is a period that is relatively short-lived," says Allen, who stresses that predicting that point is pivotal to averting a crisis.

Allen's team is conducting research that aims to exploit sensors on phones and wearables, including the GPS, accelerometer, microphone and camera, to extract data that turn the phone into a prediction tool for mental health deterioration. It can also intervene by alerting a carer or clinician.

Allen, who recently moved to the US and remains an honorary professor at the University of Melbourne, explains that the microphone, and the voice, are key.

"If you know someone very well, say your partner, and they ring you up and say 'Hi Paul' (in a lowered tone) … you will immediately say 'What's wrong?' The voice is very rich with emotional information, probably second only to the face," he says.

Allen has collaborated with Margaret Lech, an associate professor at RMIT, in work that shows a characteristically dull inflection of the voice can predict major depression in adolescents. A delayed "switching" time, where you take longer to respond to another person during a conversation, also correlates with depression.

Another tell-tale clue is increased use of the first-person pronouns I, me, my and mine, a sign of self-orientation that predicts a downward mood spiral. It can be picked up through texts, emails, social media posts and, in a world where chatting to Siri is becoming second nature, natural language processing of speech to devices.
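As a rough illustration of the idea (not the researchers' actual method), the pronoun signal can be reduced to a single number: the fraction of a person's words that are first-person singular pronouns. The word list and threshold-free scoring here are assumptions for the sake of the sketch.

```python
import re

# First-person singular pronouns whose elevated use has been linked
# to low mood in the research described above.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    """Fraction of tokens that are first-person singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in FIRST_PERSON)
    return hits / len(tokens)

print(first_person_rate("I feel like my plans never work out for me."))  # 0.3
```

A real system would track how this rate drifts over weeks of messages, not score a single sentence.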

It sounds pervasive but, according to Allen, we are at a critical juncture in mental health and radical change is needed.

"In developed countries ... we've gone through an absolute sea change in availability of both pharmacological and psychological treatments for mental health problems. And yet we're not seeing the burden of disease coming down," he says.

"I'm a fairly geeky old person. I use my phone to check the football scores and tell my wife I'm running late. But my daughter, who is 16, uses her phone to flirt with people, to fall in love, to have fights, to find out information about things that are core to her identity ... and her sense of well being. All of this behaviour is being quantified through our natural usage of our consumer devices."

This unprecedented access to people's lives is increasingly exploited by tech multinationals to target ads at us – "if it's free, you're the product", so the new mantra goes. But Allen wants to see a pivot that hands meaningful data back to people, empowering them to better manage their mental health.

Those data go beyond just speech and text. People move less in depression and more in mania, something the phone's GPS and accelerometer can track. Sleep patterns alter in depression, which can be measured by how long a smartphone screen is off or whether it is being charged. And even extreme technophobes should now be aware - with the release of the iPhone X, which uses facial identification as an unlocking mechanism - that phones can scan faces.
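The screen-off sleep proxy mentioned above can be sketched very simply: if you log the times the screen turns on, the longest overnight gap between two of them approximates the main sleep period. This is a toy illustration under that assumption, not any app's actual algorithm.

```python
from datetime import datetime, timedelta

def longest_screen_off(events):
    """Longest gap between consecutive screen-on events,
    a crude proxy for the main sleep period.
    `events` is a sorted list of datetimes when the screen turned on."""
    best = timedelta(0)
    for earlier, later in zip(events, events[1:]):
        best = max(best, later - earlier)
    return best

events = [
    datetime(2017, 11, 10, 22, 40),  # last check before bed
    datetime(2017, 11, 11, 7, 5),    # first check next morning
    datetime(2017, 11, 11, 7, 30),
]
print(longest_screen_off(events))  # 8:25:00
```

Shrinking or fragmenting gaps over successive nights would be the kind of drift a depression-monitoring model might flag.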

Allen is using software, developed by Jeffrey Cohn at the University of Pittsburgh, that wraps facial images in a virtual 3D mask that can measure your emotion from a selfie. The demo video includes an engaging Hugh Jackman as test subject.

Bundle it all together and you have millions of data points that can be crunched by machine learning algorithms with the aim of predicting mental health crises such as emergency department presentations, hospital admissions or suicide attempts.

But there is another reason, says Allen, why smart sensing will reshape the mental health paradigm.

"When we work clinically with people ... they come in and we say, 'How was your week?' and they say, 'It was kind of like this' and then we assume that what they tell us is a pretty accurate picture. This is no different than the way Freud did things," he says.

But this consultation room banter is often at odds with a client's self-report diary which might, for example, show that Sunday was actually pretty crappy. Sensor information promises to cut out the (often inaccurate) middleman: us.

"This is going to be our MRI scan in behaviour, that transfers across time and does so in a way that is unobtrusive and allows us to get a real sense of how the patient's week has been," says Allen.

Allen's research isn't at the app stage yet, but there are thousands of mental health apps out there making a myriad of claims. The quality is, to say the least, variable.

Dr John Torous, a Harvard-trained psychiatrist with a background in computer science and engineering, chairs an American Psychiatric Association working group that is scrutinising mental health apps.

In January they launched a website that helps mental health professionals evaluate apps they might be considering for clients, based on evidence, ease of use, privacy and security.


Torous, who was in Sydney in October to speak to the Black Dog Institute, points out that the US Food and Drug Administration (FDA) does not vet mental health apps. Smartphones only fall under FDA jurisdiction when they form part of a medical device that can, for example, monitor heart rhythm or brain waves. In the vacuum left by the FDA, the regulatory burden defaults to none other than the purveyors of those apps, mostly Google and Apple. That is, potentially, a very big problem.

"Just because there are a lot of [mental health apps] out there doesn't mean that they are useful, and some certainly can be quite the opposite. They can be harmful and dangerous," says Torous.

Those dangers extend beyond dodgy advice.

A memorable lapse occurred in September 2015, when NHS England's Health Apps Library was found to be leaking personal information that could be used for fraud or identity theft, due to some apps flouting privacy standards.

Torous is being very proactive in raising the bar. He's working closely with Harvard University's Jukka-Pekka Onnela, developer of Beiwe, a research app that will be made available to other investigators free and open source in early 2018.

In a study of people with schizophrenia, published in October and co-authored by Torous, data from Beiwe helped correctly classify sleep quality in 85 per cent of participants, compared to the gold standard of a sleep study in the clinic. That's important because sleep disturbance in schizophrenia is related to symptom severity, and risk of relapse and suicide.

Aung says their app's clients include not only healthcare providers and wellness programs attached to universities, but pharmaceutical companies.

Digital data is increasingly attractive to drug companies wanting, among other things, to test the effectiveness of a sleeping pill, or get a handle on drug side effects such as somnolence, insomnia or suicidal thoughts.

Aung also reveals one of the app's features is to "wake up" when an audio stream is detected and to measure the amount of human voice versus other noise.

It then calculates a "log of sociability" that can indicate social withdrawal, often seen in depression and anxiety.
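The "log of sociability" described above amounts to tallying how much of the captured audio is human speech versus other noise. A minimal sketch, assuming an upstream voice-activity detector has already labelled each audio segment:

```python
def sociability_log(segments):
    """segments: list of (duration_seconds, is_human_voice) tuples
    produced by an (assumed) upstream voice-activity detector.
    Returns the fraction of captured audio that was human speech."""
    total = sum(d for d, _ in segments)
    voice = sum(d for d, v in segments if v)
    return voice / total if total else 0.0

day = [(120, True), (300, False), (80, True)]
print(round(sociability_log(day), 2))  # 0.4
```

A falling day-over-day fraction would be the signal of social withdrawal the app is looking for.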

But, says Aung, the sensing domain is set to get bigger.

"With the Internet of Things ... it is not just about the phone, but the house, car, other wearables. That, coupled with big data science plus advances in machine learning, is completely new territory. We're going to get a lot of big steps," he says.

Could sensors in your fridge door help diagnose depression by detecting an appetite change?

Maybe. But a huge challenge for the field is to ensure health gains are not undercut by a tsunami of ethical concerns.

Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford, acknowledges threats to privacy and security are worrying, but says mental health apps come with their own set of risks.


A big issue is the business model that delivers a "free" app in return for barraging the customer with advertising, often very precisely targeted.

"We are all essentially being gently, implicitly, relentlessly nudged towards some decision, choice or lifestyle," says Floridi, who was the keynote speaker at the Networked Society Symposium at Melbourne Uni in October.

"Multiply that by the fact that someone is even more vulnerable, even more fragile, even more easily influenced. That is a huge problem," he says.

To take an egregious example, imagine, says Floridi, an algorithm that exposed people with a substance abuse disorder to ads for cheap alcohol.

Floridi also worries the explosion of data will make it even easier for unscrupulous governments to mine information in order to, for example, quell protests or prevent mass gatherings.

There is, says Floridi, a range of solutions, including better scrutiny of apps by users and promoting ethical app development, but he also thinks increased regulation is on the horizon.

"Maybe data should not be retained for longer than x amount of time. Or maybe apps should actually be paid for and, therefore, collect no [additional personal] data," he says.

Floridi is also mindful of the massive role played by the tech giants.

"Multinationals have more power and more influence than many governments, and yet we don't admit them in the same room when decisions are being taken," he says.

"Should we have a table where we invite the top 10 digital companies, say, in the European Union, Australia and the US, and start coordinating, together, decisions that are good for everybody?"

Floridi acknowledges the suggestion is utopian and not necessarily democratic, but his appeal to "realpolitik" perhaps testifies to the impotence of regulators in the face of the tech behemoths.

For the present, Allen confirms, we are making do with Google and Apple as the gatekeepers deciding which mental health apps make it into their online stores: "It is the Wild West at the moment. Which is great for innovation, but terrible for quality control."

For help or information visit beyondblue.org.au, call Suicide Helpline Victoria on 1300 651 251, or Lifeline on 13 11 14.