The lead article in APA’s July 2019 issue of the Monitor on Psychology is titled “Better Ways to Prevent Suicide.” It’s the first article in a series titled “More Impact Together,” which will highlight the interdependent ways in which psychologists work together to address some of society’s most daunting challenges. The journal chose to begin the series with suicide, which has been on the rise in the U.S. The article underscores the ways that psychologists from a wide range of specialties, aided by other scientists, are tackling the crisis. For example, scientists from a variety of disciplines are examining associated brain changes and risk factors, while others are looking at new ways to discover who is at risk. Similarly, clinical researchers are testing new interventions while front-line clinicians help deliver those treatments to those who are suffering.
I went on to read the article because the topic of suicide was a focus in my early career. Right after I finished my PhD in clinical psychology at the University of Michigan, I was hired by Counseling Services there to set up a Suicide Hotline for the campus and surrounding community and to train the phone volunteers. So I needed to be a quick study and come up to speed with what was known about suicide prevention at the time. Of course, there has been an explosion of research since then.
Anyone who knows me or my work will be aware that I’m an inveterate technophile, having at one time planned to become an electrical engineer. Consequently, my podcasting and blogging have focused on the ways high tech is affecting our field and society at large.
You’ll not be surprised, then, to learn that my interest in the suicide article was particularly piqued when I encountered mentions of AI (artificial intelligence) and its subset, machine learning. These are developments I’ve been following in recent years, but I had no idea they had already become tools for data analysis in psychology. It’s been a long time since my grad school courses in statistics and research design. The statistics I cut my teeth on back in those early days were t-tests, correlation coefficients, chi-square, analysis of variance, and the like. As computers were becoming available, factor analysis was the hot new thing. Given that I was bound to become a therapist, I never bothered to get my hands dirty with factor analysis. I’m under the impression that the sorts of statistical assumptions and reasoning that underlay those early approaches may be built into some AI applications.
The place where Machine Learning really excels is with very large data sets. The software is able to sort through it all to discover relationships that would elude our human brains.
But I digress. Getting back to the Monitor’s suicide article, it reports that “Suicide is the 10th-leading cause of death in the United States, overall. For people ages 35 to 54, it ranks fourth, and for 10- to 34-year-olds, second.” One of the many areas of suicide research that sorely needs improvement is risk prediction. In the past, sample sizes were too small to provide the statistical power needed. There are well-known factors associated with increased suicide risk, including depression, anxiety, sociodemographic factors, and substance use.
The ability to accurately predict suicide is important for future research into the causes of suicide as well as an aid to making clinical decisions about the likelihood of a given patient committing suicide. Consequently, I was surprised to discover that after more than five decades of research, the various instruments designed to predict suicidality are accurate only 50% of the time. In other words, the clinician might as well just flip a coin to judge whether the person in front of them was likely to kill himself or herself!
Imagine my excitement when I read about a recent study that used machine learning to achieve a prediction accuracy of 84% (Walsh et al. Predicting risk of suicide attempts over time through machine learning. Clinical Psychological Science. 2017;5(3):457-469). They applied machine learning to the electronic health records of more than 5,000 adults who had a history of self-injury. They developed an algorithm that predicted suicide attempts based on combinations of risk factors including demographic data, previous diagnoses, medication history and past health-care utilization. As one of the researchers observed, “Machine learning can take us from near-random guessing to a prediction that’s about 80% correct.”
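To give a concrete sense of what that kind of study involves, here is a minimal sketch of the general approach: training a classifier on tabular, record-style risk factors and scoring how well it separates cases from non-cases. This is not the Walsh et al. pipeline; the data are synthetic and the feature names are purely illustrative.

```python
# Hedged sketch of risk prediction from tabular record features.
# Synthetic data only; features and weights are illustrative assumptions,
# not values from Walsh et al. (2017).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000  # roughly the cohort size mentioned in the study

# Illustrative features: age, number of prior diagnoses, number of
# medications, past-year health-care visits.
X = np.column_stack([
    rng.integers(18, 80, n),   # age
    rng.poisson(2, n),         # prior diagnoses
    rng.poisson(1, n),         # medications
    rng.poisson(3, n),         # past-year visits
])

# Simulate a label that depends on a combination of those factors,
# so there is a real (but noisy) signal for the model to find.
risk = 0.2 * X[:, 1] + 0.3 * X[:, 2] + 0.1 * X[:, 3] - 1.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# AUC = probability the model ranks a true case above a non-case;
# 0.5 is coin-flip performance, 1.0 is perfect discrimination.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.2f}")
```

The point of the sketch is the framing, not the numbers: the model learns from combinations of factors rather than any single predictor, which is exactly where classical one-variable-at-a-time statistics struggle.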
This struck me as a real leap ahead, and it seemed to dovetail with reports from medicine that AI/machine-learning systems have correctly identified photos of skin cancers as benign or malignant more accurately than a panel of dermatologists. There have been similar reports of deep-learning systems’ superior identification of a variety of eye diseases from retinal scans.
However, my initial enthusiasm has been tempered somewhat by additional reading on the Internet. An online article in the Washington Post cautions that suicide prediction technology is revolutionary but badly needs oversight. It notes that “Corporations outside health care are racing to use AI to predict suicide in billions of consumers, and they treat their methods as proprietary trade secrets. These private-sector efforts are completely unregulated, potentially putting at risk people’s privacy, safety and autonomy, even in the service of an important new tool.” Facebook is a major player in this initiative, and as the author cautions, while the company does not share its methods, it notoriously shares user data with data brokers, which could lead to dire legal, employment, or even law-enforcement consequences for unsuspecting Facebook users. As the Washington Post article asks, “Should we trust Facebook to dispatch police to the homes of distraught users?”
Meanwhile, despite the enthusiasm over AI’s successes in correctly discriminating visual images of skin cancers and retinas as either normal or pathological, progress in other medical applications has not lived up to expectations. An April 2, 2019, article in the online IEEE Spectrum reports that after IBM’s Watson (an AI supercomputer) defeated two Jeopardy! champions, the company invested heavily in developing medical applications. According to the article,
IBM’s bold attempt to revolutionize health care began in 2011. The day after Watson thoroughly defeated two human champions in the game of Jeopardy!, IBM announced a new career path for its AI quiz-show winner: It would become an AI doctor. IBM would take the breakthrough technology it showed off on television—mainly, the ability to understand natural language—and apply it to medicine. Watson’s first commercial offerings for health care would be available in 18 to 24 months, the company promised.
In fact, the projects that IBM announced that first day did not yield commercial products. In the eight years since, IBM has trumpeted many more high-profile efforts to develop AI-powered medical technology—many of which have fizzled, and a few of which have failed spectacularly. The company spent billions on acquisitions to bolster its internal efforts, but insiders say the acquired companies haven’t yet contributed much. And the products that have emerged from IBM’s Watson Health division are nothing like the brilliant AI doctor that was once envisioned: They’re more like AI assistants that can perform certain routine tasks.
Clearly, I need to throttle back my techno-enthusiasm, but I do expect we will see other significant contributions from AI to psychological science and practice in the future, beyond the demonstrated ability to improve suicide prediction.