Will Semi-Smart AIs Replace Psychotherapists?
David Van Nuys
There’s a lot of excitement in the air these days about artificial intelligence (AI), along with a lot of speculation about who will be impacted in the workplace and by how much. We take it for granted that AI is in the process of transforming manufacturing along with many forms of blue-collar work.
What about white-collar work, such as psychotherapy? We have already seen AIs taking over some of the work in white-collar fields such as journalism, education, and the law, but the current consensus seems to be that AI will not totally replace human workers in these jobs, at least not in the short term. The more likely scenario, which we are already witnessing, is that AI will be used to create tools that assist white-collar workers by taking over the drudgery of repetitive tasks, and at superhuman speeds. By using these intelligent assistants, white-collar workers can work more efficiently, allowing them to do more of the high-level thinking. Rather than being replaced, human beings are freed up for the parts where humans excel, such as insight, empathy, creativity, and contextual understanding.
There is so much enthusiasm about the promise of AI that it is being hyped and oversold. Consequently, we can expect major marketing assaults from a variety of companies peddling AI products and services. Much as we’ve seen with soap powders and toothpaste, I predict we will be seeing slogans like: “New and improved—contains AI!” Buyers, beware of rampant puffery.
AI does have its talents, however, and one of the most useful so far is pattern recognition. When large datasets are fed into an AI system, it is sometimes able to recognize patterns with greater accuracy than human experts. One example I’ve written about previously is an AI that, after being trained on a large number of photos of possible skin cancers, outperformed human experts in recognizing those most likely to be deadly (Van Nuys, 2017). And speaking of deadly, there is a recent report that Google has developed an AI that can predict with 95% accuracy when a hospital patient will die (Felton, 2018).
Two examples that are more in the mental health domain relate to schizophrenia and depression. According to a post from the IBM newsroom, IBM scientists have collaborated with researchers at the University of Alberta and the IBM Alberta Centre for Advanced Studies to publish new research regarding the use of AI and machine-learning algorithms that can predict instances of schizophrenia with 74% accuracy (Gheiratmand et al., 2017). Based on these findings, they predicted that “computational psychiatry” could be used to help clinicians assess patients with schizophrenia more quickly and therefore begin treatment sooner.
AI is also being used to help in the detection and treatment of depression. A recent article in Tech Emergence states: “Depression is a leading mental disorder impacting about 16 million Americans. According to the World Health Organization, the annual global economic impact of depression is estimated at $1 trillion and is projected to be the leading cause of disability by 2020” (Senaar, 2018).
Researchers at the University of Texas at Austin are using a supercomputer called Stampede to develop a machine-learning algorithm to detect depression (Schnyer, Clasen, Gonzalez, & Beevers, 2017). The program identifies commonalities among patients using magnetic resonance imaging (MRI) brain scans, genomics data, and other factors in order to predict risk for those with depression and anxiety. From the analysis of hundreds of patient data inputs, the researchers classified individuals with major depressive disorder with 75% accuracy, providing a basis for a workable diagnostic tool.
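For readers curious about what this kind of classification actually involves, the core step can be sketched in a few lines. The following is a toy illustration only, with synthetic data standing in for the study’s MRI-derived measures, and hypothetical sample sizes and feature counts; it mirrors the general approach (the published work used support-vector classification) but is not the authors’ pipeline.

```python
# Toy sketch of classifying "depressed" vs. "control" from numeric features.
# Synthetic data only; real studies use MRI-derived measures, not random numbers.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_features = 50, 10  # hypothetical sample size and feature count

# Two synthetic groups whose feature distributions differ slightly.
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
patients = rng.normal(0.8, 1.0, size=(n_per_group, n_features))
X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 1 = depressed

# Train a linear support vector classifier and estimate accuracy
# with 5-fold cross-validation, as such studies typically report.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

The reported 75% figure is exactly this kind of cross-validated accuracy: the proportion of held-out individuals the model assigns to the correct group.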
What sorts of incursions into the practice of psychotherapy can we expect?
These reflections were triggered, in part, by my recent Shrink Rap Radio interview with Silja Litvin (http://shrinkrapradio.com/images/600-The-Future-of-Psychotherapy-Apps-with-Silja-Litvin.mp3), a British psychologist who is founder and CEO of PsycApps (https://www.psycapps.com/about/). Her company’s first product is “eQuoo” (pronounced “EQ”), a name deriving from Litvin’s conviction that emotional intelligence is actually more important for success in today’s world than IQ. Having spent time playing this game, I think it could be marketed as a therapy app, although at this point they’re marketing it not as such but as an “emotional fitness” game for mobile devices, to widen its acceptance and potential audience. Unfortunately, the word “therapy” still carries a certain stigma for the general population, whereas emotional fitness education is more palatable. Either way, they’ve succeeded in making eQuoo both educational and compelling through a mix of storytelling, gamification, and animation. In fact, one of their goals was to overcome the major drawback of competitors in this category: people tend to drop out too soon. The app leads users painlessly through evidence-based information drawn from positive psychology, cognitive-behavioral therapy, couples therapy, brain research, Big Five personality theory, and other psychological approaches that build resilience, optimism, and hope. From my own observations and experience with this game, I think eQuoo is spearheading a revolution in mental health apps that eventually will incorporate AI and machine learning.
Right now, Litvin is focusing on apps/games that help users to help themselves. It’s clear to me, however, that already eQuoo could be a leader in games or apps that are used in tandem with traditional person-to-person therapies (e.g., as homework between sessions).
Silja Litvin made me aware of another phenomenon in more or less the same space: the emergence of social media apps and websites designed to put mental health sufferers in touch with others who share their particular condition. For example, MIT researcher Robert Morris wanted a faster and less expensive way to improve mental health than what he’d seen in his experience with traditional therapy. As part of his doctoral dissertation, he developed Panoply, a website where people could post their problems and other users could re-frame them in less condemning ways (Hardesty, 2015). The site is no longer open to the general public, but it led to a social networking app called Koko, as reported in the Huffington Post (Holmes, 2015):
Koko operates just like any other social networking app in which you can post statuses and respond to other users’ content. The difference lies in what comes after you publish what’s on your mind. App users see your post and use a research-backed technique called “reframing” to make you think about an anxiety in a new way. . . . Reframing is all about changing how we think to change how we feel. When we’re stressed, we often become our worst enemy. We tell ourselves we can’t do it.
In my interview with Silja, she also made reference to the growing number of mindfulness and meditation apps such as Calm, Headspace, Aura, and 10% Happier. While these are not, strictly speaking, therapy apps, they certainly can be considered adjunctive, and no doubt we will see more and more apps serving that role.
This all raises the question as to when or whether an AI will have the sort of general intelligence to actually stand in for a psychotherapist in a truly convincing way. I believe it’s unlikely to happen in our lifetimes, if ever. The problem is that AIs don’t have a wide-enough understanding of the world. They can be very smart in defined contexts, but they don’t have the breadth of experience or the understanding of nuance that humans possess. Amazon, in its quest to push that envelope, is offering a $3.5 million prize to developers in its Alexa competition to build an AI that can chat like a human (Vincent, 2018). The steepness of the challenge is revealed in the mistakes the AI makes. For example, one chatbot said in a conversation about Christmas: “You know what I realized the other day? Santa Claus is the most elaborate lie ever told.” At one level, that’s an amazing and provocative statement for a machine to make, but imagine what a spoiler it would be if it were holding this conversation with a small child. The machine simply doesn’t have enough real-world input (i.e., experience) to have the wisdom to anticipate how inappropriate that statement might be.
So, I think psychoanalysts and their ilk can rest easy for some time.
Felton, J. (2018, June 19). Google AI can predict when you’ll die with 95 percent accuracy. IFLScience. Retrieved from http://www.iflscience.com/health-and-medicine/google-ai-can-predict-when-youll-die-with-95-percent-accuracy/
Gheiratmand, M., Rish, I., Cecchi, G. A., Brown, M. R. G., Greiner, R., Polosecki, P. I., . . . Dursun, S. M. (2017). Learning stable and predictive network-based patterns of schizophrenia and its clinical symptoms. npj Schizophrenia, 3, Article 22. doi:10.1038/s41537-017-0022-8
Hardesty, L. (2015, March 30). Crowdsourced tool for depression: Peer-to-peer application outperforms conventional self-help technique for easing depression, anxiety. MIT News. Retrieved from http://news.mit.edu/2015/crowdsourced-depression-tool-0330
Schnyer, D. M., Clasen, P. C., Gonzalez, C., & Beevers, C. G. (2017). Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder. Psychiatry Research: Neuroimaging, 264, 1–9. doi:10.1016/j.pscychresns.2017.03.003
Senaar, K. (2018, January 22). Diagnosing and treating depression with AI and machine learning. Tech Emergence. Retrieved from https://www.techemergence.com/diagnosing-and-treating-depression-with-ai-ml/
Van Nuys, D. (2017, November 9). Humanistic AI. Age of Robots. Retrieved from https://www.neuroroboticsmagazine.com/humanistic-ai/
Vincent, J. (2018, June 13). Inside Amazon’s $3.5 million competition to make Alexa chat like a human. The Verge. Retrieved from https://www.theverge.com/2018/6/13/17453994/amazon-alexa-prize-2018-competition-conversational-ai-chatbots