
On robots, depression and the lure of the not-quite-real

Would you have a robot as your therapist? You have 24/7 access; you control your schedule, with no waiting for an appointment; you won’t feel rushed; your “therapist” will remind you of medication and will sense a potential crisis; your unbiased bot is nonjudgmental, a big plus in a culture that still stigmatizes mental affliction. And mental illness is intensifying as our viral purgatory lingers on, with likely new waves due to both mutations and human thoughtlessness.

The mental fallout is crushing. Global anguish pre-COVID was appalling enough, with more than 250 million reported cases of depression worldwide, according to the World Health Organization. Since then, levels of stress, depression, burnout, loneliness, and suicidal ideation have mushroomed. A recent Kaiser Foundation study finds 41 percent of American adults reporting symptoms of anxiety and depression, a sizable increase from 11 percent in 2019. There are simply not enough humans to offer care.

Enter the robots.

Woebot is the newest in “chatbot” therapy, a conversational artificial intelligence engineered to deliver cognitive behavioral therapy, a big leap up from Carl Rogers’ nondirective counseling in the late 1940s, which thrived in the churning wake of the collective trauma of World War II, the Korean War, and Cold War politics. Far surpassing its progenitor, the first chatbot, MIT computer scientist Joseph Weizenbaum’s 1966 “Eliza,” Woebot draws on natural language processing, deep learning, neural networks, and predictive analytics. Developed by Woebot Health, a startup company designing digital therapeutics for mental health, it’s a handy online tool with smartphone apps.

What is curiously striking — though not surprising, since iPhones have become our digital vital organs — is that we humans may well prefer to reveal our deepest fears and secrets to machines rather than to humans. In their 2020 global workplace survey, Workplace Intelligence and Oracle reported that 68 percent of their more than 12,000 respondents favored sharing their emotional issues with a robot instead of a human. Again, no surprise given our enduring personification of objects, like our trusty household companions and sources of information, Apple’s Siri and Amazon’s Alexa. And indeed, isn’t much of our online activity, like online shopping, already a type of therapy?

Yes, the benefits are considerable. But think carefully. What is it with humans talking with, not to, machines? Are we actually conversing? Does the machine genuinely understand what we say? Therapy requires a deep, reflective capacity for empathy. So, can these bots offer empathy, the kind that comes with that good old-fashioned face-to-face, with its multidimensional cues and nuances?

Empathy. It is the au courant term we toss around with a variety of meanings depending on the flavor of our context, whether neuroscience, primatology, politics, or ethics. “I feel your pain” stokes a painfully simplistic view of empathy’s unmitigated variety of faces. Nonetheless, even though the original meaning of empathy has transmuted from its 1890s formation in German psychologist Theodor Lipps’ Einfühlung, an engaged “in-feeling” towards an artistic form or object, “empathy” has come to convey a resonance, more emotional than cerebral, with another’s inner experience. In his “Letters to a Young Writer,” Irish author Colum McCann neatly cuts through all this: “The only true way to expand your world is to inhabit an otherness beyond ourselves. There is one simple word for this: empathy.” Empathy’s shared core lies in shattering and liberating oneself from the protective shell of “I, me, mine.”

We humans can shatter our protective shells. Can robots? Can chatbots break out of their algorithms? Do they have a “world” to “expand”? A chatbot may respond “as if” it is emotionally attuned to the human user, and the human, in turn, may respond “as if” there is no pretense. Still, as Sherry Turkle underscores in her book for our times, “Alone Together,” we humans are increasingly immersed in an “as if” world. And that “as if” world is much safer than the real human world we inhabit. Or is it?

Michael Brannigan is a philosopher, author, and speaker. Email: mcbrannigan64@gmail.com. Website: www.michaelcbrannigan.com.