In a year when the COVID-19 pandemic has wreaked so much havoc on the nation’s collective mental health, research has shown, unsurprisingly, that states like sadness, anxiety, depression, and stress are dramatically more prevalent now than they were this time last year. While those lamentable outcomes were measured through traditional surveys, a quiet revolution is under way in how mental health researchers and psychologists analyze the sentiment floating around our social media feeds and the internet more broadly.
By leveraging data from platforms like Twitter and Facebook, a growing number of doctors and academics (myself included) are using advanced textual analysis to determine what our choice of words reveals about us in real time. For instance, scholars have already demonstrated that it is possible to measure depression in an individual’s Facebook posts by detecting language predictors reflecting sadness, a preoccupation with the self, and expressions of loneliness and hostility.
As I detail in a new policy brief on “AI-Enabled Depression Prediction Using Social Media,” published by Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), the detection of mental illness using that data can now match the accuracy of traditional screening surveys. We collected social media data through a custom Facebook app in an urban hospital system and trained an AI model to use social media language to predict future diagnoses in the medical record. We were the first to test such a digital screening method in a real-life medical context.
The use of social media data for mental health screening poses significant ethical, legal, and regulatory questions, and should not be used without careful safeguards in place. But the social benefit of early screening is also important. In the United States each year somewhere between 7 percent and 26 percent of the population experiences depression, but less than half of those people receive treatment. These high rates of underdiagnosis and undertreatment suggest that unobtrusive new screening methods like AI-enabled prediction could radically improve how we identify and treat patients with depression.
Incorporating mental health into digital epidemiology
The field of digital epidemiology dates back to 2008, when researchers sought to track flu trends in real time using keywords in search engine queries. By 2015, an important proof-of-concept linked the language people used on social media with their health outcomes. Drawing on over 100 million tweets from more than 1,300 counties across the U.S., I was part of a team that evaluated whether negative tweets expressing anger or hostility could reliably predict rates of death from heart disease in a given location. Not only could they do so, but we found that our models reliably exceeded the accuracy of models that used only traditional government-reported risk factors like income, smoking, and hypertension.
These kinds of digital epidemiology owe their success to advances in modern natural language processing, which lets us extract meaningful patterns from everyday language filled with slang, “likes”, and emoticons. Statistical pattern-recognition algorithms cluster, count, and score words and phrases, in effect learning psychological associations from scratch. This allows researchers to draw connections between the health of patients and populations and the ways they choose to express themselves.
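To make the counting-and-scoring idea concrete, here is a minimal sketch. The tokenizer and the tiny “depression” lexicon are illustrative inventions, not the lexica or models used in our studies; real systems learn word weights from data rather than hand-coding them.

```python
from collections import Counter
import re

def word_frequencies(posts):
    """Relative frequency of each token across a user's posts."""
    tokens = []
    for post in posts:
        # Lowercase the post and keep word-like tokens and simple emoticons.
        tokens += re.findall(r"[a-z']+|[:;][-']?[()dp]", post.lower())
    counts = Counter(tokens)
    total = sum(counts.values()) or 1  # guard against empty input
    return {word: count / total for word, count in counts.items()}

# A tiny, invented lexicon weighting self-focus and sadness words.
DEPRESSION_LEXICON = {"i": 0.5, "me": 0.5, "alone": 1.0, "sad": 1.0, "tired": 0.8}

def lexicon_score(posts):
    """Weighted sum of lexicon-word frequencies in a user's posts."""
    freqs = word_frequencies(posts)
    return sum(weight * freqs.get(word, 0.0)
               for word, weight in DEPRESSION_LEXICON.items())
```

A user whose posts dwell on sadness and the self would score higher than one whose posts are about parties and hikes; actual research models score thousands of words and phrases learned from labeled data.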
As Facebook, Twitter, and similar platforms have sustained tremendous growth over the past decade, the amount of data available for analysis has also expanded. This offers psychologists a vast new window into the mental health of social media users. In 2013, for instance, I was part of a team that found, in a published study of 75,000 Facebook users who had taken a personality test, that some words were highly predictive of a psychological trait (extraverts talk about “parties,” introverts about “books”). More recent work has replicated this approach on Twitter, making it possible to derive the personality composition of all counties in the United States.
Realizing the promise in depression diagnostics
More recently, our analyses of social media data have focused on mental health and on helping to treat individual patients. By examining Facebook language data from a sample of consenting patients, we built a method to predict, based on the textual content of posts, the first documentation of a diagnosis of depression in the electronic medical record.
The resulting model’s performance approaches the customary threshold for acceptable diagnostic power in traditional assessments. We further sought to investigate how far in advance this kind of analysis might predict future depression, since identifying patients at an earlier stage of mental illness creates better treatment opportunities. Our analysis suggests that social media-based prediction of future depression status may be possible as early as three months before the first documentation of depression in the medical record. We believe that combining different screening methods (AI, screening surveys, and clinicians) would improve overall screening significantly. With proper safeguards in place, this raises the prospect of doctors one day analyzing social media posts for red flags and following up with patients accordingly. In light of how widely underdiagnosed depression is, AI-enabled diagnostics could take a significant step toward increasing access to mental health care.
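As a hypothetical illustration of how such screening signals might be combined, consider a simple rule that flags a patient for clinician follow-up when either signal crosses a threshold. The AI threshold below is an invented operating point; a PHQ-9 score of 10 or above is a commonly used cutoff for moderate depression, but neither value is clinically validated for this combined use.

```python
def flag_for_followup(ai_probability, phq9_score,
                      ai_threshold=0.7, phq9_threshold=10):
    """Flag a patient for clinician follow-up if either signal is elevated.

    ai_probability: predicted probability of depression from a language model.
    phq9_score: result of the PHQ-9 screening survey (0-27).
    Thresholds are illustrative, not clinically validated.
    """
    reasons = []
    if ai_probability >= ai_threshold:
        reasons.append("social-media language model")
    if phq9_score >= phq9_threshold:
        reasons.append("PHQ-9 screening survey")
    # An empty list means no flag; a clinician reviews any flagged patient.
    return reasons
```

The essential design point is that the AI signal never diagnoses on its own: it only routes patients toward the existing survey-and-clinician pipeline.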
Regulatory guidance needed
Although unobtrusive monitoring of digital traces shows tremendous promise for the early diagnosis of mental illness, it also raises important concerns. As researchers have noted, the misuse of health data is a major concern, with possible harms that include involuntary confinement, inappropriate data-sharing, and other risks. How effective digital epidemiology proves to be will ultimately depend on the guidelines and consent requirements put in place for carrying out this kind of research, and even more on those governing the deployment of predictive models in clinical contexts.
In our research, we always obtain permission to analyze participants’ social media feeds and follow strict privacy guidelines. But few social media users realize that giving access to their statuses (or even their likes) can supply a fine-grained personality profile to a team of researchers just as easily as to a corporation with a profit motive. Our findings thus raise important questions related to patient privacy, informed consent, data protection, and data ownership. Analyses of how people use language on social media are based solely on statistical patterns, yet they can be so revealing that political campaigns, consumer marketers, insurance actuaries, and even intelligence operatives are as interested in their application as scientists are. Imagine, for example, being denied a life insurance policy based on your publicly available social media posts and the personality profile an algorithm derived from them.
For our AI-enabled prediction method to become scalable as a complement to existing screening and monitoring procedures, policymakers and regulators will need to ensure that patient privacy and confidentiality are protected, which will in turn build greater trust in these systems. To that end, clearer guidelines are needed about who can access this data and for what purposes.
Further integrating this methodology with scalable treatments will require developers and policymakers to look at how best to combine the digital text patients produce with other complementary data feeds, such as patient location, physical activity and sleep patterns, and perhaps even facial-expression and voice recognition. But they will also need to confront lingering ethical questions about how to use these predictions responsibly. Multi-step testing and algorithmically fair accuracy across races and genders will be a necessary part of the equation. The more carefully we realize the potential of digital text analytics to improve our health and well-being, the more we can craft our future in ways that are conscious, ethical, and even lifesaving.
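The fairness requirement can be audited in a straightforward way. This sketch, with an invented record format, computes screening accuracy separately for each demographic group so that divergent error rates become visible; a real audit would also examine false-positive and false-negative rates, not just accuracy.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute screening accuracy separately for each demographic group.

    `records` is an iterable of (group, predicted, actual) tuples; this
    format is invented for illustration. A fairness audit might require
    that per-group accuracies not diverge beyond some tolerance.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}
```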
Johannes C. Eichstaedt is a computational social scientist who is jointly appointed as an Assistant Professor at Stanford University’s Department of Psychology and as the Shriram Faculty Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is a co-founder of the World Well-Being Project.
Facebook and Twitter provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.