When Algorithms Think You Want to Die

Opinion: Social media platforms not only host troubling content about suicide and self-harm, they end up recommending it to the people most vulnerable to it.

It’s troubling enough that British teenager Molly Russell sought out images of suicide and self-harm online before she took her own life in 2017. But it was later discovered that these images were also being delivered to her, recommended by her favorite social media platforms. Her Instagram feed was full of them. Even in the months after her death, Pinterest continued to send her automated emails, its algorithms recommending graphic images of self-harm, including a slashed thigh and a cartoon of a young girl hanging. Her father has accused Instagram and Pinterest of helping to kill his 14-year-old daughter by allowing these graphic images on their platforms and pushing them into Molly’s feed.

Molly’s father’s distressing discovery has fueled the argument that social media companies like Instagram and Pinterest are exacerbating a “mental health crisis” among young people. Social media may be a factor in the rise of a “suicide generation”: British teens who are committing suicide at twice the rate they were eight years ago. There have been calls for change in the wake of Molly Russell’s death. British health secretary Matt Hancock, for example, said social media companies need to “purge this content once and for all” and threatened to prosecute companies that fail to do so. In the face of this intense criticism, Instagram has banned “graphic self-harm images,” a step beyond its previous rule, which prohibited only “glorifying” self-injury and suicide.

But simple bans do not in themselves deal with a more pernicious problem: Social media platforms not only host this troubling content, they end up recommending it to the people most vulnerable to it. And recommendation is a different animal than mere availability. A growing academic literature bears this out: Whether it’s self-harm, misinformation, terrorist recruitment, or conspiracy theories, platforms do more than make this content easily found—in important ways they help amplify it.

Our research has explored how content that promotes eating disorders gets recommended to Instagram, Pinterest, and Tumblr users. Despite clear rules against any content that promotes self-harm, and despite blocking specific hashtags to make that content harder to find, social media platforms continue to serve it up algorithmically. Social media users receive recommendations—or, as Pinterest affectionately calls them, “things you might love”—intended to give them a personalized, supposedly more enjoyable experience. Search for home inspiration and soon the platform will populate your feed with pictures of paint samples and recommend amateur interior designers for you to follow. It also means that the more a user seeks out accounts promoting eating disorders or posting images of self-harm, the more the platform learns about those interests and sends them further down that rabbit hole.

As Molly’s father found, these recommendation systems don’t discriminate. Social media shows you what you “might love,” whether you like it or not—even if it violates the platform’s own community guidelines. If you’re someone who seeks out graphic images of self-harm, or even if you just follow users who candidly talk about their depression, these recommendation systems will fill your feeds with suggestions, reshaping how you experience your own mental health. Recommendations expose you to content you didn’t necessarily want to see, and more and more of it; they can consume your Instagram Explore page, your Pinterest homepage, your Tumblr dashboard. Social media accounts can quickly become funhouse mirrors, not just reflecting your mental health back to you, but amplifying and distorting it.

Of course, if their prohibitions were perfect, then recommendations would include only the most acceptable content social media has to offer. Clearly this isn’t the case. It’s not for lack of trying. Content moderation is astoundingly difficult. The lines between the acceptable and the objectionable are always murky; untrained reviewers have just seconds to distinguish between content that “promotes” self-harm and content that might aid in recovery; with thousands of new posts every day, day in and day out, something is sure to slip through. And self-harm is just one manifestation of mental illness. While Instagram might promise a clampdown on content depicting self-harm, other troubling content will remain.

And bans are not only imperfect, they can be harmful in and of themselves. Many users who struggle with self-harm or suicidal inclinations find immense emotional and practical support online. Social media can offer them a supportive community, valuable advice, and a sense of relief and acceptance. And these communities sometimes engage in the circulation of images that might shock others—as a testimony to someone’s pain, a badge of honor for having survived, a cry for help. Blanket prohibitions risk squeezing these communities out of existence.

The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what’s good for the individual may not be good for the public as a whole.

We don’t talk about recommendation systems enough, perhaps because they are now so commonplace, or perhaps because most users still don’t really understand how they work. We may mock Facebook for recommending Amazon products we’ve already bought. We may dislike how Spotify thinks we want more sad songs just because we played James Blunt that one time. But researchers are beginning to open up more serious conversations about how recommendations work. In Algorithms of Oppression, Safiya Umoja Noble criticizes Google for amplifying racial stereotypes in its search results. Twitter and Tear Gas author Zeynep Tufekci has highlighted YouTube’s tendency to amplify conspiracy theories and extremist content, arguing that its recommendation algorithm “promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes.” We need more conversations like this, especially about the impact of such recommendations on people going through mental health crises: conversations that acknowledge not just that self-harm images are available, but how distressing it can be to encounter the same content, or worse, over and over again.

In a Telegraph op-ed, Adam Mosseri, who now runs Instagram, admitted the platform is not where it “need[s] to be on the issues of suicide and self-harm.” A Pinterest spokesperson told us: “Our existing self-harm policy already does not allow for anything that promotes self-harm. However, we know a policy isn’t enough. What we do is more important than what we say. We just made a significant improvement to prevent our algorithms from recommending sensitive content in users’ home feeds. We will soon roll out more proactive ways to detect and remove it and provide more compassionate support to people looking for it. We know we can’t do this alone, which is why we are working with mental health experts to make our approach more effective.”

But real change here is going to be hard: Recommendation has become the primary means for social media to keep users on the site and clicking. It’s not likely to go away. And these algorithms are optimized to serve the individual wants of individual users; it is much more difficult to optimize them for the collective benefit. Social media companies also need more in-house knowledge about mental health, to better judge how to handle content that is both objectionable and valuable, and to better recognize the impact these platforms may have on users struggling with mental illness. This is a move Instagram has committed to, and it is imperative that they follow through.

We also urgently need to integrate knowledge about social media and their recommendation systems into mental health treatment. We need more recognition from health professionals that people aren’t just searching for content: their inquiries are being answered, and they’re being offered more through their feeds, through their email, and through the many ways social media companies try to please and retain their users.
