Are Facebook’s Suicide Prevention Tactics Misguided?

Apr 8, 2019, by Sara Gorman, Ph.D., MPH, and Jack M. Gorman, MD

What to think about Facebook’s foray into suicide prevention.

Alarming statistics about rising rates of suicide, including among teens and young adults, have rattled the suicide prevention field and the general public alike. Amid this unsettling trend, many people took great interest when it became apparent that Facebook was trying its hand at suicide prevention.

Facebook has been involved in suicide prevention for a number of years, allowing users to flag concerning posts, which are then reviewed by trained members of the company’s Community Operations team, who can connect the person posting with support resources. In 2018, the company started using machine learning to actively scan posts for concerning messages potentially related to the desire to die by suicide. These posts are then sent for review by a human team that must decide how to respond.

Facebook has openly discussed the difficulties they ran into in trying to hone this machine learning technique in order to avoid too many false positives. Over time and with many examples, they feel they have gotten the computer to a place where it sends only a subset of truly concerning posts to the human review team. The algorithm also looks at comments left on the post to see whether they indicate concern on the part of others. If the human review team identifies a truly concerning post, they will automatically show support resources to the original poster. In cases in which “imminent harm” is determined, Facebook may contact local authorities.
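Facebook has not published its model, features, or thresholds, so the details are unknown. Purely as an illustration of the kind of two-stage pipeline the company describes, here is a minimal sketch in which every keyword, score, and cutoff is hypothetical, with a crude keyword signal standing in for the actual machine learning classifier:

```python
# Illustrative sketch only: Facebook has not disclosed its algorithm, so
# all terms, weights, and thresholds below are invented for explanation.

def risk_score(post_text, comments):
    """Toy stand-in for the classifier: combines a naive keyword signal
    from the post itself with signs of concern in others' comments."""
    CONCERN_TERMS = {"goodbye", "hopeless", "end it", "can't go on"}
    COMMENT_TERMS = {"are you ok", "please call", "worried about you"}
    text = post_text.lower()
    score = sum(0.3 for term in CONCERN_TERMS if term in text)
    score += sum(0.2 for c in comments
                 if any(t in c.lower() for t in COMMENT_TERMS))
    return min(score, 1.0)

def triage(post_text, comments, review_cutoff=0.5, imminent_cutoff=0.9):
    """Route a post the way the article describes: most posts pass
    through untouched, a subset goes to human review, and the
    highest-risk cases are escalated."""
    score = risk_score(post_text, comments)
    if score >= imminent_cutoff:
        return "escalate"       # the "imminent harm" path
    if score >= review_cutoff:
        return "human_review"   # reviewers may show support resources
    return "no_action"
```

Raising `review_cutoff` is the kind of tuning Facebook alludes to when it describes reducing false positives: fewer benign posts reach the human team, at the risk of missing genuine cries for help.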


On a basic, intuitive level, this sounds like a great development. Especially given all the troubling news about Facebook lately, it was heartening to see that the company might be using its enormous influence to do something good. And indeed, this tactic has met with some praise. Some have pointed out that traditional methods of suicide prevention have not always worked well, and thus we should be open to this new form of “experimentation” using advanced technology. We know, for example, that asking about suicidal thoughts, as important as that may be, is not an effective method for predicting who will actually kill themselves. In addition, people spend a lot of time on social media and do express suicidal intent there, so it is only natural to try to use this medium to prevent suicide as well. Others have noted that AI-based tools for detecting people in trouble and encouraging help-seeking behaviors can be quite accurate and effective, so long as apparent cries for help are understood in proper context. Not to mention that threat detection and response through Facebook’s AI technology can be very rapid, allowing for timely responses to people who might never have asked for help in a more traditional way, such as calling a suicide hotline or speaking to a friend or family member.

Nonetheless, there are some very serious concerns about Facebook’s suicide prevention AI technology. The main issues here are both scientific and ethical. From a scientific standpoint, we need to be able to rigorously evaluate whether Facebook’s intervention is effective. That is the only way we can truly understand whether it is worth deploying and ensure that it is not doing any kind of harm, as even the best of intentions can sometimes lead to unintended consequences. If the company is going to claim to have developed a technology that prevents suicide, then that technology and its effects need to be carefully studied by trained researchers. But Facebook has thus far refused to share any information about how its technology works. Many academics and researchers have commented that this lack of transparency is simply unacceptable. Facebook has an obligation in this case to contribute to the field by sharing more information about how its algorithm works. So far, there is limited research evidence that online and mobile phone applications are associated with reductions in suicidal ideation, but much more research needs to be done before anyone can be confident that this is a safe and effective prevention strategy.

Others have also argued that in this case Facebook should be subject to the rules that apply in clinical research more generally, such as required review by outside experts and informed consent on the part of users. In a recent paper in Annals of Internal Medicine, John Torous and Ian Barnett argue that Facebook users are essentially being experimented on and that Facebook should be required to obtain informed consent to do so. This matters not only from a general ethical point of view but also from a practical one: if people find out that Facebook is screening their posts without their permission, they may trust the platform less, and people who might have expressed suicidal thoughts there and been flagged for help might end up not doing so, costing us the opportunity to intervene at all.

On the other hand, it is unclear how the process of getting informed consent would work and also not evident to everyone that what Facebook is doing truly qualifies as experimentation. If the Facebook approach to identifying potentially suicidal people actually saves lives, then it might be argued that tying it up with cumbersome research protection procedures would only serve to blunt its effectiveness.

Balancing these issues of trust, transparency, and intervention efficacy is certainly a delicate matter. But this debate over Facebook’s foray into suicide prevention raises larger issues about just how much we still don’t understand about suicide, suicidal behavior, and prevention. This is partly because it is very difficult to study what is, in general, a rare event (even as rates increase). It is hard to tell what actually prevents suicide when most of our studies have to settle for proxies, such as hospitalizations and suicide attempts, due to the low base rate of completed suicides. In addition, it can be very challenging for clinicians to predict which patients will make serious attempts to end their lives. Because of all this, it is imperative that we not entirely dismiss interventions such as Facebook’s; we are in dire need of new approaches and fresh ideas in this field. At the same time, we must always follow the most rigorous scientific standards, and our ethical obligations to those we are trying to help must always remain paramount.

Source: Are Facebook’s Suicide Prevention Tactics Misguided? | Psychology Today

