Can Big Data Help Us Fight Rising Suicide Rates?
Innovative efforts struggle to save lives and mitigate the sorrow behind suicide.
Image by Miskatonic via Wikimedia Commons
This Thursday, September 10, marks the 13th annual World Suicide Prevention Day. At 8 p.m., thousands upon thousands of people across the globe will light candles in their windows to remember those they’ve lost to suicide, honor survivors, or simply show their support for suicide prevention. That simple act is, historically at least, an extraordinary thing.
Humans have contemplated the ethics and philosophy of suicide for millennia. But until recently (in the Western world, at least), the notion of bringing self-inflicted death out of the shadows, discussing it in the open, and developing sound, evidence-based theories on how to tackle it just wasn’t on our collective radar. One of the first real public discussions of suicide and its social implications in the West was French sociologist Émile Durkheim’s Suicide, published in 1897. The first suicide prevention organization in America, the Save-A-Life League, only came along in 1906. And it wasn’t until 1956 that America got its first suicide hotline, slowly dragging suicide from the shadows and into public discourse over some long, hard decades.
The fact that we can even have a commemoration like World Suicide Prevention Day is a sign of how far we’ve come in the past century. We’ve learned and widely disseminated knowledge of the signs of suicidal risk. We’ve come to understand the roles that isolation, hopelessness, and self-hate play in a person’s decision to take his or her own life. We can appreciate the value of talking to people, offering sympathy, empathy, and connection, and we’ve developed indispensable guides to help individuals navigate the choppy waters of the difficult conversations around suicide. Between the resources at medical facilities, countless online support forums, national hotlines, and increasingly dense and enlightened personal support networks, we have a massive arsenal at our disposal to help those dealing with suicidal thoughts find a way out of their darkness. Compared to the world of 1897, 1906, or even 1956, this is an amazing configuration of knowledge, resources, and sympathy—something to be truly applauded. But unfortunately, it’s not enough.
For many years, it seemed like the development of prevention resources and awareness in America was making a dent in suicide rates. From 1990 to 2000, suicides dropped from 12.5 per 100,000 people to 10.4. But since the millennium, suicide rates have risen again, to 12.1 per 100,000 people, putting the rate even higher than it was 50 years ago, when it sat at 11 per 100,000. In 2013, 41,149 Americans committed suicide (as did 1 million more people across the globe), making suicide the tenth most common cause of death in the nation—almost triple our national homicide rate. Among Americans aged 15 to 24, suicide remains the second most common cause of death. And on top of the 100-plus Americans killing themselves each day, up to 8 million others seriously consider ending it all every year.
It’s not entirely clear why suicide is such a stubborn social phenomenon. Maybe the resurgence and continued strength of suicide has something to do with the rise of cyberbullying since the millennium, or other new social pressures delivered through new avenues we have not yet addressed in our suicide prevention programs. Perhaps it just speaks to something deeper that we’re missing about the realities of suicide and how to stem it. But whatever the cause, the response has been remarkably steady: For the past few years, most national programs and the United States government have focused their efforts on bolstering existing suicide prevention programs, hoping that if they just make the population a little more aware, the resources a little more accessible, then we’ll manage to make an impact.
This full-steam-ahead approach isn’t illogical. No matter how advanced our suicide prevention programs seem to get in America, it’s astounding to see just how many misconceptions about suicide continue to run deep in our national psyche. Expanding existing programs and conducting further outreach can dispel the notion that talking about suicide puts the idea into the head of a hopeless person and help convince the public that suicidal folks often aren’t dead-set on dying and can be helped. And raising awareness can help us all feel more comfortable broaching the subject amongst ourselves, allowing us to follow up on difficult discussions and make sure that someone in need of help is getting it.
But there are also others promoting resource development in new frontiers of suicide prevention:
One line of thought holds that we stand to gain a lot by reducing access to the fastest-acting and most lethal means of suicide. Citing a slew of studies, its proponents show that methods requiring more premeditation have lower fatality rates, and that most people who survive an attempt don’t try again, having rethought their decision or received the help they need. Those using deadlier, swifter means, like jumping from bridges or using firearms, tend to show fewer warning signs beforehand and act on impulse, leaving no window between conception and death for a support network to kick in or for them to consider their options. Under this reasoning, tightening gun control or building bridge barriers should decrease suicide rates and increase the use of existing resources simply by slowing down the process, thereby opening up avenues for intervention.
Chart shows fatality rate by suicide method. Image by James Heilman, MD via Wikimedia Commons
Another line of thought holds that we need to focus on lowering conceptual barriers to existing suicide prevention resources. That’s the logic behind Facebook’s new suicide prevention protocols, introduced toward the end of February. The site’s new system allows those who suspect a friend may be suicidal, based on social media posts or real-world behavior, to flag troubling posts, rally support groups of friends, chat with helpline workers, and otherwise marshal resources without a direct, uncomfortable confrontation. Users deemed at risk by their friends then begin to receive pop-ups offering advice and resources when they log on, lowering the barriers for anyone who might want to explore that information but is too embarrassed or confused to actively seek it out.
Some recent studies even suggest that we might want to pursue a new pharmacological solution to suicide’s persistence. A recent meta-study on the influence of micro-variations in naturally occurring lithium in drinking water shows that in 9 out of 11 global studies, minuscule increases in the element’s concentration were associated with reductions in suicide of up to 40 percent in a given country. These findings have led some scientists to propose that we label lithium a vital trace element in our daily nutrition and work to consume enough of it just like we do copper, manganese, or zinc.
None of these is anywhere near a perfect or complete solution: Unfortunately, in the few cases where people have tried to systematically reduce access to swift suicide means (as in Sweden, where reducing access to rapid-action suicide tools was part of a national nine-point reduction plan in 2008), there’s not much evidence of a decline in suicide rates. Efforts to reduce barriers, like Facebook’s new protocols, come with the risk of abuse, harassment, and real-world consequences for false flags, and there is no guarantee that your friends will catch every little micro-sign of suicidal thoughts and report them. And as for trying to convince the public to consume more lithium, or spiking our drinking water with even microdoses of a substance used to treat bipolar disorder, just consider the hue and cry of the conspiracy theories that already surround treatments like water fluoridation.
Yet there is one frontier in suicide prevention that seems especially promising, though it may be a bit removed from the problem’s human element: big data prediction and intervention targeting.
We know that some populations are more likely than others to commit suicide. Men in the United States account for 79 percent of all suicides. People in their 20s are at higher risk than others. And whites and Native Americans tend to have higher suicide rates than other ethnicities. Yet we lack the means to combine such broad trends with subtler factors into actionable profiles of the communities where we should focus our efforts. We’re stuck trying to expand a suicide prevention dragnet, rather than getting individuals at risk the precise information they need (even if they don’t show major warning signs to their friends and family).
That’s a big part of why last year, groups like the National Action Alliance for Suicide Prevention’s Research Prioritization Task Force listed better surveillance, data collection, and research on existing data as priorities for work in the field over the next decade. It’s also why multiple organizations are now developing algorithms to sort through diverse datasets, trying to identify behaviors, social media posting trends, language, lifestyle changes, or any other proxy that can help us predict suicidal tendencies. By doing this, the theory goes, we can target and deliver exactly the right information.
A counselor speaks to a soldier at a suicide prevention information booth at U.S. Army Garrison Humphreys, South Korea. Image by Pfc. Ma, Jae-Sang via Flickr
One of the greatest proponents of this data-heavy approach to suicide prevention is the United States Army, which suffers a suicide rate many times higher than the general population’s. In 2012, the Army lost more soldiers to suicide than to combat in Afghanistan. Yet with millions of soldiers stationed around the globe and limited suicide prevention resources, it’s been difficult to simply rely on expanding the dragnet. Instead, last December the Army announced that it had developed an algorithm that distills a soldier’s personal information into a set of 400 characteristics, which combine to indicate whether an individual is likely in need of intervention. The analysis isn’t perfect yet, but it has identified a cluster of characteristics marking 5 percent of military personnel who accounted for 52 percent of suicides, a sign that the Army is on the right track to better targeting and allocating prevention resources.
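The Army hasn’t published its model, but the basic shape of such a system, scoring individuals on a set of binary risk characteristics and flagging the highest-scoring fraction for closer attention, can be sketched in a few lines of Python. Everything here, from the weights to the population, is synthetic and purely illustrative:

```python
# Illustrative sketch only (not the Army's actual model): score each person
# as a weighted sum of binary characteristics, then flag the top 5 percent.
import random

random.seed(0)

N_CHARS = 10  # the real system reportedly combines ~400 characteristics
# Hypothetical "learned" weights; a real system would fit these to outcome data.
WEIGHTS = [random.uniform(0, 1) for _ in range(N_CHARS)]

def risk_score(characteristics):
    """Weighted sum over binary characteristics (1 = present, 0 = absent)."""
    return sum(w * c for w, c in zip(WEIGHTS, characteristics))

def flag_top_fraction(population, fraction=0.05):
    """Return the indices of the highest-scoring `fraction` of the population."""
    ranked = sorted(range(len(population)),
                    key=lambda i: risk_score(population[i]),
                    reverse=True)
    cutoff = max(1, int(len(population) * fraction))
    return set(ranked[:cutoff])

# Synthetic population of 1,000 individuals with random characteristics.
population = [[random.randint(0, 1) for _ in range(N_CHARS)]
              for _ in range(1000)]
flagged = flag_top_fraction(population)
print(len(flagged))  # 50 individuals, i.e. 5 percent of 1,000
```

The hard part, of course, is not the scoring arithmetic but learning weights that actually concentrate risk in that small flagged group, which is what the Army’s 5-percent/52-percent result suggests it has begun to do.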
Yet perhaps the greatest distillation of this data-driven approach (combined with the expansive, barrier-reducing impulse of mainstream efforts) is the Crisis Text Line. Created in 2013 by organizers from DoSomething.org, the text line allows those too scared, embarrassed, or uncomfortable to vocalize their problems to friends or over a hotline to simply text a short code (741741) and type out their problems in a message. As of 2015, machine learning allows the Crisis Text Line to search for keywords, based on over 8 million previous texts and data gathered from hundreds of suicide prevention workers, to identify who’s at serious risk and assign counselors to respond. But more than that, the data in texts can trigger time- and vocabulary-based flags, matching counselors with expertise in certain areas to specific texters, or bringing up precisely tailored resources. For example, the system knows that self-harm peaks at 4 a.m. and that people typing “Mormon” are usually dealing with issues related to LGBTQ identity, discrimination, and isolation. Low-impact and low-cost, with high potential for delivering the best information possible to those in need, it’s one of the cleverest young programs out there extending the suicide prevention gains made over the last century.
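Crisis Text Line hasn’t published its triage logic, but the keyword-routing idea it describes can be sketched simply: match an incoming message against learned keyword lists to set its priority and route it to a counselor with relevant expertise. The keyword lists below are invented for illustration (only the “Mormon” association is reported above):

```python
# Illustrative sketch only (not Crisis Text Line's actual system): route an
# incoming text by matching words against keyword lists learned from past
# conversations. All keywords and specialty names here are hypothetical.

HIGH_RISK_KEYWORDS = {"pills", "bridge", "tonight"}  # invented urgency cues
SPECIALTY_KEYWORDS = {
    "mormon": "lgbtq_support",   # association reported in the article
    "deployment": "military",    # invented for illustration
}

def triage(message):
    """Return (priority, specialty) for an incoming text message."""
    words = set(message.lower().split())
    priority = "high" if words & HIGH_RISK_KEYWORDS else "normal"
    specialty = next((SPECIALTY_KEYWORDS[w] for w in words
                      if w in SPECIALTY_KEYWORDS), "general")
    return priority, specialty

print(triage("I bought pills tonight"))      # ('high', 'general')
print(triage("my mormon family found out"))  # ('normal', 'lgbtq_support')
```

A production system would weigh many signals at once, such as time of day, phrasing, and conversation history, rather than exact word matches, but the routing principle is the same.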
It’ll be a few years before we can understand the impact of data analysis and targeting on suicide prevention efforts, especially relative to general attempts to expand existing programs. And given the limited success of a half-century of serious gains in understanding and resource provision, we’d be wise not to get our hopes up too much. But it’s not unreasonable to suspect that a combination of diversifying means of access, lowering barriers of communication, and better identifying those at risk could help us bring programs to populations that have not yet received them (or that we could not support quickly enough before). At the very least, crunching existing data may help us to discover why suicide rates have increased in recent years and to understand the mechanisms of this widespread social issue. We have solid, logical reason to support the development of programs like the Army’s algorithms and the Crisis Text Line, and to push for further similar initiatives. But really we have reason to support any kind of suicide prevention innovation, even if it feels less robust or promising than the recent data-driven efforts. If you've ever witnessed the pain that those moving towards suicide feel, or the wide-reaching fallout after someone takes his or her life, you'll understand the visceral, human need to let a thousand flowers bloom, desperately hoping that some of them take root. Hopefully, if data mining and targeting works well, that'll only inspire further innovation, slowly putting a greater and greater dent in the phenomenon of suicide.