In suicide-prevention literature, “gatekeepers” are community members who may be able to offer help when someone expresses suicidal thoughts. It’s a loose designation, but it generally includes teachers, parents, coaches, and older co-workers—people with some form of authority and ability to intervene when they see anything troubling.
Could it also include Google? When users search certain key phrases related to suicide methods, Google’s results prominently feature the number for the National Suicide Prevention Lifeline. But the system isn’t foolproof. Google can’t edit webpages, only search results, which means people looking for information about how to kill themselves can still find it through linked pages or on forums, without ever using a search engine at all. At the same time, on the 2019 internet, “run me over” is more likely to be a macabre expression of fandom than a sincere cry for help, a nuance a machine might not understand. Google’s artificial intelligence is also far less effective at detecting suicidal ideation when people search in languages other than English.
Ultimately, search results are a useful, but very broad, area in which to apply prevention strategies. After all, anyone could be looking for anything for any reason. Google’s latest foray into algorithmic suicide prevention is more targeted, aimed at people who are already asking for help. In May, the tech giant granted $1.5 million to the Trevor Project, a California-based nonprofit that offers crisis counseling to LGBTQ teenagers via a phone line (TrevorLifeline), a texting service (TrevorText), and an instant-messaging platform (TrevorChat). The project’s leaders want to improve TrevorText and TrevorChat by using machine learning to automatically assess suicide risk. It’s all centered on the initial question that begins every session with a Trevor counselor: “What’s going on?”