Suicide remains a prominent presence in the news. It is the second-leading cause of death among people aged 15 to 29, according to the World Health Organization. In an uncharacteristic turn, Facebook has begun using its users' data not just for profit but to prevent suicides, alerting authorities when users exhibit suicidal behavior.

The program was created in response to a suicide that was streamed over Facebook Live last year and has been running for 18 months, according to The New York Times. In a Nov. 15 Facebook post, CEO Mark Zuckerberg wrote that the program has “helped first responders quickly reach around 3,500 people globally who needed help.” The program uses an algorithm that flags posts in which users express a desire to harm themselves. Those flagged posts are then reviewed by a human, who decides whether to call the police.
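Facebook has not published the details of its model, thresholds, or review process. Purely as an illustration of the flag-then-human-review workflow described above, the sketch below substitutes a toy keyword score for a trained classifier; the phrases, threshold, and review step are all assumptions, not Facebook's implementation.

```python
# Illustrative sketch only: a simple flag-then-human-review pipeline.
# The scoring function, phrases, and threshold are assumptions.

from dataclasses import dataclass

# Hypothetical phrases a real classifier might weight heavily.
RISK_PHRASES = ["want to die", "end it all", "hurt myself"]


@dataclass
class Post:
    user_id: str
    text: str


def risk_score(post: Post) -> float:
    """Toy stand-in for a trained classifier: fraction of risk phrases present."""
    text = post.text.lower()
    hits = sum(phrase in text for phrase in RISK_PHRASES)
    return hits / len(RISK_PHRASES)


def flag_posts(posts: list[Post], threshold: float = 0.3) -> list[Post]:
    """Posts scoring at or above the threshold go to a human review queue."""
    return [p for p in posts if risk_score(p) >= threshold]


def human_review(post: Post) -> None:
    """Placeholder: a person, not the code, decides whether to contact responders."""
    print(f"REVIEW NEEDED: user={post.user_id!r} text={post.text!r}")


if __name__ == "__main__":
    queue = flag_posts([
        Post("u1", "Great game last night!"),
        Post("u2", "I just want to end it all."),
    ])
    for post in queue:
        human_review(post)
```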

Art Caplan, the director of the Division of Medical Ethics at NYU Langone Medical Center, told Boston Public Radio Wednesday that he finds the lack of involvement by any mental health professional to be problematic.

“The intention is nice, but … it isn’t clear that they have mental health professionals doing the screening or somebody who knows what they are doing,” Caplan said.

Caplan said that for the program to become an effective suicide prevention tool, Facebook needs to be more transparent about how it decides which posts get flagged and who exactly reviews them.