Key points from the article:
A new study in PLOS Medicine has found that artificial intelligence (AI) is not effective at predicting suicide or self-harm, despite rising hopes that modern machine learning could offer a breakthrough in prevention. The research, led by Matthew Spittal, PhD, from the University of Melbourne, reviewed 53 studies and concluded that AI performs no better than traditional risk assessment tools, which are already discouraged by clinical guidelines due to poor accuracy.
The analysis revealed that the machine learning algorithms misclassified more than half of the people who later died by suicide or were treated for self-harm as being at low risk. While the models showed reasonable global accuracy measures, their real-world clinical usefulness was limited: they tended to correctly identify those unlikely to return to hospital but failed to reliably flag those who would go on to repeat self-harm or die by suicide. Sensitivity ranged from 45% to 82% across studies, meaning many high-risk individuals slipped through unnoticed.
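This gap between good overall accuracy and poor case-finding is a base-rate effect. The review's actual models and data are not reproduced here, but a minimal sketch with invented numbers shows how it arises: when the outcome is rare, the many correct low-risk calls dominate the accuracy figure even if most true cases are missed and most high-risk flags are false alarms.

```python
# Hypothetical screening cohort of 10,000 patients; all numbers are
# invented for illustration and do not come from the PLOS Medicine review.
tp, fn = 45, 55      # 100 patients later self-harmed: 45 flagged high risk, 55 missed
tn, fp = 9405, 495   # 9,900 did not: most were correctly rated low risk

sensitivity = tp / (tp + fn)                 # share of true cases flagged high risk
specificity = tn / (tn + fp)                 # share of non-cases rated low risk
accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall ("global") accuracy
ppv = tp / (tp + fp)                         # chance a high-risk flag is correct

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"accuracy {accuracy:.1%}, PPV {ppv:.1%}")
# sensitivity 45.0%, specificity 95.0%, accuracy 94.5%, PPV 8.3%
```

Because only 1% of this hypothetical cohort experience the outcome, the 94.5% accuracy is driven almost entirely by correct low-risk calls, mirroring the review's finding that the models looked accurate overall yet missed many of the people who later self-harmed or died by suicide.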
The findings highlight the limits of relying on predictive models in such a complex, multifactorial area of mental health. Over the past 50 years, many risk assessment scales have been developed, but their inaccuracy has led global guidelines to caution against using them to guide treatment. This review shows that AI, despite drawing on large health data sets, does not yet improve on those scales.
Instead, the authors advocate for a shift in focus. They recommend that hospital care for self-harm patients be built on needs-based assessment, treatment of modifiable risk factors, and proven aftercare interventions. AI may still hold promise, they suggest, but rather than predicting suicide, future research could explore how algorithms might help identify and address individual risk factors to support more personalized and effective care.