Health Action International (HAI)
13 Jan 2025
Artificial intelligence (AI) holds great promise for transforming mental healthcare. From personalised treatment plans to early detection of mental health conditions, AI could make mental health services more accessible and effective. AI systems are already being developed for a range of applications, including diagnostic assessment, therapeutic support (such as chatbots), mental health monitoring, and educational tools aimed at promoting mental health literacy. These applications span both clinical and non-clinical settings, addressing concerns ranging from depressive and anxiety disorders to non-medical issues such as loneliness. Yet despite this potential, AI introduces new risks that extend beyond individual patients to broader societal concerns, raising questions about equity, safety, and ethics.
This policy brief outlines the potential risks of AI in mental healthcare at three levels. At the individual level, concerns include misdiagnosis, inappropriate treatment recommendations, and privacy breaches. At the collective level, issues such as biased datasets, accessibility barriers, and the marginalisation of vulnerable groups come to the forefront. At the societal level, challenges emerge around over-surveillance, erosion of trust in healthcare, and the commodification of mental health services, with broader implications for equity and justice.