Key points from article:
AI-generated large language model tools (LLMs) are expanding rapidly and are increasingly being used for health-related purposes.
WHO is concerned that LLMs are not being used with the caution normally applied to new technologies, despite potential risks including biased training data, misleading or inaccurate responses, use of data without consent, and misuse to spread disinformation.
WHO recommends that policy-makers ensure patient safety and protection while technology firms work to commercialize LLMs.
WHO proposes that these concerns be addressed, and clear evidence of benefit be measured, before widespread use.
WHO reiterates the importance of applying ethical principles and appropriate governance when designing, developing, and deploying AI for health.
Six core principles identified by WHO: protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote AI that is responsive and sustainable.