Warning From WHO Over AI Tools In Medicine

On Tuesday, the World Health Organization (WHO) warned against the hasty adoption of artificial intelligence (AI) in public healthcare, citing the risk that the data AI systems rely on for decision-making may be biased, distorted, or otherwise misused.

In a statement, the WHO expressed concern that biased or incomplete data could spread misinformation.

The agency was optimistic about AI's future but cautious about its potential uses in healthcare information access, decision support, and diagnostics.

The United Nations health agency has called it "imperative" to assess the risks associated with large language model (LLM) tools like ChatGPT in order to safeguard and enhance public health.

The warning is timely: AI applications are surging in popularity, drawing greater attention to a technology that could significantly reshape economies and societies.

Specifically, the WHO warned that the lack of regulation of AI-powered large language model tools threatens human health.

The WHO is concerned that LLMs are not being subjected to the same level of scrutiny as other new technologies, even though they use artificial intelligence to analyze data, generate content, and answer questions (often erroneously). The UN body has pushed for extensive risk assessments and safeguards to be put in place before LLMs are used in healthcare.

The agency expressed concern that the potential long-term advantages and applications of such technologies could be undermined or delayed if unproven systems were widely adopted too soon, leading to errors by healthcare staff, harm to patients, and a loss of trust in AI.

While the answers generated by LLMs may seem credible and authoritative to the end user, they may be utterly wrong or contain significant inaccuracies, a danger that is especially acute for health-related questions.

The WHO voiced its concerns after a group of international doctors published a warning in the peer-reviewed journal BMJ Global Health, calling for a pause on AI development pending thorough regulation.

The potential dangers of AI in healthcare and other sectors appear to be well-founded. The Biden administration has been urged to investigate after Medicare Advantage insurers used unregulated artificial intelligence tools to determine when to end payments for patients' treatments, resulting in the premature termination of coverage for vulnerable seniors.