WHO Stresses Transparency, Regulation in AI Health Technologies
(Photo: ANGELA WEISS/AFP via Getty Images)
Director General of the World Health Organization, Tedros Adhanom Ghebreyesus, with UN Deputy Secretary-General Amina Mohammed (R), speaks at a meeting on universal health coverage on the sidelines of the UN General Assembly at UN headquarters in New York City on September 21, 2023.

The World Health Organization (WHO) is calling for improved regulation of artificial intelligence (AI) technology in healthcare, emphasizing the importance of developing safe, secure AI systems and encouraging dialogue among manufacturers, regulators, healthcare providers, and patients.

The UN health agency said in a recent publication that AI could improve clinical trials, diagnosis, and the knowledge and skills of healthcare personnel. However, because using AI to handle health data raises concerns about access to sensitive personal information, strong legal and regulatory frameworks are needed to protect privacy, security, and data integrity.

Artificial Intelligence: Promising But Also Challenging

"Artificial intelligence holds great promise for health but also comes with serious challenges, including unethical data collection, cybersecurity threats, and the amplification of biases or misinformation," WHO Director-General Tedros Adhanom Ghebreyesus stated, as quoted by UN News.

To safely manage the rapid rise of AI in healthcare, the WHO emphasizes transparency, documentation, risk management, and external validation of data. The organization's new guidance is intended to help countries regulate AI in healthcare effectively, realizing the technology's full potential while minimizing its risks, Ghebreyesus noted.

The publication also addresses complex data protection laws such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the US, focusing on questions of jurisdiction and consent to protect privacy and data.


The WHO also noted that the complexity of AI systems lies not only in their code but in the data they are trained on. Because AI models can struggle to accurately represent diverse populations, they risk producing biased or erroneous results; effective regulation is therefore essential to keep AI from amplifying the biases present in its training data, according to a report from EWN.

TechTimes recently reported that AI outperformed primary care physicians in recommending depression treatment. The study, published in the open-access journal Family Medicine and Community Health, found that the AI chatbot ChatGPT's depression treatment suggestions met accepted guidelines and showed no gender or social class bias, suggesting the tool could improve decision-making in primary healthcare. The research compared ChatGPT's recommendations with those of 1,249 French primary care providers.

Healthcare AI: How US Patients View It

Meanwhile, US patients' comfort with the use of AI in their healthcare is mixed, according to a poll conducted by Propeller Insights on behalf of Carta Healthcare.

According to TechTarget, the survey of 1,027 adult patients found that 49% were comfortable with their healthcare practitioners using AI, while 51% were not. Asked about AI's potential to improve diagnostic accuracy, 51% of respondents said they were comfortable, while 42% said they were not.

These results come as research on AI in healthcare continues to show that it can improve diagnostic accuracy and create new opportunities for patients and healthcare professionals seeking medical guidance from AI-driven tools.


Byline: Quincy
