
Is AI ethical in healthcare?


While LMMs (large multimodal models), a fast-growing type of generative artificial intelligence (AI) technology, can be used for specific health-related purposes, the question of how ethically they are used is equally prominent.


The World Health Organisation (WHO) has pointed out that these models risk producing false, inaccurate, biased, or incomplete statements, which could harm people who rely on such information to make health decisions.


There are five broad applications of LMMs in health: diagnosis and clinical care, such as responding to patients’ written queries; patient-guided use, such as investigating symptoms and treatment options; clerical and administrative tasks, such as documenting and summarising patient visits within electronic health records; medical and nursing education, including providing trainees with simulated patient encounters; and scientific research and drug development, including identifying new compounds.


While LMMs can be used for specific health-related purposes, they risk producing false, inaccurate, biased, or incomplete statements, which could harm people who use such information to make health decisions. LMMs may also be trained on data that are of poor quality or that reflect existing biases.


Speaking to the Observer, Dr Pallavi, Country Head, Global Healthcare, said that ethical considerations are crucial due to potential biases in AI algorithms, which necessitate ongoing assessment and feedback for improvement.


"Common AI mistakes in healthcare mirror human errors and may include misdiagnoses or failures in understanding accents for speech recognition. Social prescribing with AI integration requires caution for effective patient connection," she said, adding that understanding patient 360 would be big time miss that AI will face. Hence, AI must be more used to increase the efficiency of medical professionals rather than replace them.


LMMs can accept one or more types of data input, such as text, videos, and images, and generate diverse outputs not limited to the type of data inputted. LMMs are considered unique in their ability to replicate human communication and to carry out tasks they were not explicitly programmed to perform. LMMs have been adopted in healthcare faster than any consumer application in history, with several platforms, including ChatGPT, Bard, and BERT, entering use mostly in 2023.


Dr Dilip Kumar Singvi, Specialist in Internal Medicine at Shifa Hospital, said that technology has always been a double-edged sword, and the same goes for AI. "On one hand, it has proven to be an effective tool in enhancing the quality of healthcare for providers, patients, and insurers, enabling quicker diagnosis and subsequent management. On the other hand, there is a concern regarding the safety of using AI in healthcare, as raised by WHO in May last year," Dr Singvi said.


AI and large language model (LLM) tools work on the data fed to them; if that data is inaccurate or not grounded in fact, there is always a risk of false or uncertain results leading to substandard care, besides the threat of ransomware attacks or the hacking of data.


"Majority of patients are not confident with the treatment plan if done by third party, in the hands of AI instead of being directly getting formulated by the healthcare provider," adds Dr Singvi.


In a survey conducted in the USA, 57 percent of respondents said they were not happy with third-party decision-making; hence, there have been calls for using AI with the utmost care in the healthcare sector.


Speaking to the media on the sidelines of the launch of the WHO guidelines, Dr Jeremy Farrar, WHO Chief Scientist, said that generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks.


“We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities,” Dr Farrar said.


The guidance also recommends that LMMs be designed to perform well-defined tasks with the necessary accuracy and reliability to improve the capacity of health systems and advance patient interests, and that developers be able to predict and understand potential secondary outcomes.


WHO RECOMMENDATIONS


- Invest in or provide not-for-profit or public infrastructure, including computing power and public data sets, accessible to developers in the public, private, and not-for-profit sectors, that requires users to adhere to ethical principles and values in exchange for access.


- Use laws, policies, and regulations to ensure that LMMs and applications used in healthcare and medicine, irrespective of the risk or benefit associated with the AI technology, meet ethical obligations and human rights standards that affect, for example, a person’s dignity, autonomy, or privacy.


- Assign an existing or new regulatory agency to assess and approve LMMs and applications intended for use in healthcare or medicine – as resources permit.


- Introduce mandatory post-release auditing and impact assessments, including for data protection and human rights, by independent third parties when an LMM is deployed on a large scale. The auditing and impact assessments should be published and should include outcomes and impacts disaggregated by type of user, including, for example, by age, race, or disability.

