Stellenbosch University: AI in healthcare should promote human well-being and safety
We know there is a role for artificial intelligence (AI) in public health crises and emergencies. We must consider the key ethical issues, develop the necessary legal frameworks, and ensure there is adequate public engagement so that people can understand the impact of AI in healthcare.
This was the view expressed by Prof Keymanthri Moodley from the Centre for Medical Ethics and Law at Stellenbosch University (SU) in a recent Stellenbosch Forum lecture. The lecture, the fifth in the series for 2022, was themed “Digitisation of healthcare: adapting to change”.
The Stellenbosch Forum lecture series was started in 1990 and provides regular opportunities to SU staff and students as well as members of the public to learn more about the world-class research conducted at SU. Presented in an accessible and understandable way, these lectures offer both academics and non-academics a platform for critical debate across disciplinary boundaries.
In her lecture, Moodley said AI has accelerated the introduction of technology to healthcare. She mentioned robotics, triage algorithms, sensors, wearables, portable diagnostic devices, chatbots, virtual reality and holograms as examples of how AI has entered the doctor-patient relationship.
“During the Covid-19 pandemic, we were made more aware of the critical role of AI in healthcare. One of the challenges was the distribution of limited resources, when doctors and nurses had to deny patients access to care they needed – especially when everyone admitted to intensive care units needed a ventilator or critical care services.
“People started to develop triage algorithms based on machine learning that could help make those triage decisions for healthcare workers. In some settings, these triage algorithms were found to select patients just as well as a team of triage healthcare professionals.”
According to Moodley, this remains a very controversial area because the factors involved are not always purely objective and scientific. “While algorithms do offer a considerable advantage in helping with triaging, they’re not foolproof all the time.”
She said another device used during the pandemic was the pulse oximeter, which measures blood oxygen levels. Unfortunately, it had problems of its own.
“The oximeter was built based on the collection of large amounts of data. It appeared to embed racial and ethnic bias in how oxygen levels were detected in people with lighter and darker skin.
“Based on the science used to develop the oximeter and the data fed into the system, many devices overestimated the oxygen levels in people of colour. If a person of colour had a lower oxygen level requiring more urgent treatment, the oximeter would read a higher oxygen level. In some respects, this was misleading and could have led to some patients not receiving treatment in time and according to their needs.”
Moodley pointed out that data collection and research in healthcare are closely linked to precision medicine. This is an important approach to disease treatment and prevention because it focuses on the individual – their genetic makeup, environment, and lifestyle – and tailors treatment to each individual. In this form of medicine, all forms of data collection and AI are important.
She emphasised the need to protect data and said a huge ethical discussion on aspects of data sharing, consent, ownership, and the commercialisation of data is ongoing.
“AI should promote human well-being and safety and should be in the public interest. There should be transparency, and doctors should be able to explain the use of AI to patients. We should promote AI that is responsible, inclusive, equitable, and sustainable,” Moodley concluded.