AI Ethics in Healthcare

Yesterday, the World Medical Association updated its core set of ethical principles for medical research (known as the Declaration of Helsinki) for the first time in 11 years. The revisions seek to modernize protections, adding new language on justice, public health, vulnerability, and consent for the collection of personal data, all of which has implications for the future role of AI in the sector.

The World Health Organization (WHO) has identified six core principles for ‘safe and ethical AI’ in health: (1) protect autonomy; (2) promote human well-being, human safety, and the public interest; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.

Looking through the six WHO principles, it is not entirely clear how, apart from the sixth, they differ from the qualities you would want in any other element of the healthcare system (e.g., promoting well-being, transparency, equity, and so on). So what, if anything, is distinctive about AI?

In the UK, the National Health Service has created an AI Ethics Initiative focused on ‘how to counter the inequalities that may arise from the ways that AI-driven technologies are developed and deployed in health and care’, and an AI Lab charged with ‘accelerating the safe adoption of artificial intelligence in health and care’.

These declarations, principles, and initiatives all attempt to provide some basis for grappling with the challenges AI poses for healthcare. Are any of these approaches (or similar efforts you can identify) likely to succeed? Why or why not? What do you consider the biggest challenges posed by AI in the healthcare domain? What about the greatest opportunities? How should the key actors trade off the risks and benefits of expanded use of AI in healthcare?