Trusting Black-Box Algorithms? Ethical Challenges for Biomedical Machine Learning

Research Project | 01.02.2019 – 01.10.2021
The integration of Artificial Intelligence (AI) into everyday life will deeply reshape many areas of society. Within the medical domain, applying machine learning (ML) techniques to health-related data promises paradigm-shifting advances: more accurate and efficient diagnostic tools, personalised therapeutic regimes, and prognostic or predictive measures that stand to improve patient treatment. Yet, while many authors have voiced general ethical concerns, in-depth ethical analysis of biomedical machine learning is still nascent.

Intricate ethical questions arise when the very design of a program renders it opaque to human understanding. Artificial neural networks employed for Deep Learning (DL) are a classic example. In DL, models learn their own multi-layered representations from vast training data, allowing them to find novel patterns in the data. In practice, this can be of great value if it renders a program's predictions and decisions more accurate or allows for new and improved scientific descriptions of a phenomenon. However, such novel representations, recognizing patterns as yet undetected and potentially undetectable by human agents, often make it impossible to fully explain and understand these so-called "black boxes".

In clinical contexts, such opacity poses particular ethical challenges. How can we address so-called responsibility gaps, created by complex interactions between human agents and black-box algorithms, if a program's recommendation is erroneous and endangers patients? How can informed consent be obtained for the use of a program that is in principle incomprehensible to both patients and health care professionals? How can we avoid discrimination against socially salient groups and protect vulnerable populations from systematic bias without understanding the underlying computational processes?

A popular strategy for tackling these challenges is to call for trust in AI. After all, trust can be a means of dealing pragmatically with uncertainty and incomplete knowledge in complex societies, and it is generally considered to be of vital importance in healthcare settings. Taking trust as its starting point, this project therefore asks whether, and under which conditions, we can and should trust medical black boxes, bringing together theoretical considerations and empirical analysis informed by semi-structured qualitative interviews. The results of both parts will be integrated in the spirit of critical applied ethics to evaluate social practices concerning biomedical ML, to improve bioethical theory addressing this field, and to provide guidance for ethics committees and regulatory bodies.
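To make the notion of such opacity concrete, the following minimal sketch (purely illustrative, not part of the project; assuming Python with scikit-learn) trains a small multi-layer neural network on a standard diagnostic benchmark dataset. The fitted model can be reasonably accurate, yet its learned parameters are just weight matrices: numerically inspectable, but offering no human-readable rationale for any individual prediction.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A small diagnostic dataset: 30 numeric features per tumour sample.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A (very small) multi-layer neural network; each hidden layer learns
# its own internal representation of the input features.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")

# The model's entire "reasoning" consists of learned weight matrices.
# They are fully inspectable as numbers, yet yield no human-readable
# rationale for why an individual case was classified one way or another.
mlp = model.named_steps["mlpclassifier"]
for i, weights in enumerate(mlp.coefs_):
    print(f"Layer {i} weight matrix shape: {weights.shape}")
```

Even in this toy setting, everything the model "knows" is encoded in a few weight matrices; scaling the same architecture up to millions of parameters, as in clinical DL systems, is precisely what produces the black-box character discussed above.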
Publications
Starke, Georg et al. (2021) ‘Why educating for clinical machine learning still requires attention to history: a rejoinder to Gauld et al.’, Psychological Medicine, 51(14), pp. 2512–2513. Available at: https://doi.org/10.1017/s0033291720004766.
Starke, Georg, De Clercq, Eva and Elger, Bernice S. (2021) ‘Towards a pragmatist dealing with algorithmic bias in medical machine learning’, Medicine, Health Care and Philosophy, 24(3), pp. 341–349. Available at: https://doi.org/10.1007/s11019-021-10008-5.
Starke, Georg et al. (2021) ‘Intentional machines: A defence of trust in medical artificial intelligence’, Bioethics, 36(2), pp. 154–161. Available at: https://doi.org/10.1111/bioe.12891.

Members (4)
Bernice Simone Elger, Principal Investigator
Eva De Clercq, Co-Investigator
Volker Roth, Co-Investigator
Georg Starke, Project Member