# Seminar 11

Artificial intelligence (AI) is beginning to be used for many life-changing decisions in medicine. Critics object that using AI in these areas is too likely to lead to harm, unfairness, and other moral wrongs. In response, I will argue that these decisions can be made safer and more ethical by building human moral values into the AI decision-maker. But how can we do that? I will discuss problems for some proposed ways to build morality into AI from the top down and from the bottom up. Then I will explain our lab’s novel hybrid alternative, which surveys human moral judgments and then corrects for ignorance, confusion, and partiality. Because our approach is based on idealized observer theories in ethics, it minimizes substantive assumptions about what is morally right or wrong, and it can be used in a wide variety of contexts. I will report initial empirical results using our method and discuss potential applications to kidney allocation and dementia.
* Short Bio: Walter Sinnott-Armstrong is the Chauncey Stillman Professor of Practical Ethics in the Department of Philosophy and the Kenan Institute for Ethics at Duke University. His most recent book is *Moral AI and How We Can Get There*, co-authored with Jana Schaich Borg and Vincent Conitzer (2024).
rTAIM (Rebuilding Trust in AI Medicine) Monthly Seminars are online seminars open to anyone in the world who would like to present their current research on topics related to the main theme of the project.
rTAIM Seminars: https://ifilosofia.up.pt/activities/rtaim-seminars
Organisation:
Steven S. Gouveia (MLAG/IF)
Mind, Language and Action Group (MLAG)
Instituto de Filosofia da Universidade do Porto – UIDB/00502/2020
Fundação para a Ciência e a Tecnologia (FCT)