A Dilemma for Skeptics About Trustworthy AI
Overview
Can AI ever be (un)trustworthy? While numerous AI ethics guidelines provide criteria for developing trustworthy AI, a growing number of philosophers argue that AI cannot be trustworthy because it lacks some human feature deemed essential to the trust relation, such as moral agency or responsiveness to reasons. Here we propose a dilemma for these skeptics. Such theorists must hold either that there is only one kind of trust (monism) or that there are multiple varieties of trust (pluralism). The first horn of the dilemma is that a monistic view of trust is implausible; no single analysis can capture all kinds of trust relationships. The second horn is that if such theorists adopt a pluralistic account of trust, then they have little reason to deny that AI is the sort of thing that can be trustworthy. While AI may lack characteristics required for some kinds of trust relations, these are not necessary conditions for trustworthiness as such.
Madeleine Ransom is an Assistant Professor of Philosophy in the Department of Economics, Philosophy, and Political Science at the University of British Columbia, Okanagan, and a Faculty Associate at the W. Maurice Young Centre for Applied Ethics. She holds the Canada Research Chair in Artificial Intelligence, Wellbeing, and Ethics.
Nicole Menard is a fourth-year undergraduate student at the University of British Columbia, Okanagan, majoring in Philosophy. She works as a research assistant for Madeleine Ransom on trustworthy AI and is a member of the Digital Transparency Cluster. Upon graduation, she intends to pursue graduate studies in Philosophy.
Good to know
- Duration: 1 hour 30 minutes
- Location: Online event