My good friend/collaborator/brilliant human being Ahmer Arif and I have a new paper out for the TRAIT 2022 workshop. It’s part of an ongoing project trying to deepen the philosophical underpinnings of discussions of trust in both Artificial Intelligence and misinformation research. For this piece, we dig (briefly) into the idea of vulnerability and its role as an essential underpinning of trust:
In this paper, we work out a position on the interrelations between vulnerability, trust and distrust in AI research. Vulnerability is an important aspect of trust because it is critical to understanding the issues of power and dependency that characterise many trust-relationships. Building on the literature, our work considers how vulnerability is conceptualised in research on AI trust more broadly. Drawing on Annette Baier’s contributions, enlarged and developed recently by Gilson and Mackenzie, we argue for treating vulnerability as a potential source of positive change rather than as a de facto negative state to be avoided. Illustrating our argument with examples from both within and beyond the AI Trust literature, we suggest some implications for AI researchers of viewing relations of trust as a product of mutual vulnerability. We conclude by describing some ways in which our argument is incomplete and needs further development.
Get the paper here!