Efforts to address algorithmic harms have gathered particular steam over the last few years. One area of proposed opportunity is the notion of an “algorithmic audit,” specifically an “internal audit,” a process in which a system’s developers evaluate its construction and likely consequences. These processes are broadly endorsed in theory—but how do they work in practice? In this paper, we conduct not only an audit but an autoethnography of our experiences doing so. Exploring the history and legacy of a facial recognition dataset, we find paradigmatic examples of algorithmic injustices. But we also find that the process of discovery is interwoven with questions of affect and infrastructural brittleness that internal audit processes fail to articulate. For auditing to not only address existing harms but avoid producing new ones in turn, we argue that these processes must attend to the “mess” of engaging with algorithmic systems in practice. Doing so not only reduces the risks of audit processes but—through a more nuanced consideration of the emotive parts of that mess—may enhance the benefits of a form of governance premised entirely on altering future practices.
Although this is the first time the written version has been put in front of a large audience, I was lucky enough to present the thoughts and findings behind it to the Tufts STS program. They seemed to really enjoy it (which was gratifying) and asked - as would be expected of students of Sam and Nick! - some fantastic questions.
One of these was, to paraphrase, whether I'd do it again - whether I regretted exposing myself to all the misery involved in staring deliberately into the void of awful we were investigating. And, well, there were some extremely rough moments, and times I ended up having to take breaks at (or under) my desk. Nevertheless, my immediate answer, and the one I stick with, is "yes". Not because of what I learned about the work, but because of the opportunity it represented for habit formation and self-examination.
To recap the central argument of the paper: the work of algorithmic audits is messy and laced with feelings, many of them both unpleasant and resonant with how the auditors are oriented towards the world (and vice versa). This means that audits can be highly miserable experiences - particularly due to the (ethics-laundering-sourced) drive to include “diverse stakeholders” in audit teams. Being a minority representative in a job that involves gazing at the worst bigotry a tech company lets through means being, essentially, a sin eater.
But: the very same affective charge of auditing work can also be used for insight, particularly if channeled appropriately. It can be used to sensitise, and discover things about the work one is doing and the objects one is focused on (anger, after all, is sometimes productive). And in keeping with the fact that those objects tend to stare back, it can also be used to sensitise and make discoveries about the self.
Our project was autoethnographic (or duoethnographic; it's hard to work out where to draw the line). We were evaluating not only the dataset, but our experience of doing so. Correspondingly, we spent a lot of time journalling, talking through journal entries and feelings with each other, and trying to articulate how we were feeling (and why we were feeling that way).
Doing so meant I learned a lot about, well, the process of auditing - but it also provided an opportunity to think and reflect on how I respond to the world, and relate to it. Being able to dedicate time to that, in a world as frantic as the one we find ourselves in, is rare. Being able to do so and call it work? Even rarer.
As a result: while the unhappiness was, well, unhappy, it was also a gift - an opportunity. And so in this very particular context, with the support and community involved, I'd do it again in a heartbeat. I don't regret it because, to paraphrase Paddy McQueen, I am happy to be the person I am, having experienced it.