This is a slightly edited version of a response I gave, alongside Professor Lucy Suchman, to Kalindi Vora’s keynote at the Cambridge Summer School on Histories of Artificial Intelligence.
It is an absolute pleasure and privilege to be here, and to be responding to and with two scholars whose work I so admire and appreciate. I cannot promise my contribution will be anywhere near as insightful as theirs – but I can at least promise that it will be less awkward than the last time this happened.
A mortifying aside
See: in early 2019 I was invited to a Cambridge workshop on gender and AI, where I ended up sitting on a panel with Alison Adam and Judy Wajcman. Who are personal heroes and icons of mine. Which made it very awkward when one audience member’s first question was: “Os, do you like the name of Judy’s new research center?”
To which my reply – the first time I had ever interacted with Judy, or spoken in her presence – was: “no”. And then turning to her and saying: I’m absolutely mortified by this, honestly I’m a tremendous fan and brought my copy of Feminist Technoscience in the hopes that you might sign it, and going the same colour as my lipstick out of shame.
Feminist, anti-racist AI
Unlike the name of Judy’s research center, I very much enjoyed and valued Professor Vora’s talk. The call for reimagining the social and the human reminds me of Nikolas Kompridis’s recent work, where he points out that such reimaginings are necessary precisely because the current technoscientific desire for the future means we may not have this present to work and theorise with for much longer. A particular type of joy comes from Professor Vora’s discussion of resistance, and the already-ongoing reworkings of technology by users; it often feels, reading scholarship on AI (including my own), as though we have got very good at narrating a sort of grim determinism but are nowhere near as good at articulating what anyone might do about it.
As I understood it, Professor Vora’s keynote emphasised and explored the ways in which a historicist evaluation of AI requires taking into account the ongoing racial and colonial legacies of “the human” and “the social”. This is a welcome intervention – although there is some fantastic work in domains like STS on colonial underpinnings and resonances in datalogical thinking (I’m thinking in particular of Payal Arora and Nick Couldry’s research), there is less discussion of how deeply these ways of thinking – and of thinking about thinking – run at the ontological and epistemological level.
Disability and algorithms
Given all this agreement and appreciation, all I really have to contribute is a gloss – or perhaps more appropriately, an augmentation – on the need to integrate critical approaches to, and awarenesses of, disability into such theories of the social. Part of this is historicist: there is a long history of interactions between notions of disability and capitalist ideas of productivity and “use”, and of the racialised and gendered entanglements of disability with biopolitical projects. The same is true of the ideas of eugenics, “autonomy”, reason, failed humanity and personhood that Professor Vora discusses here.
Part of it is also motivated by the ways that I see “the social”, and ideas of emotion and personhood, being deployed in AI right now – a prominent site of explicit theorising is disability. We can see this in Affectiva, the company perhaps most prominent for bringing ideas of emotion recognition and affective AI into technological development and popular culture, which originated not as a marketing company (as it now bills itself) but as a pseudo-pedagogical project to “fix” disabled children marked as asocial. Only a couple of years ago, some – I’m sure – very well-intentioned HCI researchers proposed “fixing” the social and emotional aspects of disabled life essentially by hooking disabled people up to an always-on network of monitoring sensors to alert for social disconnects that could be connected. There are a thousand other examples, from diagnostic systems to ideas of depending on “useful” disabled people as low-level technology workers – for which I will awkwardly suggest my paper “Automating Autism” if you want to read more – but you get the idea.
Despite this focus on disability in a lot of AI work around the social and the emotional, there is relatively little critique or inquiry of it within critical data studies or critical AI research. As scholars such as Ashley Shew and Aspen Lillywhite have highlighted, there is extensive (and welcome) inquiry into race and gender as intertwined systems of power in algorithmic thought and work, but a dearth of inquiry into disability.
But I don’t suggest this gloss or augmentation solely to address ontological disconnects. My motivation is also that I think it would allow us to more adroitly address some of the critiques and concerns that have arisen from critical disability studies in response to more relational and collective notions of personhood. There is a lot of fantastic feminist and postcolonial work on both, but – for very good reasons – scholars tend to focus on “those who do the work”. A particular site of focus is often care work, given the gendered and racialised aspects of who does this work. But one response from feminist theorists of disability has been, as Sunaura Taylor puts it, to point out that “being ‘cared for’ can be stifling”. A deeply relational and social idea of personhood requires recognising not only the worker but the subject of the work, and the way that being the subject includes being subject-to. In the case of things like social robots, this means attending not just to the ways that technologies, or human labour, can be made invisible, but to the ways that the humanity of the target of that labour can be made to vanish, even in feminist theorising that seeks to render oppressed workers visible.
To end on an optimistic note, as Professor Vora did, there is some work already on this! In particular I would point the audience to Olivia Banner’s brilliant Catalyst paper on “Technopsyence and Afro-Surrealism’s Cripistemologies”, along with Damien Patrick Williams’ paper on “Heavenly Bodies”, which thoughtfully interrogates Harawayan research on cyborg theory through a lens of Black Critical Disability Studies. And beyond scholarly publications – because we should always be going beyond scholarly publications – I am deeply excited by the workshops and projects that Professor Vora has shared in her talk as an opportunity to achieve just these insights. They resonate strongly with the work of Katta Spiel and Cynthia Bennett, who are (together, and apart) doing amazing work in HCI theorising about different ways of exploring futures, and the different futures that might result.