New year, new paper, new contradictions

By Os Keyes

Continuing the flurry of announcements: I have a new paper out, written with the delightful Katie Creel! As the abstract puts it:

While feminist critiques of AI are increasingly common in the scholarly literature, they are by no means new. Alison Adam’s Artificial Knowing (1998) brought a feminist social and epistemological stance to the analysis of AI, critiquing the symbolic AI systems of her day and proposing constructive alternatives. In this paper, we seek to revisit and renew Adam’s arguments and methodology, exploring their resonances with current feminist concerns and their relevance to contemporary machine learning. Like Adam, we ask how new AI methods could be adapted for feminist purposes and what role new technologies might play in addressing concerns raised by feminist epistemologists and theorists about algorithmic systems. In particular, we highlight distributed and federated learning as providing partial solutions to the power-oriented concerns that have stymied efforts to make machine learning systems more representative and pluralist.

Writing it - getting an opportunity to think through these problems (and thinkers) - was a joy. Not only did I get to collaborate with one of my favourite people, I got to draw attention to the work of one of the best and most underappreciated anthropologists of AI. It doesn’t hurt that the paper’s core argument - the need for feminist critical engagement with AI to be actual engagement - feels important.

But: it’s also potentially confusing, because given my existing work I wouldn’t be surprised if a lot of people saw me as fundamentally opposed to AI and to the people who build AI systems. A lot of the time, this is the case, but overall it’s more complicated than that. It’s more about how I respond to the tension and contradiction, in research and life, between the ideal and the pragmatic; the utopian and the immediate. It seems to me that any domain of critical analysis needs a balance of the two, contradictory though that seems. It needs people advocating both that we change the system in immediate, meaningful ways and that we meaningfully unpick it in the long term.

In most spaces around AI, the problem is too much pragmatism and immediacy (or, cynically: too much conservative selling-out disguised and excused as “pragmatism”). We have proposals for voluntary ethical codes that don’t ask who will (or will not) adopt them; new approaches to developing software that treat the premise that it should be developed at all as obvious; facial recognition researchers explaining that they exploited trans people in order to be inclusive. Too many people are happy with the making, and uncomfortable asking questions where the answer might be not to.

But there are other spaces where the opposite problem occurs - and a lot of the more philosophically-inflected critiques of AI fall into this category. None of the work is wrong, or unnecessary. But taken together, it often feels (slightly) divorced from the actual possibilities of making, and doing, AI. People end up reacting to what places like Google are doing, and so conflating “what AI is” with “what Google is”.

Hence, well, this paper! Like Adam before us, we ask what “making” might look like, examining not Google-style large language models but instead things like meaningfully-decentralised and federated systems. It’s (I hope) a fun and interesting read - even if it comes off as an unusual one. You can grab it, as usual, here.
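A postscript, for readers who haven’t run into federated learning before: below is a minimal, purely illustrative sketch of the federated averaging idea (FedAvg, from McMahan et al., 2017), in which each participant trains on data that never leaves their machine, and only model weights are shared and averaged by a coordinator. To be clear, this is a toy example of my own - a hypothetical linear model in numpy - not code or an algorithm from the paper.

```python
# Toy sketch of federated averaging (FedAvg): clients train locally on
# private data; only model weights travel to the server for averaging.
# Hypothetical example (linear regression via gradient descent), not
# code from the paper.
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's local training: gradient descent on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, client_data):
    """One FedAvg round: each client trains locally; the server averages
    the returned weights, weighted by each client's dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Usage: three clients, each holding private data the server never sees.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
print(w)  # approaches true_w without ever pooling the raw data
```

The design choice that matters is structural: the raw data stays local and only parameters move, which is part of what makes these architectures interesting from the power-and-representation standpoint the paper takes up.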