This is the associated FAQ for The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition, the solo-author paper I got into CSCW 2018 (and am super excited to present!)
What’s this about, then?
There’s a subfield of facial recognition called gender classification, or gender recognition, which purports to, well, recognise people’s gender based on photos of them. This is not only worth exploring directly (look at the red flags on that), but it’s also an opportunity to explore HCI’s approach to trans issues and whether that has got any better over time, because HCI has repeatedly used the technology in papers.
What we found was that the technology was, indeed, just as dodgy as it sounds - and so was HCI’s use of it, ranging from papers that did not define ‘gender’ but used it to mean whatever the heck gender classification APIs meant, to papers that implicitly or explicitly defined gender in a trans-exclusionary way. Overall, this suggests the field has made very little progress on undertaking work that is trans-inclusive, as exemplified by the wide array of explicitly essentialist work, particularly around health.
And what’s the solution?
More trans people. But in the meantime, people actually educating themselves, explicitly operationalising gender in a way they sure as shit haven’t been, and trying to deconstruct what precisely they’re looking for when one of their variables is called “gender”.
Also ending gender classification as a technology in its entirety.
Why did you write a solo-author paper as a first year?
The professionally beneficial answer: I’m a wunderkind whose name you should keep in mind for postdocs, low-work, high-impact publications and grant submissions with multiple commas in the dollar amount.
The actual answer: Everyone Knows that first-years do not write solo-author papers because it is a terrible, exhausting idea - and because Everyone Knows, nobody felt a need to tell me until I’d already done it. I and my sleep cycle have learned our lesson, probably, although CHI suggests that’s a very overestimated “probably”.