I recently testified in front of the Washington House of Representatives’ Committee on Innovation, Technology and Economic Development about House Bill 1654, regulating facial recognition. My prepared statement is reproduced below:
Chair Hudgins, members of the Committee, my name is Os Keyes. I am a PhD student at the University of Washington, where I have spent the last two years studying facial recognition systems. I am also trans, and I am also a former Data Scientist. I present this testimony as someone who can speak directly to the possibilities of the technology, the societal perils of it, and the individual impacts. From those perspectives, and with those experiences, I thoroughly endorse House Bill 1654, and I endorse Jevan Hutson’s earlier comments on it. I am, in fact, one of the experts he mentioned who had undertaken research and found facial recognition to have a discriminatory basis. I have provided copies of my paper, and several by other researchers in the field.
I undertook this research, and I am here to provide this testimony, because this technology is deeply threatening to trans existences, particularly the existences of trans people of colour. This is for a couple of reasons.

The first is simply that the technology is often biased. It is less accurate with trans people; it is less accurate with people of colour. And so when you take it and deploy it, it is more likely to flag trans people as incongruent, unexpected; to falsely identify us as matches, or highlight us for attention.

If this happens – if someone is flagged, whether due to dataset bias or a simple error, and reported up to an operator – trans people are also more likely to face unpleasant and often violent outcomes. Because we are incongruent, because we are often unexpected, there is a long history of government entities, be it the police or the housing administration, discriminating against us. The National Transgender Survey, in their 2015 report on Washington State, found that 60% of trans people who had interacted with the police experienced mistreatment. And this is without algorithms prompting the police to interact with us. 33% of us experienced discrimination in public accommodations – even without a facial recognition system prompting the administrative official weighing up our housing, healthcare or benefits to give us a second look. These numbers are far worse for trans people of colour, particularly trans women of colour.

Facial recognition systems, whether integrated with automated decisionmaking or not, are inevitably going to make this far worse unless we can have some guarantee that they are identifying people in an unbiased way, and as part of a system that is responsive to the needs and concerns of the most vulnerable.

Now, at some point I expect you to hear a facial recognition developer say: don’t worry! We’re fully accredited! We’ve fixed the bias problem! And they might be right, but whether they are or not, it is vital that there be a process for ensuring they are right – that they be forced to prove it. There is a long history of these systems being biased, even those produced by vast companies with vast resources such as Microsoft and Amazon. The field’s existing testing standards almost never consider this an issue; the gold standard test, administered by the U.S. government, not only fails to evaluate racial bias or accuracy rates for trans people, but uses a dataset that has been known to be racially biased since 1995. If the state government is going to purchase or develop these technologies, it has a responsibility to ensure that they are safe. 1654 is an important step in meeting that responsibility.
You can see a full video of the hearing here.