This quarter, I’m helping teach the undergraduate methods class in the information school, which means witnessing people’s first reactions to (formally) confronting different ways of knowing in academia. Students encounter qualitative methods, students encounter quantitative methods - and students articulate their perceptions of each. One often-expressed idea in class has been that quantitative methods are better at getting at truth than qualitative ones; that a graph is more trustworthy than an ethnography. This is a sentiment I have heard a lot before, and clearly one that is baked pretty deeply into Western cultures, but what was interesting was the rationale. Many students’ respect for quantitative methods was justified by the fact that quantitative methods are difficult, and complex, to learn. In contrast - and sometimes this was explicitly said! - ethnographies or interviews are simply based on seeing, hearing, talking. Things that anyone can do; things that aren’t, correspondingly, special.
Obviously this isn’t a position I agree with, but I’m extremely glad to hear it verbalised, because that logic (ethnographic data can’t be depended on for truth because anyone can look at things/hear things!) is, I think, something that sits underneath a lot of dismissal of ethnographies and participant observation: professors who take that approach just don’t say it out loud. And setting aside the association of complexity with truth, and “specialness” with expertise, it’s fascinating to see people think of perception-oriented skills as passive, and pre-given, rather than always-already undergoing training and changing, because that’s not been my experience at all. To the contrary: research doesn’t just involve looking at new things, but (as a consequence of the looking), seeing them in new ways.
Research materialities
Here’s an example (probably my favourite example) from my dissertation fieldwork. As part of my research, I’ve been digging not only into formal, recognised archives but also administrative records: court filings, mainly. The “why” is an interesting story for another time, but what is important for this story is that many of these records, particularly the more-historical ones, are now stored on microfilms.
For anyone who hasn’t worked with microfilms, they are what the name suggests: strips of film that hold scaled-down copies of things (usually: texts). They’re an incredibly popular format for archival materials, due not only to their relative stability but also to their compression rate. By shrinking a page of paper down to 16mm, you can fit thousands of pages on a single spool of tape.
But that same compression also carries a pretty obvious cost: the human eyeball can’t really pick out the words on a piece of paper when that piece of paper is sixteen millimeters wide. Instead, you use a dedicated reader, which looks sort of like an exploded cassette player. The tape is run over a light source and a magnifying lens, and projected onto a screen at the original (human-readable) size.
There’s another wrinkle, too: one thing microfilms don’t have is any indexing mechanism. There’s no way of seeing what page you’re on (or might want to be on): from the perspective of the reader, it’s just a length of tape. And given that the whole appeal of the system is compression, institutions pack as many documents as possible onto that tape. So if you are, for example, looking for court case 12345, you might get a reel containing cases 12200-12500, each with a variable number of pages, and have to scroll manually an unknown distance through the reel before you can find the case you’re looking for.
Now: you could go through page by page at reading speed, but again, there are thousands of pages. So you take advantage of the fast-forward button instead, which zips through dozens of pages a second. Except just like the fast-forward button on a cassette or VHS, the speed means there’s no way of seeing what on earth is on the screen. So you have to sort of triangulate. Fast forward a bunch; stop; see what sheet you’re on. Too far? Rewind. Not far enough? Fast forward a bunch. And repeat. It’s better than nothing, but as I kept doing it - kept familiarising myself with the format and medium - it became clear there was a better way. Because even though microfilms don’t inherently support indexing, the court clerks had built an index in, and hidden it in plain sight.
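That stop-and-check triangulation is, in effect, a binary search over the reel. Here’s a toy sketch of the procedure in Python - the case numbers and page counts are invented for illustration, and the "reel" is just a list of frames, each stamped with the case number printed on it:

```python
# A toy simulation of "triangulating" on a microfilm reel.
# Hypothetical: case numbers and page counts are invented for illustration.
import random

random.seed(0)

# Build a reel: cases 12200-12500, each with a variable number of pages.
reel = []  # each entry is the case number visible on that frame
for case in range(12200, 12501):
    reel.extend([case] * random.randint(1, 20))

def find_case(reel, target):
    """Binary-search-style narrowing: jump, stop, read the frame, repeat."""
    lo, hi = 0, len(reel) - 1
    stops = 0
    while lo <= hi:
        mid = (lo + hi) // 2   # fast forward (or rewind) a bunch
        stops += 1
        here = reel[mid]       # stop; see what sheet you're on
        if here == target:
            return mid, stops
        elif here < target:
            lo = mid + 1       # not far enough: fast forward
        else:
            hi = mid - 1       # too far: rewind
    return None, stops

pos, stops = find_case(reel, 12345)
```

On a reel of a few thousand frames, this lands on the target case in a dozen or so stops - much better than paging through, but every stop still means reading a dense typewritten sheet.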
Seeing the material
Archival records, particularly those associated with active institutions, aren’t just for researchers. They’re also for court clerks and archivists, who are the people who prepared these microfilm reels in the first place. And given that they prepared those reels knowing they might have to use them, they were very familiar with the frustrations I was dealing with. So they built an indexing system into the tapes, one that looks something like this:
At the end of every case, this page (and text) appears: the “file divider”. What you’ll notice is that it has a very defined shape, one consisting of a square surrounding (largely) empty space. That makes it pretty much unique in a film reel otherwise consisting of dense, typewritten documents. No other sheet is going to look like it. And because the shape is mostly empty space, it lets almost all of the light from the projector through. Again: very different from every other sheet. And so unlike every other sheet, when the projector runs it in front of the lens, a distinctive shape appears, accompanied by a bright flash of light.
This is an index: an index you can use by turning your gaze away from the screen and towards the guts of the machine itself. You don’t look at the text, or try to understand it. You look at the aperture - the aperture through which light suddenly floods when a file end card passes in front of it. And you count those flashes of light. Case 109? That’s 108 flashes in. Even though the format itself has no indexes, an indexing system is there.
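The flash-counting index can be sketched the same way - a toy simulation, assuming the reel starts at its first case, with the frames and dividers invented for illustration:

```python
# Toy sketch of the flash-counting index. Frame contents are invented;
# "DIVIDER" stands in for the file-divider sheet that floods the
# aperture with light at the end of each case.
def frames(cases):
    """Yield the reel's frames: each case's pages, then a divider sheet."""
    for case_number, page_count in cases:
        for _ in range(page_count):
            yield ("PAGE", case_number)
        yield ("DIVIDER", case_number)

def seek_to_case(frame_stream, n):
    """The nth case on the reel starts after n - 1 flashes of light."""
    flashes = 0
    for kind, case_number in frame_stream:
        if flashes == n - 1 and kind == "PAGE":
            return case_number  # first page after the (n-1)th flash
        if kind == "DIVIDER":
            flashes += 1        # the bright flash at the aperture
    return None

cases = [(101, 3), (102, 5), (103, 2), (104, 4)]
assert seek_to_case(frames(cases), 3) == 103  # two flashes, then case 103
```

Note that `n` here is the case’s position on the reel, not its docket number - which is why, on a reel starting at case 1, case 109 is 108 flashes in.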
Nothing has changed between me “triangulating” and me counting flashes of light. The room, the microfilm, the reader, the structure of my eyeballs - everything material is the same. But the data I am accumulating, and how I interact with the machine, are different. Because I have learned to see differently; to see the same things in different ways, and to perceive new things altogether.
Perception, then, isn’t inherent; academic training around methods doesn’t consist of simply reformatting what you perceive to fit academic publishing norms. Instead it’s learned and habituated, just like quantitative methods. And like quantitative methods, it takes training and experience extensive enough that calling it “easy” is mistaking the doing for the knowing how to do.