My research agenda examines how software engineering practices are situated within contemporary political and social systems of power. This takes two main forms. First, while we frequently hear accusations that tech is racist or transphobic, we rarely see the technical causes of those outcomes; my research uncovers those causes, because when we know how we are producing harmful outcomes, we have an opportunity to do things differently. As an output of that research, I work on new technical practices. In this work, I ask questions like (these questions are what I’m writing my book about):

  • How can software engineers support social justice causes in their technical practices?
  • How have the ways we make software been shaped by oppressive social forces?
  • What new technical practices can we (engineers) integrate into our pipelines?

Second, I research how open source communities enact the community part of their name. My work here seeks to answer questions like:

  • How do we collect demographic data about community members, and what do we do with that data once we have it?
  • What actions and processes contribute to an open source community where systemically minoritized members feel safe(r) and welcome?
  • How can we identify and offset the problematic roots of open source to make contribution accessible to more people?

Right now, I’m giving talks primarily about the first aspect of my research. If you invite me to speak (hint hint), I can give talks titled:

  • “Abolitionist Data Practices: Applying political goals to data structures”
  • “Is my database racist? How white supremacy structures our databases”
  • “The history of data modeling” (this one is more exciting than it sounds, I swear!)
  • “Beyond ‘anti-racist’ engineering: why anti-racism does not make better software”

Uncovering the technical causes of technical injustice

In order to unearth the technical roots of injustice, I use several approaches, which I broadly consider to be three kinds of engineering:

Frequently, I perform what I call historical engineering: I use close reading, archival analysis, and software design to excavate software engineering practices with the aim of historicizing and denaturalizing them. Using this method, I have reviewed texts from early data modelers like E.F. Codd and Peter P.S. Chen. I also recreate technologies and practices described in early patent documents to pinpoint the moments in which problematic epistemologies were codified.*

I very often use reverse engineering to hack into existing software applications and examine the choices made by their creators. Most often, I reverse engineer the Android applications of smart home devices to uncover the relational data models within. I analyze in-code annotations, create entity-relationship diagrams, and then examine the resulting models, again linking them back to systems of race and gender power. In my work, I have developed a method for performing a reverse engineering investigation within a specific sociotechnical context: a situated analysis of the contextual epistemological frames embedded within relational paradigms.
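
To make this concrete, here is a minimal sketch of the kind of artifact this method surfaces. It assumes the decompiled app uses Android’s Room persistence library; the entity, table, and field names are hypothetical, not taken from any specific product.

```kotlin
// Hypothetical entity recovered from a decompiled smart-home app.
// Assumes the app uses Android's Room persistence library; table and
// column names are illustrative, not from any real product.
import androidx.room.ColumnInfo
import androidx.room.Entity
import androidx.room.PrimaryKey

@Entity(tableName = "household_members")
data class HouseholdMember(
    @PrimaryKey val id: Long,

    @ColumnInfo(name = "full_name") val fullName: String,

    // A single, non-nullable string: the schema assumes gender is one
    // fixed value per person, with no room for change over time.
    @ColumnInfo(name = "gender") val gender: String,

    // A binary "owner vs. guest" flag encodes who controls the home.
    @ColumnInfo(name = "is_owner") val isOwner: Boolean
)
```

Even a small class like this, once mapped into an entity-relationship diagram, makes visible the assumptions the schema holds about who lives in a home and how their identities can be recorded.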

I also use speculative engineering alongside software engineering to explore the creative possibilities of existing and not-yet-created computational tools. In my speculative engineering process, I frequently suggest that the temporality of modern digital computing is incommensurate with the temporality of modern transgender lives. Following this, I design and build a trans-inclusive data model that demonstrates ways to actively subvert systems of racialized and gendered power.
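
The sketch below is a simplified illustration of the general idea, not the model from my dissertation: it shows one way gender can be stored as an append-only history of self-described assertions rather than a single fixed column.

```kotlin
// A minimal, illustrative sketch of a temporally aware, self-described
// gender model. Names and structure are assumptions for illustration only.
import java.time.Instant

data class GenderAssertion(
    val description: String,      // free text supplied by the person themself
    val pronouns: List<String>,   // zero or more pronoun sets, also self-described
    val assertedAt: Instant       // when this assertion was made
)

data class Person(
    val id: Long,
    // Append-only history: new assertions are added, old ones are never
    // overwritten, so the record can change as the person's life changes.
    val genderHistory: List<GenderAssertion> = emptyList()
) {
    // The "current" gender is simply the most recent assertion, if any.
    fun currentGender(): GenderAssertion? = genderHistory.maxByOrNull { it.assertedAt }

    fun withAssertion(assertion: GenderAssertion): Person =
        copy(genderHistory = genderHistory + assertion)
}
```

Treating gender as a history rather than a field means the schema never forces a person’s past and present to collapse into a single value.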

The most fully developed example of these methods in practice is my dissertation, Modeling Power: Data Models and the Production of Social Inequality.

You can also read my other work (PDFs linked below):

Stevens, Nikki L., Anna Lauren Hoffmann, and Sarah Florini. “The Unremarked Optimum: Whiteness, Optimization, and Control in The Database Revolution.” Review of Communication. June 2021.

Stevens, Nikki L. and Os Keyes. “The Domestication of Facial Recognition Technology.” Cultural Studies. March 2021.

Stevens, Nikki L. “Dataset Failures and Intersectional Data.” Journal of Cultural Analytics. March 2019.

Stevens, Nikki L. and Jacqueline Wernimont. “Seeing 21st Century Data Bleed through the 15th Century Wound Man.” IEEE Technology and Society. December 2018.

Open Source Community Assessment and Culture Change

The roots of this work lie in my nearly two decades of involvement in open source communities. In the Drupal community, I founded the Drupal Diversity and Inclusion Working Group. Together with a few others, I grew the group from 5 members to 700 over two years, and we grappled with ways to make Drupal a safer space for folks from underrepresented groups.

Out of this work came Open Demographics, a project that uses open source paradigms to construct demographic questions, following the disability justice principle “Nothing About Us Without Us.” I’ve consulted for Mozilla, Stack Overflow, and WordPress on asking demographic questions and then doing the even tougher work of figuring out what comes next.
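
As a hypothetical illustration of the kind of question design this work pushes toward (not the actual Open Demographics question set), a demographic question can be represented as structured data that treats multi-select answers, self-description, and declining to answer as first-class options:

```kotlin
// Illustrative sketch only; field names and question wording are assumptions,
// not the Open Demographics specification.
data class DemographicQuestion(
    val id: String,
    val prompt: String,
    val options: List<String>,
    val allowMultiple: Boolean = true,     // identities are not mutually exclusive
    val allowSelfDescribe: Boolean = true, // free-text self-description
    val allowDecline: Boolean = true       // answering is always optional
)

val genderQuestion = DemographicQuestion(
    id = "gender",
    prompt = "What is your gender? Select all that apply.",
    options = listOf("Woman", "Man", "Non-binary", "Another gender (self-describe)")
)
```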

This work is frequently less legible and supportable in academic contexts, where I currently work, so I’m looking for partners to think with me about the lifecycles of community change and to build out my framework Open DEI, a project that gives communities a culturally relevant foundation for thinking about their own community health apart from statistically significant metrics.