Racial Justice
Decode the Default

Technology has never been colorblind. Since the beginning of the internet, calling out racial inequities in data and algorithms has meant facing denial and backlash. It’s time to abolish notions of “universal” users of software and beneficiaries of digital rights.

January 2021

When the Blackbird web browser got its wings in 2008, its three creators didn’t anticipate there would be a backlash. News headlines about a “browser for Black people” inspired seething accusations of racism and hate email. The three friends and internet entrepreneurs saw an opportunity to surface African American news sites and blogs in new browser windows, with hashtags and a custom Google search drawing on a curated list of Black media outlets. “People would ask, ‘Where is the Whitebird browser?’” says Arnold Brown, a co-founder of Blackbird with Ed Young and Frank Washington.

In an article that chronicles Blackbird’s rise and fall, Brown writes that their software (built on Firefox’s open source code) was downloaded 300,000 times in the first few months, but that even some in the African American community felt it promoted “self-segregation.” He believes their experiment to uplift Black media online was so misunderstood because many didn’t see the problem. “The internet was supposed to be colorblind,” he says.

When you open a typical browser window and begin an internet search query, what you see on the first page of results is a reflection of relative popularity as well as profit-driven hierarchy. On Google (which has a nearly 90% global market share for search), it usually includes paid results backed by the largest dollar amounts and content created by Google itself. The basic logic of what to privilege evolved from identifying which pages were most clicked and linked to by others on the web, easily leading to a “mainstream” bias.
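
As a rough illustration of that dynamic, consider a ranker that scores pages purely by how many other pages link to them. This is a hypothetical sketch of the general idea, not Google’s actual algorithm, and all the domains in it are made up: sites that are already heavily linked, which tend to be mainstream ones, rise to the top regardless of their relevance to any particular community.

```python
# Toy link-count ranker illustrating how popularity-based scoring favors
# already well-linked "mainstream" pages. Hypothetical sketch only, not
# Google's actual algorithm; all domains below are invented.

from collections import Counter

# Hypothetical link graph: each page maps to the pages it links to.
links = {
    "portal.example":     ["bignews.example", "bignews.example/story"],
    "blog-a.example":     ["bignews.example"],
    "blog-b.example":     ["bignews.example", "blackmedia.example"],
    "blackmedia.example": ["blog-b.example"],
}

# Score every page by the number of inbound links it receives.
inbound = Counter(target for targets in links.values() for target in targets)

# Pages that few others link to sink to the bottom of the results,
# however relevant they may be to a given searcher.
for page, score in inbound.most_common():
    print(f"{score:2d}  {page}")
```

Real ranking systems blend many more signals, but the feedback loop is the same: visibility attracts links, and links confer visibility.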

Advanced algorithms that surface “useful and relevant” content from billions of webpages have made Google one of the world’s most profitable businesses. In the process of creating this powerful window on the web, the company has sometimes turned a blind eye to ethical quandaries of search that are byproducts of how inequalities are reflected online.

True colors exposed

One striking example of how search engines reinforce racism comes from Safiya Umoja Noble, an internet researcher in California. Noble famously searched Google for the term “Black girls” in 2009 and was shocked to see how women of color were predominantly represented by pornography. In 2018, Noble went on to publish the best-selling book Algorithms of Oppression, in which she tears down the notion that a search engine can ever offer a level playing field when both the algorithms and the datasets (which include content, referrals and traffic) are biased against people of color.

Eventually, Google’s search results for “Black girls” were adjusted to show more positive representations, even listing the education initiative Black Girls Code. But Google did not engage openly with researchers or signal a commitment to more sustainable structural changes to search functions. As recently as June 2020, digital ad buyers using the terms “Black girls,” “Latina girls,” or “Asian girls” were shown dozens of pornographic keyword suggestions by Google’s automated keyword tool, The Markup reported.

With Blackbird, Brown and his colleagues said they aimed to solve for how a person could search for something like “barbershop” and not need to scroll to page 8 or 9 before coming across a result from a Black perspective. That was alienating for individual users, but it was also a barrier for community media and blogs that depended on online ad views for survival. Similar dynamics play out on a global scale among internet users who crave local and ethnic representation. In fact, according to Brown, nearly 40% of Blackbird’s downloads came from Afro-Brazilians in Brazil who felt an affinity for the content.

“From the dawn of the internet until now, there has been this fantasy that the internet should be race blind and color blind. But the normative experience is actually ‘Whitebird’. If your browser is only making things visible to you that are not produced by people like you, it’s not really an egalitarian medium,” says Charlton McIlwain, author of the book Black Software, and a professor of media and communication at New York University. “If we want to talk about corrections and interventions, let’s think about building and mobilizing in ways that directly help to expand race-based opportunity,” he says.

Diversity? Not yet

When the internet appears predominantly White* and US-centric by default, it is because it reflects a particular corpus of web content and the context of software developers, managers, and executives of technology companies, who are rarely diverse in terms of race, ethnicity or gender. This has a direct bearing on outputs. While the mainstream notion of a “default” user is ubiquitous, this monolithic concept of a generalized user is often based on invisible specificities that are mischaracterized as universal: White, cisgender male, American. Considering the powerful role the biggest companies play in every aspect of our lives, the stunning lack of Black, Latinx and non-male representation is a core concern.

Matt Mitchell, the founder of CryptoHarlem and a technology fellow with the Ford Foundation, says he has heard countless excuses for the lack of diversity in technology companies, but does not believe it is a “pipeline problem”. He says the main problem is that tech companies fail to create work environments where people of color and women want to stay. “Does tech deserve me?” is a reasonable question for young people to ask at the start of their careers, he says, especially considering that so many of the problems of discrimination and sexism at companies appear to be foundational. “I want to be wrong, but we all know the medicine for a diversity problem, and they won’t take it,” says Mitchell. “Maybe the best way to fix big tech is to build something adjacent to it.”

Blind spots resulting from the lack of diversity in the tech workforce are particularly noticeable as more cameras, sensors and artificial intelligence infiltrate daily life. When automatic soap dispensers in public bathrooms do not recognize dark-skinned hands, it raises the question: who was the imagined default user? These are not insignificant errors. On the contrary, they represent fundamental problems of our present and future.

Data that discriminates

AI researchers Joy Buolamwini and Deborah Raji of the MIT Media Lab caused a stir with a 2019 research paper that demonstrated how Amazon’s facial recognition software, Rekognition, mistakenly classified women with dark skin as men 31% of the time. As founder of the Algorithmic Justice League, Buolamwini heads research and advocacy on gender and racial bias in numerous corporate AI systems. She and others have successfully used studies about disparate accuracy levels to pressure tech companies to improve their products and even pause collaborations with law enforcement.
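
The method behind such audits can be sketched in a few lines. Below is a minimal, hypothetical version: run a classifier over labeled test images and compare error rates across demographic subgroups. (The actual studies evaluated commercial systems against curated, labeled benchmark photos; the records here are invented stand-ins.)

```python
# Minimal sketch of a disparate error-rate audit. The records below are
# hypothetical stand-ins; real audits such as Gender Shades evaluated
# commercial systems against curated, labeled benchmark photos.

from collections import defaultdict

# Each record: demographic subgroup, ground-truth label, model prediction.
results = [
    {"group": "darker-skinned women",  "true": "female", "predicted": "male"},
    {"group": "darker-skinned women",  "true": "female", "predicted": "female"},
    {"group": "lighter-skinned men",   "true": "male",   "predicted": "male"},
    {"group": "lighter-skinned women", "true": "female", "predicted": "female"},
    # ...in practice, hundreds of images per subgroup
]

tallies = defaultdict(lambda: {"wrong": 0, "total": 0})
for r in results:
    tallies[r["group"]]["total"] += 1
    if r["predicted"] != r["true"]:
        tallies[r["group"]]["wrong"] += 1

# Large gaps in error rate between subgroups are the evidence of
# disparate accuracy that audits like these make visible.
for group, t in tallies.items():
    print(f'{group}: {t["wrong"] / t["total"]:.0%} error rate ({t["total"]} samples)')
```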

In March 2019, 78 senior AI researchers called on Amazon to stop selling Rekognition to police based on evidence of high error rates. Such faulty technologies (and there are more than just Amazon’s) have been implicated in the misidentification and arrests of innocent people with little or no accountability from the companies involved.

These efforts to roll back technologies like facial recognition are part of a broader struggle to resist carceral applications of digital technology, based on an understanding that the same types of technology are marketed for different use cases. As Chelsea Barabas writes, for some people, facial recognition represents an easy way to unlock an iPhone. For others, it shows up in their lives in the form of public surveillance or police body cameras. And often it is the biggest consumer tech companies, like Google, Microsoft or Amazon, that also develop technologies for the military and police.

Black lives matter

Throughout 2020, one of the largest people’s movements in U.S. history forced a long-overdue global conversation about racial justice, especially in the realm of technology. The killing of George Floyd, an African American man, by a White police officer in broad daylight was a spark that led millions more to participate in decentralized protests against systemic racism and police violence under the hashtag #BlackLivesMatter.

Black and Brown neighborhoods have long been targets of over-policing and mass surveillance. With the expansion of internet-connected devices, cameras and artificial intelligence, the ability of law enforcement to exert unwarranted control has grown. Drones, automated license plate readers, facial recognition and predictive policing systems are part of an expanding digital toolbox that automates bias and lends legitimacy to narratives used to justify violence against Black and Brown communities.

At the same time, surveillance tech is growing more pervasive in homes, schools, workplaces and neighborhoods everywhere. In the U.S., such technology often links up digitally with law enforcement, whether it’s biometric scanners for entry to housing complexes or consumer gadgets like Amazon’s video doorbell, Ring. At least 1,500 police and sheriff’s departments across the U.S. have entered into partnerships with Ring, gaining easier access to images from hundreds of thousands of outward-facing cameras. Social apps where people share their own surveillance images stoke racial suspicion and division.

This isn’t just an issue on U.S. soil; it is also pivotal to the economic and military expansion of the United States abroad. Makers of police and surveillance technology in the U.S. use urban neighborhoods as testing grounds for military and counter-terrorism technologies intended for worldwide use. In an interview with Logic Magazine in August 2020, Sarah T. Hamid of the Carceral Tech Resistance Network shared several examples and pointed to direct partnerships between U.S. police departments and local law enforcement in different countries.

Equipping the powerful

One after another, in the wake of 2020’s protests, tech companies pledged to donate to racial justice causes, to diversify their workforces, and in some cases to curb racism on their public-facing platforms. Airbnb said it would investigate racial discrimination in its vacation rental processes with Color of Change and Upturn. And the dating service Bumble said it would increase efforts to root out hate speech in profiles and messages in partnership with the Anti-Defamation League and the Southern Poverty Law Center. [Mozilla’s commitments are described here.]

For the tech companies that earn large paychecks from military and law enforcement contracts, any reckoning will have to cut deeper than statements of solidarity in order to satisfy human rights defenders. Since 2018, the #NoTechForICE campaign, led by Mijente, a political action hub for Latinx and Chicanx organizing, has worked with tech workers to expose the role of big tech in facilitating surveillance, detention and deportations by U.S. Immigration and Customs Enforcement (ICE). The campaign calls on companies like Palantir, Thomson Reuters, Clearview AI, Salesforce, Microsoft and Amazon to take a stand against the use of their technology to target, capture, and detain people, who are then held in ICE facilities under deplorable conditions.

Surveillance and facial recognition systems reinforce discrimination, and in 2020 even companies that sell such software were pushed to own up to this publicly. In June, IBM announced it would withdraw “general purpose IBM facial recognition or analysis software,” citing racial bias among several concerns. In a letter to the U.S. Congress, IBM’s CEO, Arvind Krishna, called for policies to determine whether law enforcement agencies could use such technology “responsibly.” Two days later, Amazon announced a one-year moratorium on police use of its facial recognition software, Rekognition, and also called for lawmakers to propose new regulations for its “ethical use.”

While critiques about the poor accuracy of facial recognition systems have hit home, that doesn’t mean these systems will be safe to use even if they recognize people more accurately. Deborah Raji says the bias in the datasets she has studied runs deep. “The more I became engaged in this work, I realized it’s not just representation that is poor, but it’s also the labels we are assigned. I’ve seen attributes like a big nose or big lips in a dataset marked as ‘criminal’ or ‘failure’,” says Raji.

Technology developers inherit the labels and assumptions passed on by the gatekeepers of data, and these often prop up harmful and extractive practices that criminalize and exploit racialized communities.
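
One practical response is to audit inherited labels before reusing a dataset. Below is a minimal sketch, assuming a hypothetical annotation file and a review-derived list of flagged labels; real audits depend on extensive human review of where labels came from and what they assert, not a simple blocklist.

```python
# Minimal sketch of a dataset label audit. The annotations and the
# flagged-label list are hypothetical; real reviews work through
# thousands of categories with human judgment.

# Hypothetical annotations inherited from an upstream dataset.
annotations = {
    "img_001.jpg": ["person", "criminal"],
    "img_002.jpg": ["person", "teacher"],
    "img_003.jpg": ["person", "failure"],
}

# Labels a review process has flagged as derogatory or unverifiable.
flagged_labels = {"criminal", "failure"}

# Surface every image whose labels include a flagged category so a
# human reviewer can decide whether to relabel or remove it.
for image_id, labels in annotations.items():
    harmful = flagged_labels.intersection(labels)
    if harmful:
        print(f"{image_id}: flagged labels {sorted(harmful)}")
```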

Beyond inclusion

Racism and discrimination exist in every part of the world, including in unique variations that reflect local histories. Wherever racism finds expression, it is most visible to those who suffer the consequences and too easily deniable by those who don’t. Worse, most histories of discrimination, as well as histories of colonialism, are wrapped in narratives of progress, civilization and enlightenment. In recent years, the notion that the internet itself should be “decolonized” has gained traction as a critique of how vastly dominant companies control the data, the communication systems, the computational power, and the information ecosystem of billions of people worldwide.

Understanding how easily biased and discriminatory data makes its way into algorithmic systems is important, but it is even more urgent to question whether such technologies should be developed in the first place and by whom.

For technology to serve racial justice, developers need to ask fundamentally different questions that interrogate power holders (and their data), going beyond merely seeking representation of marginalized people. Big tech companies that benefit from the status quo have little incentive to question systems of power—particularly their own power—let alone transfer power to those most deeply impacted by the technologies they develop and the places where they are deployed.

In December 2020, the renowned AI ethics researcher Timnit Gebru was ousted from Google following a disagreement with leadership over a research paper that cast doubt on the ethics of Google’s use of unauditably large amounts of text from the internet (including racist and sexist sources) to train AI capable of writing new prose. Among other things, Gebru and her co-authors argued that AI-generated text will overwhelmingly reflect the most advantaged languages, countries and communities. Twelve years after Blackbird experienced a backlash for proposing a curated alternative to the “default” experience of search, there is still resistance to hearing critiques that the internet isn’t colorblind.
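
The skew the paper describes is straightforward to demonstrate. In the minimal sketch below, with hypothetical document counts, training text sampled in proportion to what is available online inherits the imbalance of the source corpus, and so does anything generated from it.

```python
# Minimal sketch of how a skewed corpus reproduces its skew when sampled.
# The document counts are hypothetical; web-scale corpora are in practice
# dominated by English and by a handful of well-represented communities.

import random

# Hypothetical corpus composition: documents available per language.
corpus = {"English": 920_000, "Spanish": 72_000, "Amharic": 800}

# Sample "training data" in proportion to availability.
sample = random.choices(list(corpus), weights=list(corpus.values()), k=10_000)

# The sample mirrors the corpus: under-resourced languages all but vanish,
# so text generated by a model trained this way reflects the dominant ones.
for lang in corpus:
    print(f"{lang}: {sample.count(lang) / len(sample):.1%} of sample")
```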

These days, the internet is no longer something you simply access through a browser. Internet-enabled and data-driven technologies are everywhere. This makes it ever more urgent to decode the “defaults,” now that technologies beyond our control influence things like job and loan opportunities, immigration status, health care options, and more. “Tech exacerbates a cycle of disadvantage,” writes Seeta Peña Gangadharan of the research and organizing coalition Our Data Bodies, emphasizing how few rights we have to refuse technologies that are used for social control in different contexts.

Today, Gebru has vocal allies who are demanding answers from Google. Digital rights advocates in Europe have begun to step back from notions of generic “users” and instead consider a variety of specific lenses. Design justice advocates are developing new community-driven processes that more fairly allocate the benefits of design. The internet can help tip the balance toward racial justice through the efforts of many people to rethink systems, question powerful institutions, and develop community-focused alternatives. The right to choose and the right to refuse, individually, collectively, and especially in the context of design, are key to unlocking new opportunities for true equity.

* The words ‘Black’ and ‘White’ are capitalized in this article in accordance with the style guide of the National Association of Black Journalists, which was updated in June 2020.