Racism By The Gigabyte

Bryant James
7 min read · Sep 28, 2021

Artificial intelligence and other emerging tech harbor inequitable racial bias.

Full stop: racism exists in technology. Technologists can be racist, but so can software itself. And it makes sense. Racism is a pervasive scourge whose roots touch every facet of our global population. Of course its insidious reach has breached technology.

Artificial intelligence and machine learning in particular have a stealthy (and frequently unintentional) tendency toward racially biased outcomes.

What’s in a face?

That’s a pretty serious accusation, so it’s probably best we hop into some facts. An easy technology to start with is facial recognition. The computer algorithms that power facial recognition have an undeniable record of inequitable racial bias.

The most popular facial recognition providers (Microsoft, Face++, and IBM) boast an accuracy of over 90%. That seems pretty good, right? Let’s look deeper.

When looking at photographs of white men, classification has a success rate of approximately 97%. But what happens when we look at darker skin tones? Accuracy for photographs of black women drops to 65%. That’s a gap of 32 percentage points.

Study after study has found a marked discrepancy (read: failure) in these algorithms’ ability to accurately classify non-white faces.

And that’s just a cursory glance at the research! The harder we look, the more evidence of bias we’ll find in our technology. This cannot be ignored. By adopting a false sense of machine neutrality, we risk eroding the gains made by the civil rights and women’s rights movements.

Machines are not neutral by default. Software reflects the same inclinations, priorities, and prejudices as its creators.

An Error Rate Does Not Racism Make (Or Does It?)

“Although George Floyd’s death was the spark, there was an instantaneous recognition that what happened to him that day was the product of systemic racism and many years of thoughts, choices, and actions. Current policies, shaped by that history, must be subjected to scrutiny and critiqued. Plans for the future, based upon new understandings about how to achieve a more racially just society, must be formulated.”

This remarkable quote by historian Annette Gordon-Reed drills down to the core of the issue. The roots of racism lie in the processes and data by which we’ve (intentionally or otherwise) codified color biases, and in how we apply racial classifications to our legal, medical, social, and interpersonal decision-making.

Historically, racial classification has had a bizarre pairing with the sciences. During World War II, blood from black and white donors was kept segregated. The trial of O.J. Simpson revealed modern criminology’s heavy reliance on unfounded racial eugenics theory. That same pseudo-science has even been used as a basis for hiring and housing practices.

Gross, right? It hasn’t gotten any better.

What Does Racist Technology Look Like?

We can categorize technological racism into two distinct groupings. First, there is active racism: people, governments, and institutions weaponize technology with the clear goal of discriminating against a group of people. The second type is more insidious and systemic. It arises when ostensibly “neutral” software unintentionally produces biased results.

Let’s look at the latter. How can software generate biased results without anyone intentionally developing it to do so?

The answer is simple: data. Artificial intelligence and machine learning algorithms are trained on data, and where you source that data greatly impacts the results. When a large data set is culled from databases that were populated via biased or racist policies, that data set (and every algorithm trained on it) is tainted.

Simply put, biased data leads to biased results. Let’s see racist technology in practice.

The healthcare industry uses risk-assessment algorithms to determine when and how to allocate medical support to at-risk patients. Owing in part to long-standing wealth and transportation gaps in the black community, the data powering these algorithms was found to significantly under-represent black patients. Studies have found that this blind spot in the data leads to a 46% failure rate in identifying at-risk patients of color. Biased data, biased results.

Many police precincts use predictive policing software to forecast criminal hotspots and allocate police resources. Sounds pretty straightforward, right? Put more patrols in areas with higher crime rates. Wrong. Historically, police departments have over-patrolled non-white neighborhoods, cultivating a large repository of dirty data through illegal, unethical, and biased policing practices. That data, which over-represents black communities, then fuels an algorithm that instructs police departments to continue over-patrolling black communities. Sounds like a self-fulfilling prophecy, right?

And then we have the judicial system. It is common practice for judges to consult a tool called COMPAS during pretrial and sentencing. The software estimates a person’s likelihood of reoffending. Again, though, the data powering the algorithm is biased. During police questioning, a black person has a 500% greater chance of being detained than a white person in similar circumstances. With that kind of disparity feeding the model, it’s no wonder the software erroneously produces higher risk scores for non-white defendants.

Pretty. Fucking. Gross.

When Fair’s Not Fair.

We’re starting to get a little uncomfortable, right? Maybe we’re even starting to feel a little defensive. I get it. Machines don’t have emotions. They don’t have intentions. They’re objective. That’s true. Know what’s also true?

Machines will absolutely reflect the sentiments of their creators. Of course we didn’t intentionally build a racist system. But we still built it. We built it through inattention and inaction.

During the development of an artificial intelligence system, software engineers rarely consider the racial equity of the finished product. Often, the whole purpose of the AI is to reduce human accountability. It’s pretty easy to look at the finished product and feign ignorance. “Whelp, it’s the fourth one this hour, but if the machine says to send another police patrol to that black neighborhood, then by George we’re sending another patrol.”

We don’t stop to question the decisions these algorithms make. We don’t stop and ponder whether we’ve unintentionally codified racism into the very software designed to eliminate it.

Surprise, folks, we did!

Data-driven policing has led to over-representation of minority communities in law enforcement databases. The collection of biometric data during the immigration process has caused similar issues. And the list goes on.

We may have the best people with the best intentions working to fix the problem, but they’re fighting a losing battle. If the data used to fuel these predictive algorithms is not vetted and understood, we’re just codifying racism.

How Do We Fix The Damn Thing?

Ironically, the fix for our problem is conceptually trivial: we just need to stop using the bad data. Bad data leads to broken algorithms, and when racial equity is the goal, a broken algorithm is a racist algorithm.

Easier said than done. In order to right technology, we need to cleanse ourselves of humanity’s nastier traits. We need to:

Let go of willful ignorance. We, humanity, do not readily accept that racism exists. When acclaimed social psychologist Rebecca Hetey presented Americans with statistical proof that non-white offenders represent a disproportionately large percentage of prison populations, the masses were unmoved. Instead of discussing racial injustice and criminal justice reform, respondents became more emphatic in their support of punitive judicial policies.

Let go of the results-first mindset. What’s good for business is rarely good for society. Researcher Timnit Gebru was fired when her employer, Google, took issue with a research paper she’d authored on racist results produced by machine learning language models. When results and profits are at risk, attempts at social change are met with coercion and their proponents are buried.

Let go of inaction. Getting folks to acknowledge the issue of racial inequity is only half the battle. Spurring change and action is an entirely different ball game.

It was discovered in 2018 that Amazon’s Rekognition, a facial recognition service, frequently misclassifies female and non-white faces. Amazon continues to peddle the biased software to police departments across the country.

Google formed its Ethical AI Team to ensure its use of artificial intelligence did not violate ethical, social, and cultural principles. When members of the team published a paper describing unintentionally biased results in machine learning algorithms, Google responded by firing two of its top ethics researchers to bury the issue.

Twitter debuted a new cropping algorithm in 2018 that was found to favor white and female faces. Twitter promised a fix, but has thus far failed to deliver an unbiased algorithm.

Where Do We Start?

Change starts small. It starts with accountability. We must devote ourselves to change and stick to that conviction. We must hold ourselves, and the businesses we patronize, accountable. We must begin demolishing racism and inequitable bias in technology.

With a focus on racial literacy, we can mitigate racism’s impact on technology.

We must begin with an academic understanding of how racism in technology arises. We must have the empathy to grasp, on an emotional level, the impact these biased algorithms have on our communities. And we must have the commitment to reduce the harm that racist technology inflicts on our fellow humans.

Racism in technology exists.

Racism in technology is hidden, masked by the apparent neutrality of machines.

Together, we can end racism in technology, but we need to start now.

Follow me on Twitter or LinkedIn. Or catch more content on my website.

