What is algorithmic racism?

Artificial intelligence can also carry bias and harm the lives of black, Asian and Latinx people

Have you ever heard of algorithmic racism? Yes, technology can also act in a prejudiced way and harm people who are not white. This happens when racism is already incorporated into algorithms or automated decision-making systems.

Depending on how these algorithms and systems have been programmed to make choices, they can reinforce and perpetuate racial biases, leading to unfair treatment and unequal outcomes for Black or Asian people.

“I use the term 'algorithmic racism' to explain how sociotechnical technologies and imaginaries in a world shaped by white privilege strengthen the racialized ordering of knowledge, resources, space and violence to the detriment of non-white groups”, explains researcher Tarcízio Silva, author of the book “Algorithmic Racism: artificial intelligence and discrimination in digital networks”, in an interview with the CIEE-Fiocruz blog.

How can technology discriminate against people?

One example is facial recognition software that is less accurate at identifying people with darker skin tones. Joy Buolamwini, one of the most prominent activists against algorithmic racism, says that when she was doing her graduate studies at MIT, the facial analysis software she was working with did not detect her face.

That's because whoever wrote the algorithm's code didn't teach it to identify a wide range of skin tones and facial structures. “In another project, I had to wear a white mask so that my presence would be recognized,” she writes in an article for The New York Times.
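To make the idea concrete: disparities like this are usually measured by comparing detection rates across skin-tone groups. The sketch below illustrates that kind of audit in Python; it is not Buolamwini's actual methodology, and the group labels and results are invented purely for illustration.

```python
# Minimal sketch of a fairness audit for a face detector (hypothetical data).
from collections import defaultdict

def detection_rates(results):
    """results: (skin_tone_group, was_detected) pairs, one per image
    that is known to contain a face."""
    detected, total = defaultdict(int), defaultdict(int)
    for group, was_detected in results:
        total[group] += 1
        detected[group] += int(was_detected)
    return {group: detected[group] / total[group] for group in total}

# Invented audit results: the detector misses far more darker-skinned faces.
audit = [("lighter", True)] * 99 + [("lighter", False)] * 1 \
      + [("darker", True)] * 77 + [("darker", False)] * 23

print(detection_rates(audit))
# {'lighter': 0.99, 'darker': 0.77}  <- the gap between groups is the bias signal
```

An overall accuracy number can look high while hiding exactly this kind of gap, which is why audits report results per group.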

She explains that artificial intelligence systems are shaped by the priorities and prejudices (conscious or not) of their programmers, a phenomenon she calls the “coded gaze”. She cites other examples of algorithmic racism in the documentary “Coded Bias” (available on Netflix).

Other examples of algorithmic racism

Artificial intelligence can also be racist in other ways. Tarcízio Silva cites some examples, which he organizes and updates on a timeline.

Some of the most recent include smartwatches that cannot correctly measure blood oxygen levels in people of color, and black people who were downgraded by a social housing score in the United States.

By analyzing images generated by artificial intelligence (such as DALL-E 2), researchers from the University of Leipzig (in Germany) and the startup Hugging Face found that the tendency was to create images of white people or men. DALL-E 2 generated images of white men in 97% of the requests for “CEO” images, for example.
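A figure like that 97% comes from tallying how the generated images for a given prompt are labeled. The sketch below shows the arithmetic with invented annotations; it is not the Leipzig and Hugging Face researchers' actual code, and real audits label hundreds of generations per prompt, by hand or with a classifier.

```python
# Minimal sketch: share of demographic labels among images generated for one prompt.
from collections import Counter

def label_shares(annotations):
    """annotations: one perceived-demographic label per generated image."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Invented annotations for 100 images generated from a "CEO" prompt.
ceo_annotations = ["white man"] * 97 + ["white woman"] * 2 + ["black man"] * 1
print(label_shares(ceo_annotations))
# {'white man': 0.97, 'white woman': 0.02, 'black man': 0.01}
```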

Algorithmic racism has also been identified in the speech of chatbots, in systems that “predict” which people are more likely to have problems with the police or to reoffend, and even in applications that deny access to black and trans people.

An Amnesty International report shows that, in New York, the map of security cameras reveals heavier surveillance in neighborhoods where black, Latino and Asian people live, and that these areas also had higher rates of searches carried out by police officers.

This shows how algorithmic racism can have serious consequences for people's lives, especially black people's. Beyond the potential for unfair arrests and harassment, algorithms can perpetuate systemic inequalities, for example by screening a black candidate out of a hiring process or denying them a mortgage.

What is the cause of algorithmic racism?

One of the main drivers of algorithmic racism is the lack of diversity in the technology industry. Black people are underrepresented in this sector, so their experiences and particular needs are not taken into account in the development of algorithms, systems and applications.

Without this diversity, the chances increase that algorithms will perpetuate racial biases and discrimination.

Another issue is the lack of transparency in algorithmic decision-making. Without knowing the criteria that lead algorithms to make choices, which are not always open to the public, it is difficult to identify racism. Furthermore, there are still few laws in place to hold companies and government agencies accountable for biased algorithmic decisions.

According to experts, one of the ways to tackle algorithmic racism is to increase diversity in the technology industry and ensure that the development of algorithms is guided by principles of equity and social justice. 

Furthermore, there needs to be greater transparency about how algorithmic decision-making works, as well as clear rules and laws on how to avoid racial discrimination.

“I really like the idea of a public audit, with debates, tests, and the mapping of cases and impacts, opened up by society itself”, Tarcízio Silva explains to the CIEE-Fiocruz blog. “In this way, the impacts of algorithmic racism can be demonstrated and civil society can react.”
