AI perpetuates prejudices and inequalities reflected in its databases

Amid the rapid expansion of artificial intelligence (AI), concerns are growing that the technology will perpetuate and even intensify social disparities. AI systems are increasingly employed in crucial decision-making processes such as selecting job candidates, granting loans, setting court sentences and even making medical diagnoses. However, recently identified cases in which AI reproduced the dominant culture embedded in its databases, contributing to social stratification and accentuating inequality, raise questions about the impact of these algorithms.

The databases used to develop AI systems often reflect existing biases and inequalities, which end up being reproduced in the decisions those systems make. This could be seen recently when The Bulimia Project, an eating-disorder awareness group, tested AI image generators, including Dall-E 2, Stable Diffusion and Midjourney, to reveal how these programs imagine the “perfect” physique for women and men.

According to the results, 40% of the images showed blonde women, 30% showed women with brown eyes and more than 50% showed white skin, while almost 70% of the “perfect” men had brown hair and 23% brown eyes. As with the women, the vast majority of the men had white skin, and almost half had facial hair.

Many of the generated images also boasted almost cartoonish features, such as full lips, chiseled cheekbones and super-defined muscles, as well as wrinkle-free, poreless skin and perfect noses: all features highly coveted and imitated through plastic surgery and fillers.


But the problem of data preloaded with biased information, values and ideals has consequences across the most diverse sectors. Professor Moacir Ponti, from the Institute of Mathematical and Computing Sciences of São Carlos (ICMC) at USP, points out that the problem lies both in the development of AI systems by people who do not understand this potential for inequality and in their use by people who do not know how the systems were built.

The professor gives an example: “Algorithms for selecting job candidates are trained on previous CVs and therefore tend to favor certain profiles and marginalize others.” If the past hiring history is skewed, with men selected for senior positions such as director, manager, judge and superintendent, and women selected for positions such as secretary, nurse and chambermaid, “AI tends to automatically perpetuate these patterns and even intensify the disparity,” says Ponti.


Such an incident occurred at Amazon, which used an artificial intelligence tool to help its HR team hire professionals, automating the search for job candidates and performing a pre-selection. The system analyzed the resumes submitted, giving each a rating of one to five stars, in the same scheme used for the products sold in its online store.

The tool discriminated against female candidates in the selection of new employees because it was built on the patterns found in resumes sent to the company over the previous ten years. The vast majority of those resumes came from men, as in most of the technology industry, so the system came to treat male candidates as naturally more suitable for the vacancies.

A simple mention of the word “women” in a resume was penalized by the tool and reduced a candidate’s chances of getting the position, “not because the tool is sexist, but because it learned the wrong way,” explains Ponti.
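
To make the mechanism concrete, here is a minimal sketch in Python (not Amazon’s actual system): a toy resume screener trained on synthetic, historically biased hiring decisions. The resumes, labels and the token inspected are invented for illustration; the point is only that the bias lives in the historical outcomes the model learns from.

```python
# Hypothetical sketch: a toy resume screener that "learns the wrong way".
# The bias is in the historical labels, not in the algorithm itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic training data: 1 = hired, 0 = rejected, mimicking a history in
# which resumes mentioning "women's" activities were rarely selected.
resumes = [
    "captain of chess club, python developer",
    "software engineer, led backend team",
    "captain of women's chess club, python developer",
    "women's coding society organizer, software engineer",
    "backend developer, hackathon winner",
    "women's hackathon winner, backend developer",
]
hired = [1, 1, 0, 0, 1, 0]  # biased historical outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the model
# reproduces the historical pattern rather than any real notion of merit.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

On this toy data the weight for the token is negative: the same kind of penalty Ponti describes, produced purely by the skew in past decisions.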


Lívia Oliveira, a professor of Computer Science, says that racial injustices can arise from the use of artificial intelligence, especially in the setting of judicial sentences. She comments that AI tends to be much stricter with black people than with white people. “A judge, when entering the data of two people to calculate incarceration time, would see a much lower value assigned to the white person than to the black person. This racial bias contributes to the disproportionate incarceration of people of color.”
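
One way to surface the kind of disparity Lívia describes is a counterfactual audit: scoring two records that are identical except for the sensitive attribute. The sketch below is purely illustrative, with synthetic data and invented features, and does not model any real sentencing tool.

```python
# Hypothetical counterfactual audit of a toy risk model trained on synthetic,
# biased historical data (not any real sentencing system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
prior = rng.integers(0, 5, n)      # invented feature: prior offenses
age = rng.integers(18, 60, n)      # invented feature: age
race = rng.integers(0, 2, n)       # sensitive attribute (0 or 1)

# Synthetic "reoffended" labels skewed against race == 1, mimicking biased
# historical records; the model then absorbs that skew.
p = np.clip(0.2 + 0.1 * prior + 0.15 * race, 0, 1)
y = rng.random(n) < p

X = np.column_stack([prior, age, race])
model = LogisticRegression(max_iter=1000).fit(X, y)

# Counterfactual test: the same person, with only the sensitive attribute flipped.
defendant = np.array([[2, 30, 0]])
counterfactual = defendant.copy()
counterfactual[0, 2] = 1
print("predicted risk (race=0):", model.predict_proba(defendant)[0, 1])
print("predicted risk (race=1):", model.predict_proba(counterfactual)[0, 1])
```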

Lívia also relates training databases and social stratification to ChatGPT, explaining how AIs tend to reflect society’s dominant point of view. “ChatGPT, when asked who built the airplane, mentions the Wright brothers, while Brazilians would point to Santos Dumont, because the Wright brothers are figures from the United States, which holds the dominant point of view in this tool.”

She goes on to argue that this kind of uniformity of knowledge, shaped by whoever controls the AI, has the power to silence minority histories and conclusions, privileging the majority point of view.

Faced with these issues, both professors agree that programmers have an ethical and moral obligation, as they are responsible for shaping systems that can significantly impact society. “It is a maxim among computer professionals that all models are wrong and therefore must be evaluated, re-evaluated, tested and verified,” says Lívia.

For her, a professional who works ethically must test for false positives and false negatives, identifying errors and the effects of decisions based on algorithms. “Training an AI is not about running algorithms, but about understanding your data and the impact it can have, because understanding the data and proper training are crucial to responsible AI development.”
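
A concrete form of the check Lívia describes is to compare false positive and false negative rates across groups before a model is deployed. The sketch below is a hypothetical illustration; the predictions, labels and group names are placeholders.

```python
# Hypothetical fairness check: error rates broken down by group.
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Return {group: (false positive rate, false negative rate)}."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        fp = np.sum((p == 1) & (t == 0))
        fn = np.sum((p == 0) & (t == 1))
        tn = np.sum((p == 0) & (t == 0))
        tp = np.sum((p == 1) & (t == 1))
        fpr = fp / (fp + tn) if (fp + tn) else float("nan")
        fnr = fn / (fn + tp) if (fn + tp) else float("nan")
        rates[g] = (fpr, fnr)
    return rates

# Placeholder data: a gap in error rates between groups "A" and "B" is the
# signal to re-examine the data and the model before deployment.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(error_rates_by_group(y_true, y_pred, groups))
```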