University of Amsterdam: Internet search results fuel gender bias

Gender-neutral searches in Google – using, for example, ‘human’ or ‘person’ – produce results that are dominated by men. This fosters gender bias, to the extent that it can influence people’s decisions when recruiting new staff. That is the finding of new research by psychologists from the University of Amsterdam (UvA) and New York University (NYU), published on 12 July 2022 in the scientific journal Proceedings of the National Academy of Sciences (PNAS). It is one of a number of recent studies revealing how artificial intelligence (AI) can change our perceptions and actions.

‘There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,’ says Madalina Vlasceanu, an NYU postdoctoral fellow and lead author of the new study. ‘This can lead to an increase rather than a decrease in existing inequalities.’ Together with David Amodio, professor of Social Psychology at the UvA, Vlasceanu investigated whether the degree of gender inequality in a society is related to biases in algorithmic output (internet search results) and, if so, whether exposure to such outputs can influence people to act in accordance with these biases.

Differences between countries
The researchers first looked at whether the word ‘person’, which can refer to a woman just as easily as to a man, is more often assumed to refer to a man. To this end, they performed Google image searches for the word ‘person’ in 37 countries, in the dominant local language of each country. The proportion of male images in the search results was higher in countries with greater gender inequality than in countries with little to no gender inequality (based on the Global Gender Gap Index ranking). Amodio: ‘Algorithmic gender biases therefore seem to be related to societal gender inequality. When we repeated our study three months later with a sample of 52 countries, we saw the results confirmed.’
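To make this kind of cross-country analysis concrete, the minimal Python sketch below correlates the share of male images per country with that country’s Global Gender Gap Index score. All numbers are invented placeholders and the variable names are illustrative assumptions; they are not the study’s data.

```python
# Minimal sketch of the cross-country analysis, assuming the gender of the
# top Google image results for 'person' has already been coded per country.
# All values below are invented placeholders, not the study's data.
from scipy.stats import pearsonr

# Share of male images among the search results, one value per country.
male_share = [0.78, 0.71, 0.66, 0.60, 0.55, 0.52]
# Global Gender Gap Index score per country (higher = more gender equality).
gggi_score = [0.68, 0.70, 0.74, 0.78, 0.86, 0.89]

r, p = pearsonr(male_share, gggi_score)
# A negative r would mirror the reported pattern: more societal equality,
# fewer male-dominated search results.
print(f"r = {r:.2f}, p = {p:.3f}")
```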

Who is the peruker?
Vlasceanu and Amodio then conducted a series of experiments to investigate whether exposure to gender-biased search results can influence people’s perceptions and actions. The nearly 400 participants were told they would be shown Google image search results for four occupations they were unlikely to know: chandler (candle maker), draper (cloth merchant), peruker (wig maker), and lapidary (one who cuts and polishes stones). Before seeing the images, they were asked to judge which gender was more typical of each occupation (for example, ‘Who is more likely to be a peruker, a man or a woman?’). Both the female and the male participants believed that all four occupations were more likely to be practiced by a man than by a woman.

The images of each occupation that they were then shown were composed, in terms of the ratio of men to women, to be representative of either countries with high gender inequality scores (approximately 90% men versus 10% women, as in, for example, Hungary and Turkey) or countries with low gender inequality scores (around 50% men versus 50% women, as in, for example, Iceland and Finland). In this way, the researchers mimicked the results of internet searches in different countries. After seeing the images, the participants who had viewed the results with little to no gender bias revised their earlier male-skewed judgments. In contrast, the participants who had viewed the highly unequal image sets maintained their bias, reinforcing their perception of male prototypes.
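A hypothetical sketch of how such stimulus sets could be assembled follows: it samples image file names so that a set matches a target male share (about 90% for the high-inequality condition, about 50% for the low-inequality condition). The file names, function, and sampling scheme are illustrative assumptions, not the authors’ materials.

```python
# Illustrative sketch: assemble image sets with a fixed male:female ratio to
# mimic search results from high- vs. low-inequality countries. File names
# and the sampling scheme are assumptions, not the authors' materials.
import random

def build_stimulus_set(male_images, female_images, n=10, male_share=0.9):
    """Sample n images so that roughly male_share of them depict men."""
    n_male = round(n * male_share)
    chosen = random.sample(male_images, n_male) + \
             random.sample(female_images, n - n_male)
    random.shuffle(chosen)  # present the images in random order
    return chosen

males = [f"male_{i}.jpg" for i in range(20)]
females = [f"female_{i}.jpg" for i in range(20)]

high_inequality_set = build_stimulus_set(males, females, male_share=0.9)  # ~90/10
low_inequality_set = build_stimulus_set(males, females, male_share=0.5)   # ~50/50
```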

Ethical AI model
Finally, participants were asked to indicate how likely it was that a man or a woman would be hired for each of the occupations (‘Which type of person – male or female – is most likely to be hired as a peruker?’). In addition, after seeing images of two candidates – a woman and a man – they had to choose which one they would hire (‘Choose one of these two candidates for a job as a peruker’). Again, exposure to images with an equal representation of men and women led to more equitable assessments and a greater likelihood of the female applicant being selected. Biases picked up from internet searches therefore also affect hiring decisions and thus work to maintain, and even strengthen, social inequality between men and women.
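As an illustration of how the difference in hiring choices between the two exposure conditions might be tested, here is a hedged Python sketch using a two-sample proportions test; the counts are invented for illustration only and do not come from the study.

```python
# Hypothetical sketch: compare the rate of choosing the female candidate
# between the biased and balanced exposure conditions. The counts are
# invented for illustration, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

female_hired = [62, 98]   # female candidate chosen: biased vs. balanced condition
n_choices = [200, 200]    # total hiring choices per condition

z, p = proportions_ztest(count=female_hired, nobs=n_choices)
print(f"z = {z:.2f}, p = {p:.3f}")
```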

‘Our findings make it clear that we need an ethical AI model that combines human psychology with computational and sociological approaches to address the formation, functioning and mitigation of algorithmic biases,’ concludes Amodio.