University of Mannheim: Algorithms are accepted – but only if humans have the last word
When the Austrian employment agency AMS began using an algorithm at the end of 2020 to tailor job and further-training offers to the individual profiles of job seekers, it caused a public outcry in Austria. Many criticized the procedure because it was based on historical data and thus potentially disadvantaged people who had already been discriminated against on the job market in the past: women, for example, automatically received a point deduction, and mothers a further one. This in turn could have reduced their chances of participating in reintegration measures.
But the use of algorithms is widespread not only in the labor market, but also in banking, human resources and medicine, and it is the subject of controversial debate. The Mannheim data scientist Prof. Dr. Florian Keusch investigated this question in cooperation with Prof. Dr. Frauke Kreuter from Ludwig Maximilian University in Munich. Their study shows that decisions involving humans are judged to be fairer than those made by an algorithm alone.
“The results suggest that the use of algorithms without additional human control is viewed as particularly problematic,” states Keusch. “So it is not the use of algorithms per se that is controversial,” the Mannheim professor continued.
For their study, the researchers surveyed more than 4,000 people online as part of the German Internet Panel (GIP). Respondents were asked how fair and how acceptable they judged the use of AI-supported decisions in four different scenarios: the granting of a financial product, a job application, imprisonment, and measures for job seekers.
In all four areas, the use of AI is already a reality, at least in part. So-called automated decision making (ADM) is used by companies and state institutions to increase the efficiency of decision-making processes and to reduce the influence of decision-makers' personal attitudes. The task is often shared between human and machine: for example, if hundreds of candidates apply for a job, a computer program sorts the selection based on historical data, and the person responsible then makes the final decision. It is still a rarity for a machine to make a decision entirely on its own. But it is quite conceivable that certain processes will be completely automated in the future, say the study authors.
When it comes to acceptance, transparency is usually a big issue. Many algorithms resemble a black box, even for those who use them. One reason: some are purchased from external providers, so the decision-makers themselves do not know how the algorithm arrives at its result. "From a scientific and social point of view, it is of course desirable to know how the algorithm weights individual criteria," says Keusch. This is also an important prerequisite for its social acceptance.
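What "knowing how the algorithm weights individual criteria" could look like in practice is a scoring function that discloses each criterion's contribution to the final result. The criteria and weights below are illustrative assumptions, not taken from the study or from any real ADM system.

```python
# Hypothetical transparent scoring: fixed, disclosed weights per criterion,
# returning both the total score and a per-criterion breakdown.

WEIGHTS = {"qualification": 0.5, "experience": 0.3, "availability": 0.2}

def transparent_score(profile):
    """Return the total score and how each criterion contributed to it."""
    contributions = {k: WEIGHTS[k] * profile[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

total, breakdown = transparent_score(
    {"qualification": 0.8, "experience": 0.6, "availability": 1.0}
)
# `breakdown` maps each criterion to its weighted contribution,
# so an affected person can see exactly what drove the score.
```

Unlike an externally purchased black box, such a breakdown lets both decision-makers and those affected inspect the weighting, which is the transparency Keusch describes as a prerequisite for acceptance.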