Pontificia Universidad Católica de Chile: NYU’s Julia Stoyanovich warns: “The reality is that machines make mistakes”

The academic from the Center for Responsible Artificial Intelligence at New York University spoke at the UC Applied Ethics Lecture Series about the responsibility involved in designing, developing, and deploying equitable data systems.

Julia Stoyanovich, director of the Center for Responsible Artificial Intelligence at New York University (NYU), recalled in her presentation “Responsible Data Science” that artificial intelligence (AI) has been promoted as a way to improve quality of life, accelerate science, and foster innovation. However, the academic warned that “the reality is that machines make mistakes.”

Stoyanovich illustrated these AI errors with a correctable case, such as a customer-service system that misinterprets an order, but noted that they can escalate to situations in which “mistakes (in autonomous cars, for example) can cause catastrophic and irreversible damage, even the loss of human life.” The growing use of the technology reinforced the expert’s call not to underestimate these errors, which, beyond affecting a particular individual, can harm an entire sector of the population or even society as a whole.

“Mistakes (in autonomous cars, for example) can cause catastrophic and irreversible damage, even the loss of human life.” – Julia Stoyanovich, director of the NYU Center for Responsible Artificial Intelligence


Proof of this, according to Stoyanovich, are cases in which automated recruitment and hiring tools designed to increase workforce diversity have in practice operated in the opposite direction, generating discriminatory effects that “reinforce the results of historical disadvantages.”

These errors fundamentally originate in biases in the system, which oblige us to work with ethical responsibility toward “data fairness.” For Stoyanovich, this means treating people according to their abilities and needs, and focusing on the equity of results in its threefold dimension: fairness of representation, which asks whether the data accurately reflects the world; fairness of access, which means having the information needed to evaluate and mitigate inequality; and fairness of outcomes, which refers to the unforeseen consequences that lie beyond the direct control of the system and to the evaluation and mitigation of those inequalities.

“Technologists should be concerned with helping to build accountability-oriented systems and work to create regulatory mechanisms.” – Julia Stoyanovich, director of the NYU Center for Responsible Artificial Intelligence


As a way to reverse these biases, the academic warned that “merely technical solutions will never be enough,” and that the focus should be on proposing solutions based “on explicitly stated values and beliefs, which in turn arise from public conversation and social consensus.” This dimension was especially highlighted by Professor Marcelo Arenas, of the UC School of Engineering, in his commentary on the presentation.

Stoyanovich attributed a leading role to technologists in this phase: “In order to progress we have to get out of our engineering comfort zone.” Technologists, she said, should be concerned with helping to build accountability-oriented systems and with creating regulatory mechanisms, since, while “responsibility for decisions made by a system always rests with a person,” Stoyanovich believes that “we are all responsible for detecting and mitigating the injustices that lead to discrimination.”

The expert concluded that algorithms and artificial intelligence, as creations of the human spirit, “will be what we want them to be; it is up to us to choose the world in which we want to live.”
