NSF/Amazon Grant Supports Research at NYU to Help Cities Detect and Reduce Biases in Algorithmically Supported Decision-making
Three-year project will develop ways to bring about equitable policy impacts on city inspections, policing, courts, and other public sector domains.
A team of researchers at New York University will develop new methods and tools aimed at minimizing systemic biases and producing more equitable public policy impacts on such areas as city housing inspections, policing, and courts.
Under a $1 million grant from the National Science Foundation (NSF) and Amazon, Computer Science Professor Daniel B. Neill will lead the three-year research project centered on the growing use of Artificial Intelligence (AI) by urban public sector organizations—work that will include the creation of open source tools for assessing and correcting biases.
“Human decisions and algorithmic decisions have potential for systematic biases that may lead to poor downstream outcomes such as disparities and inequity across racial, gender, and socioeconomic lines,” said Neill, a professor at NYU’s Wagner Graduate School of Public Service and the Center for Urban Science and Progress (CUSP) at NYU’s Tandon School of Engineering. “What we want to understand is how algorithms can enhance human decision making by eliminating implicit biases, and to develop methods and tools to assist those designing and implementing policy interventions in cities.”
In examining both the risks and the benefits of algorithmic decision-making, the project team will first develop a new, pipelined conceptualization of fairness consisting of seven distinct stages: data, models, predictions, recommendations, decisions, impacts, and outcomes. This “end-to-end fairness pipeline” will account for multiple sources of bias, model how biases propagate through the pipeline to produce inequitable outcomes, and assess sensitivity to unmeasured biases.
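To make the pipeline idea concrete, the following is a minimal sketch in Python of how the seven stages might be modeled, with bias introduced at one stage assumed to carry forward to every later stage. It is an illustration only; the class, method names, and propagation rule are hypothetical, not taken from the project's software.

```python
from dataclasses import dataclass, field

# The seven stages named in the project's fairness pipeline.
STAGES = [
    "data", "models", "predictions", "recommendations",
    "decisions", "impacts", "outcomes",
]

@dataclass
class FairnessPipeline:
    """Toy model of an end-to-end fairness pipeline (hypothetical).

    Assumption: a bias introduced at one stage remains visible at
    every later stage, mirroring the propagation idea above.
    """
    biases: dict[str, list[str]] = field(
        default_factory=lambda: {s: [] for s in STAGES}
    )

    def record_bias(self, stage: str, description: str) -> None:
        if stage not in self.biases:
            raise ValueError(f"unknown stage: {stage}")
        self.biases[stage].append(description)

    def visible_at(self, stage: str) -> list[str]:
        """All biases recorded at `stage` or at any earlier stage."""
        idx = STAGES.index(stage)
        return [b for s in STAGES[: idx + 1] for b in self.biases[s]]

# Example: a sampling bias introduced in the data stage is still
# present when decisions are made four stages later.
pipeline = FairnessPipeline()
pipeline.record_bias("data", "complaints over-reported in some neighborhoods")
print(pipeline.visible_at("decisions"))
```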
Second, the team will build a general methodological framework for identifying and correcting biases at each stage of the pipeline, a kind of bias scan, along with algorithmic decision support tools that provide recommendations to a human decision-maker (such as algorithmic “nudges” to guide human decisions toward fairness).
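A bias scan of the kind described above could, in its simplest form, compare decision rates across groups and flag large deviations. The sketch below is a deliberately simplified stand-in: published scan methods (such as subset scanning) search over combinations of attributes rather than a single grouping column, and the column names and threshold here are hypothetical.

```python
import pandas as pd

def bias_scan(df: pd.DataFrame, group_col: str, decision_col: str,
              threshold: float = 0.05) -> pd.DataFrame:
    """Flag groups whose favorable-decision rate deviates from the
    overall rate by more than `threshold` (a toy one-attribute scan)."""
    overall = df[decision_col].mean()
    gaps = (df.groupby(group_col)[decision_col].mean() - overall).rename("gap")
    return gaps[gaps.abs() > threshold].to_frame()

# Hypothetical inspection records: neighborhood A is inspected far more
# often than the overall rate, C far less; both would be flagged.
records = pd.DataFrame({
    "neighborhood": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "inspected":    [1,   1,   0,   0,   1,   0,   0,   0],
})
print(bias_scan(records, "neighborhood", "inspected", threshold=0.10))
```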
Finally, the project team will create new metrics for measuring the presence and extent of bias in the criminal justice and housing domains, as well as tools that can be used to: (a) reduce incarceration by equitably providing supportive interventions to justice-involved populations; (b) prioritize housing inspections and repairs; (c) assess and improve the fairness of civil and criminal court proceedings; and (d) analyze the disparate health impacts of adverse environmental exposures, including poor-quality housing and aggressive, unfair policing practices.
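The announcement does not specify these new metrics, but a standard existing measure gives a sense of what such metrics quantify: the disparate impact ratio compares favorable-outcome rates between a group and a reference group, with the conventional “four-fifths rule” flagging ratios below 0.8. The sketch below illustrates that established metric, not the project's own.

```python
def disparate_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of favorable-outcome rates between a group and a reference
    group; the conventional four-fifths rule flags ratios below 0.8."""
    if reference_rate == 0:
        raise ValueError("reference rate must be nonzero")
    return group_rate / reference_rate

# Example: 30% of one group vs. 50% of the reference group receive
# timely housing repairs -> ratio 0.6, below the 0.8 rule of thumb.
print(disparate_impact_ratio(0.30, 0.50))
```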
“The ultimate impact of this work is to advance social justice for those who live in cities and who rely on city services or are involved with the justice system, by assessing and mitigating biases in decision-making processes and reducing disparities,” said Neill, director of NYU’s Learning for Good Laboratory and a faculty member at NYU’s Courant Institute of Mathematical Sciences.
In addition to Neill, the research team includes Ravi Shroff, an assistant professor at CUSP and NYU’s Steinhardt School of Culture, Education, and Human Development; Constantine Kontokosta, a professor at NYU’s Marron Institute of Urban Management and associated faculty at NYU Tandon; and Edward McFowland III, a professor at the University of Minnesota’s Carlson School of Management.
The grant was made under the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon (award 2040898).