University of Amsterdam’s New Research Lab Implements Responsible AI Practices

Video-AI is at the heart of many of the controversies around AI (Artificial Intelligence), from automated public surveillance to drones for military use. In the ‘HAVA-Lab’, part of the Data Science Centre, researchers from all 7 UvA faculties investigate how to align video-AI with human values and ethical principles. In the run-up to the launch on Data Science Day, 13 October, Prof. dr. Blanke and Prof. dr. Lindegaard share their thoughts on the importance and ambitions of the lab.

DSC HAVA-Lab (Human Aligned Video-AI)

The ‘HAVA-Lab’ aims to incorporate cognitive, ethical, and legal perspectives in the development of video-AI algorithms. What makes the lab special is that it addresses both technological challenges and social issues, from detecting crime without perpetuating unwanted bias to improving the quality and efficiency of the diagnostic training of dentists. Its interdisciplinary approach also makes the research lab unique.

Learn more about the Data Science Centre


Marie Rosenkrantz Lindegaard (Photo by Els Zweerink)

Increasing use of video-AI

‘Video-AI, also referred to as computer vision, is a field of scientific inquiry that aims to develop techniques for computers to automate tasks that the human visual system can also do, and maybe even tasks that we are incapable of doing. These tasks include processing, analysing, and understanding sequences of digital images (videos)’, explains Marie Rosenkrantz Lindegaard.

Marie Rosenkrantz Lindegaard is professor by special appointment of Dynamics of Crime and Violence. In her research she focuses on criminology, AI and the use of video data recorded with public cameras.

Challenges cannot be addressed by one discipline

‘The challenges surrounding the application of video-AI cannot be tackled from one discipline,’ says Prof. dr. Tobias Blanke, one of the co-PIs of the HAVA-Lab. Tobias: ‘Controversies demonstrate that video-AI cannot be done without an interdisciplinary commitment. It starts with societal, ethical and cognitive perspectives on video-AI. How does it relate to fundamental values and incorporate privacy? But also: where are the human decisions in the production processes that make video-AI? The human-machine relationship is often difficult to disentangle in AI productions. This is where traditional humanities and social science expertise comes in, and where some of the most exciting research in Humane AI is currently happening.’


Tobias Blanke is University Professor of Artificial Intelligence and Humanities. His principal research interests lie in AI and big data devices for research, particularly in the human sciences.

Prof. dr. Marie Rosenkrantz Lindegaard gives an example: ‘Let’s say that computer vision scientists would like to develop an algorithm that can detect robberies. They need to know what a robbery looks like before they develop the algorithm. Robberies do not happen as they are portrayed in movies. They are more complex than that.’

Unique project involving all 7 UvA faculties

Having all 7 faculties involved makes this project truly unique. ‘Video-AI is a hard enough technological challenge in itself, but trying to address societal and application questions at the same time is simply extraordinary’, says Tobias.

Marie adds: ‘Cees Snoek [Principal Investigator of the HAVA-Lab] and I started working on questions of behavioural detection 5-10 years ago because we are interested in the same thing: understanding human behaviour in videos. But we approached it entirely differently. The main challenge is language: across disciplines, we use different languages, and even the same word can have a different meaning. Bridging these differences can be challenging. We become enthusiastic about the same things, and that makes it fun. Maybe that is actually the most important thing when you work in complex teams.’