Researchers Advocate for a Human-Centric Future in AI Development

Artificial intelligence (AI) is revolutionising the world. From finance and geopolitics to education and social interaction, AI developments are reshaping nearly every domain, and this state of constant change is challenging existing governance structures.

Dr. Joost Batenburg, Professor at the Institute of Advanced Computer Science and Programme Director for the Society, Artificial Intelligence, and Life Sciences (SAILS) programme, highlighted several specific challenges:

- The sheer speed of AI development (not even developers and researchers can keep up);
- AI's black-box nature: even its designers do not fully understand how it produces its output;
- A shifting balance of power between the public and private sectors, as those who own the data wield enormous power over society;
- A generalised erosion of trust, as it becomes increasingly hard to tell what is and is not real.

Professor Jan Aart Scholte, head of the Global Transformations and Governance Challenges (GTGC) programme, added that a key challenge for AI governance is its polycentric nature. Unlike traditional governance structures, AI cannot be governed by any single actor acting independently; cooperation is needed. This means a network of public- and private-sector organisations of very different sizes must act coherently and efficiently to create, and then enforce, new governance structures, which is itself a challenge.

However, polycentricity also offers opportunities, such as the huge range of information, insight, and experience available for policy development, and wider participation in democratic control.

The governance of AI also concerns specific uses of the technology. Dr. Hsini Huang, Assistant Professor at the Institute of Public Administration, raised an additional challenge to consider: algorithmic discretion, the autonomous decision-making power that AI exercises when used in public organisations. Public institutions handle sensitive data and are entrusted with public decisions and policy making; automating these processes with AI can therefore lead to problems, as not even developers fully understand how such decisions will be made.

Dr. Huang emphasised the need to keep humans in the loop to complement AI technologies, creating collaborative intelligence. She called for “meaningful human controls” that can harness the power of AI in delicate operations, so as to achieve responsible, safe, and trustworthy AI.

Overall, the panel agreed that the governance of AI remains an area of lively debate and emerging challenges. All three speakers underscored the importance of keeping humans in the loop when it comes to regulation, key decision making, and the handling of sensitive data. The future of AI must be human.