University of York: Researchers to develop framework to determine legal and moral responsibilities for autonomous systems

The project brings together computer scientists, engineers, developers, lawyers, regulators and philosophers, as well as the general public, to establish who is responsible for the decisions behind Artificial Intelligence (AI).

The project will ask the question: when an autonomous system takes an action that affects you, how do we establish who is responsible for the action and its outcome?

The project, funded by UK Research and Innovation, will look to address this complex problem.

Trust

Dr Ibrahim Habli, Reader in the Department of Computer Science and Principal Investigator for the new project, said: “Currently, we have no clear or consistent answers to questions about who is responsible for the decisions taken by autonomous systems and for the impact those decisions have.

“The benefits of autonomous systems will only be harnessed if people have trust in the human processes around their design, development, and deployment. Clarity about who is responsible for the decisions and outcomes of autonomous systems, and when and why they are responsible, is critical to an ecosystem of trust in these new technologies.”

Assurance

The project, which runs for 30 months, will have three levels: conceptual, assurance, and practical.

The initial conceptual work will bring together philosophers and lawyers to clarify the fundamental concepts of responsibility, identify the agents involved, and determine where ‘responsibility gaps’ appear to arise and how they can be addressed. This is particularly important given the risk of ambiguity inherent in discussions about responsibility and the need to establish a common language for all stakeholders.

Research at the assurance level will adapt methods used in the technical assurance of high-risk systems to achieve confidence that responsibility for the systems can be traced and allocated.

Finally, the work will culminate at the practical level in a systematic, implementable methodology that enables stakeholders to show that the tracing and allocation of responsibility for a specific autonomous system is well-justified and complete.

Transparency

Zoë Porter, an ethicist in the Assuring Autonomy International Programme and co-investigator on the project, said: “Establishing who is responsible for the decisions and outcomes of autonomous systems is particularly difficult because when we replace a human with a decision-making machine, our traditional frameworks for attributing responsibility are disrupted.

“In addition, there are different kinds of responsibility – for example, causal, role, moral, and legal responsibility – and a complete answer to the question requires us to consider them all, and the relations between them.”

The resulting methodology will cover the design and development phases as well as deployment and the investigation of accidents and incidents. In partnership with clinical, engineering and regulatory collaborators and the general public, it will be evaluated through a series of real-world case studies.

“Importantly, this is about trust and transparency, not blame,” said Dr Habli. “It’s inevitable that we will see occasional accidents and incidents, and it’s essential that we learn from them. By using our methodology, we will be able to trace responsibility through the decisions that led to an incident. By investigating these incidents, we will continue to iteratively adapt our methodology, learning from experience and helping to ensure that this isn’t about blame but about transparency and trust for all stakeholders.”