Global leaders agree to launch first international network of AI Safety Institutes to boost cooperation on AI safety
A new agreement between 10 countries plus the European Union, reached today (21 May) at the AI Seoul Summit, has committed nations to work together to launch an international network to advance the science of AI safety.
The “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will bring together the publicly backed institutions, similar to the UK’s AI Safety Institute, that have been created since the UK launched the world’s first at the inaugural AI Safety Summit – including those in the US, Japan and Singapore.
Working together, the network’s members will build “complementarity and interoperability” between their technical work and approaches to AI safety, to promote the safe, secure and trustworthy development of AI.
This will include sharing information about models, their limitations, capabilities and risks, as well as monitoring specific “AI harms and safety incidents” where they occur and sharing resources to advance global understanding of the science around AI safety.
This was agreed at the leaders’ session of the AI Seoul Summit, bringing together world leaders and leading AI companies to discuss AI safety, innovation and inclusivity.
As part of the talks, leaders signed up to the wider Seoul Declaration which cements the importance of enhanced international cooperation to develop AI that is “human-centric, trustworthy and responsible”, so that it can be used to solve the world’s biggest challenges, protect human rights, and bridge global digital divides.
They recognised the importance of a risk-based approach in governing AI to maximise the benefits and address the broad range of risks from AI, to ensure the safe, secure, and trustworthy design, development, deployment, and use of AI.
Prime Minister Rishi Sunak said:
AI is a hugely exciting technology – and the UK has led global efforts to deal with its potential, hosting the world’s first AI Safety Summit last year.
But to get the upside we must ensure it’s safe. That’s why I’m delighted we have got agreement today for a network of AI Safety Institutes.
Six months ago at Bletchley we launched the UK’s AI Safety Institute. The first of its kind. Numerous countries followed suit and now with this news of a network we can continue to make international progress on AI safety.
Technology Secretary Michelle Donelan said:
AI presents immense opportunities to transform our economy and solve our greatest challenges – but I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology.
Ever since we convened the world at Bletchley last year, the UK has spearheaded the global movement on AI safety and when I announced the world’s first AI Safety Institute, other nations followed this call to arms by establishing their own.
Capitalising on this leadership, collaboration with our overseas counterparts through a global network will be fundamental to making sure innovation in AI can continue with safety, security and trust at its core.
Deepening partnerships with AI safety institutes and similar organisations is an area of work the UK has already kickstarted through a landmark agreement with the United States earlier this year. The UK’s AI Safety Institute is the world’s first publicly backed organisation of its kind, with £100 million of initial funding. Since it was created, a number of other countries have launched their own AI Safety Institutes, including the US, Japan and Singapore, all of which have signed the commitments announced today.
Building on November’s Bletchley Declaration, the newly agreed statement recognises safety, innovation and inclusivity as interrelated goals, and advocates for embracing socio-cultural and linguistic diversity in AI models.
These agreements follow the freshly announced “Frontier AI Safety Commitments” from 16 AI technology companies, under which the leading AI developers will take input from governments and AI Safety Institutes when setting thresholds at which they would consider risks unmanageable. In a world first, the commitments have been signed by AI companies from around the world, including firms from the US, China, the Middle East and Europe.