World leaders, top AI companies set out plan for safety testing of frontier AI as first global AI Safety Summit concludes

Countries and companies developing frontier AI have agreed a ground-breaking plan on AI safety testing, as Prime Minister Rishi Sunak brought the world’s first AI Safety Summit to a close today (Thursday 2 November).

In a statement on testing, governments and AI companies have recognised that both parties have a crucial role to play in testing the next generation of AI models, to ensure AI safety – both before and after models are deployed.

This includes collaborating on testing the next generation of AI models against a range of potentially harmful capabilities, including critical national security, safety and societal harms.

They have agreed that governments have a role in seeing that external safety testing of frontier AI models occurs, marking a shift away from leaving responsibility for determining the safety of frontier AI models solely with the companies themselves.

Governments also reached a shared ambition to invest in public sector capacity for testing and other safety research; to share outcomes of evaluations with other countries, where relevant, and to work towards developing, in due course, shared standards in this area – laying the groundwork for future international progress on AI safety in years to come.

The statement builds on the Bletchley Declaration agreed by all countries attending on the first day of the AI Safety Summit. It is one of several significant steps towards a global approach to safe, responsible AI achieved at the Summit, alongside the UK’s trailblazing launch of a new AI Safety Institute.

The countries represented at Bletchley have also agreed to support Professor Yoshua Bengio, a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board, to lead the first-ever frontier AI ‘State of the Science’ report. This will provide a scientific assessment of existing research on the risks and capabilities of frontier AI and set out the priority areas for further research to inform future work on AI safety.

The findings of the report will support future AI Safety Summits, plans for which have already been set in motion. The Republic of Korea has agreed to co-host a mini virtual summit on AI in the next 6 months. France will then host the next in-person Summit a year from now.

Prime Minister Rishi Sunak said:

Until now, the only people testing the safety of new AI models have been the very companies developing them. We shouldn’t rely on them to mark their own homework, as many of them agree.

Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released.

The UK’s AI Safety Institute will play a vital role in leading this work, in partnership with countries around the world.

Secretary of State for Science, Innovation and Technology Michelle Donelan said:

The steps we have agreed to take over the last two days will help humanity seize the opportunities for improved healthcare, better productivity at work, and the creation of entire new industries that safe and responsible AI is set to unlock.

Ensuring AI works for the good of us all is a global endeavour, but I am proud of the singular role the UK has played in bringing governments, businesses and thinkers together to agree concrete steps forward for a safer future.

Yoshua Bengio said:

The safe and responsible development of AI is an issue which concerns every one of us. We have seen massive investment into improving AI capabilities, but not nearly enough investment into protecting the public, whether in terms of AI safety research or in terms of governance to make sure that AI is developed for the benefit of all.

I am pleased to support the much-needed international coordination of managing AI safety, by working with colleagues from around the world to present the very latest evidence on this vitally important issue.

The UK has already taken a lead in these efforts by launching the AI Safety Institute, to build public sector capability to conduct safety testing and to conduct AI safety research.

The ‘State of the Science’ report, to be led by Turing Award-winning Professor Yoshua Bengio alongside a group of leading academics from around the world, will help AI policymakers in the UK and internationally to keep abreast of the rapid pace of change in AI.

As the most-cited computer scientist in the world, the founder of the internationally renowned Mila – Quebec AI Institute, and an advisor to both the UK government and the UN, Professor Bengio is uniquely placed to lead this work.

The foundations laid at Bletchley Park over the past 2 days will be critical in ensuring AI’s enormous potential can be harnessed, safely and responsibly, to unlock a gear-change in what’s possible in terms of economic productivity, healthcare, education and more.

Country support for the AI Safety Summit
Deputy Prime Minister of Australia Richard Marles said:

Australia welcomes a secure-by-design approach where developers take responsibility. Voluntary commitments are good but will not be meaningful without more accountability. Australia is pleased to partner with the UK on this important work.

Canadian Minister of Innovation, Science and Industry the Honourable François-Philippe Champagne said:

Canada welcomes the launch of the UK’s AI Safety Institute. Our government looks forward to working with the UK and leveraging the exceptional Canadian AI knowledge and expertise, including the knowledge developed by our AI institutes to support the safe and responsible development of AI.

President of the European Commission Ursula von der Leyen said:

At the dawn of the intelligent machine age, the huge benefits of AI can be reaped only if we also have guardrails against its risks. The greater the AI capability, the greater the responsibility. A credible international governance should be built on 4 pillars: a well-resourced and independent scientific community; widely accepted testing procedures and standards; the investigation of every significant incident caused by errors or misuse of AI; and a system of alerts fed by trusted flaggers. It’s time to act.

The French Government said:

French authorities will participate in this initiative by mobilizing the stakeholders and resources already active on AI safety, in particular Digital Europe’s Testing and Experimentation Facilities for AI partners and the French Confiance.ai program.

The German Government said:

Germany notes with interest the foundation of the AI Safety Institute and looks forward to exploring possibilities for cooperation.

Prime Minister of Italy Giorgia Meloni said:

Artificial intelligence is entering every domain of our lives. It is our responsibility, today, to steer its ethical development and ensure its full alignment with humankind’s freedom, control and prosperity. We need to develop the practical application of the concept of ‘Algor-ethics’, that is, ethics for algorithms.

The Japanese Government said:

The Japanese Government appreciates the UK’s leadership in holding the AI Safety Summit and welcomes the UK initiative to establish the UK AI Safety Institute. We look forward to working with the UK and other partners on AI safety issues toward achieving safe, secure, and trustworthy AI.

Singaporean Minister for Communications and Information Josephine Teo said:

The rapid acceleration of AI investment, deployment and capabilities will bring enormous opportunities for productivity and public good. We believe that governments have an obligation to ensure that AI is deployed safely. We agree with the principle that governments should develop capabilities to test the safety of frontier AI systems.

Following the MoUs on Emerging Technologies and Data Cooperation signed by Singapore and the UK earlier this year, we have agreed to collaborate directly with the UK to build capabilities and tools for evaluating frontier AI models. This will involve a partnership between Singapore’s Infocomm Media Development Authority and the UK’s new AI Safety Institute. The objective is to build a shared understanding of the risks posed by frontier AI. We look forward to working together with the UK to build shared technical and research expertise to meet this goal.

U.S. Secretary of Commerce Gina Raimondo said:

I welcome the United Kingdom’s announcement to establish an AI Safety Institute, which will work in lockstep with the U.S. AI Safety Institute to ensure the safe, secure, and trustworthy development and use of advanced AI. AI is the defining technology of our generation, carrying both enormous potential and profound risk. Our coordinated efforts through these institutes are only the beginning of actions to facilitate the development of safety standards, build testing capabilities for advanced AI models, and expand information-sharing, research collaboration, interoperability, and policy alignment across the globe on AI safety.

Company support
Demis Hassabis, Co-founder & CEO of Google DeepMind said:

AI can help solve some of the most critical challenges of our time, from curing disease to addressing the climate crisis. But it will also present new challenges for the world and we must ensure the technology is built and deployed safely. Getting this right will take a collective effort from governments, industry and civil society to inform and develop robust safety tests and evaluations. I’m excited to see the UK launch the AI Safety Institute to accelerate progress on this vital work.

Dario Amodei, co-founder and CEO of Anthropic said:

While AI promises significant societal benefits, it also poses a range of potential harms. Critical to managing these risks is government capacity to measure and monitor the capability and safety characteristics of AI models. The AI Safety Institute is poised to play an important role in promoting independent evaluations across the spectrum of risks and advancing fundamental safety research. We welcome its establishment and look forward to partnering closely to advance safe and responsible AI.