A new UK-U.S. agreement on artificial-intelligence safety could buttress efforts to bring partner and allied nations into a broad agreement on AI risks. It follows Defense Department-led collaboration between the two countries on military AI ethics.
Signed on Monday, the memorandum of understanding commits the two countries to work together to design safety tests for popular AI tools, a development that could pave the way for regulation that shapes the future of the AI industry worldwide.
Industry leaders at November’s AI Safety Summit in the UK, which laid the groundwork for the deal, included Demis Hassabis of Google DeepMind, Elon Musk, and OpenAI’s Sam Altman, who has been particularly outspoken in his belief that governments should regulate the nascent AI industry more firmly to build consumer trust and ward off catastrophe.
The announcement comes as the Defense Department is also working to better understand the potential risks and rewards of generative AI models, both publicly available ones and those the military may build itself.
Under the agreement, the two countries’ AI safety institutes will work together to perform at least one joint test on a publicly available AI model and will “build a common approach to AI safety testing and to share their capabilities to ensure these risks can be tackled effectively,” the Commerce Department said in a statement on Monday.
The announcement builds on that summit, where 28 countries and the European Union agreed to “support an internationally inclusive network of scientific research on frontier AI safety.” Such agreements are, of course, easy to sign: the signatories included China, Israel, and Saudi Arabia, all of which have been criticized for their data collection and use practices.

The EU, by contrast, has been a leader on data protection. In March, the bloc adopted the AI Act, part of a wider policy package to “support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI,” the EU writes on its website.