The agreement will see the UK’s new AI Safety Institute and its US counterpart collaborate to formulate a framework to test the safety of large language models. Credit: istock/FotografieLink

The US and the UK have signed an agreement to test the safety of the large language models (LLMs) that underpin generative AI systems.

The agreement — signed in Washington by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan on Monday — will see both countries align their scientific approaches and work closely to develop suites of evaluations for AI models, systems, and agents.

Work on developing frameworks to test the safety of LLMs, such as those built by OpenAI and Google, will be taken up immediately by the UK’s new AI Safety Institute (AISI) and its US counterpart, Raimondo said in a statement.

The agreement comes into force just months after the UK government hosted the global AI Safety Summit in November last year, where several countries, including the US and China, agreed to cooperate on AI safety. Those countries signed a declaration, dubbed the Bletchley Declaration, to form a common line of thinking on overseeing the evolution of AI and ensuring that the technology advances safely.

That declaration came after hundreds of AI researchers and industry leaders signed a statement in May last year warning that the evolution of AI could lead to an extinction event.

OpenAI, Meta, Nvidia to cooperate with AI Safety Institutes

The US has also taken steps to regulate AI systems and related LLMs. In October last year, the Biden administration issued a long-awaited executive order intended to keep AI development in check while also providing paths for the technology to grow.

Earlier this year, the US government created an AI safety advisory group that includes AI creators, users, and academics, with the goal of putting some guardrails on AI use and development. The advisory group, named the US AI Safety Institute Consortium (AISIC) and housed within the National Institute of Standards and Technology, was tasked with coming up with guidelines for red-teaming AI systems, evaluating AI capacity, managing risk, ensuring safety and security, and watermarking AI-generated content.
Several major technology firms, including OpenAI, Meta, Google, Microsoft, Amazon, Intel, and Nvidia, have joined the consortium to help ensure the safe development of AI.

Similarly, in the UK, firms such as OpenAI, Meta, and Microsoft have signed voluntary agreements to open up their latest generative AI models for review by the country’s AISI, which was set up at the UK AI Safety Summit.

The EU has also made strides in regulating AI systems. Last month, the European Parliament approved the world’s first comprehensive law governing AI. According to the final text, the regulation aims to promote the “uptake of human-centric and trustworthy AI, while ensuring a high level of protection for health, safety, fundamental rights, and environmental protection against harmful effects of artificial intelligence systems.”