U.S. Expands AI Model Testing to Boost Cybersecurity

The U.S. government has extended its program to assess unreleased AI models, including those from Google’s DeepMind, xAI, and Microsoft, to mitigate risks such as cyberattacks. OpenAI and Anthropic were already cooperating voluntarily. The initiative aims to curb the misuse of AI in cyberattacks and biosecurity threats.


The Trump administration announced on Tuesday that it is expanding its initiative enabling U.S. government scientists to evaluate unreleased artificial intelligence models. The program now includes major players such as Google’s DeepMind, xAI, and Microsoft.

OpenAI and Anthropic had already partnered with the U.S. Center for AI Standards and Innovation to assess vulnerabilities in AI models. The focus is on risks such as potential cyberattacks on American infrastructure and the misuse of AI in chemical or biological weapon development.

Under the expanded collaboration, OpenAI's GPT-5.5-Cyber model will focus on cybersecurity defense, while Microsoft and Anthropic will assist with creating datasets and analyzing AI models for flaws. The initiative seeks to establish guidelines that protect critical infrastructure in sectors such as communications and emergency services from AI-driven threats.
