U.S. Expands AI Model Testing to Boost Cybersecurity
The U.S. government has extended its program to assess unreleased AI models, including those from Google’s DeepMind, xAI, and Microsoft, to mitigate risks such as cyberattacks. OpenAI and Anthropic were already cooperating voluntarily. The initiative aims to curb misuse of AI in cyber warfare and biosecurity threats.
The Trump administration announced on Tuesday an expanded initiative enabling U.S. government scientists to evaluate unreleased artificial intelligence models. The program now includes major players such as Google’s DeepMind, xAI, and Microsoft.
OpenAI and Anthropic had already partnered with the U.S. Center for AI Standards and Innovation to assess vulnerabilities in AI models. The focus is on risks such as potential cyberattacks on American infrastructure and misuse of AI in chemical or biological weapon development.
Under the expanded collaboration, OpenAI's GPT-5.5-Cyber model will focus on cybersecurity defense, while Microsoft and Anthropic will help create datasets and analyze AI models for flaws. The initiative seeks to establish guidelines that protect critical infrastructure in sectors such as communications and emergency services from AI-enabled threats.