AI Models Under Scrutiny: Are They Ready for Europe's New Rules?
Several leading AI models fall short of European regulations in key areas such as cybersecurity resilience and discriminatory output. Swiss startup LatticeFlow has tested models from major tech firms against criteria drawn from the EU's AI Act. Non-compliance could expose companies to fines of up to 35 million euros or 7% of global annual turnover. The study highlights areas for improvement and marks an early step towards enforcing the bloc's AI rules.
Several prominent artificial intelligence models are falling short of European Union regulations in crucial areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.
The introduction of OpenAI's ChatGPT to the public in late 2022 triggered intense public discourse and spurred EU lawmakers to design specific regulations for 'general-purpose' AIs. In response, Swiss startup LatticeFlow, working in partnership with EU officials, has developed a tool to test generative AI models from major tech companies such as Meta and OpenAI, aligning its assessments with the EU's expansive AI Act, which is slated to phase in over the next two years.
LatticeFlow's 'Large Language Model (LLM) Checker' scores AI models on a scale from 0 to 1 across a range of categories, flagging shortcomings that companies must address to ensure compliance. Results published on Wednesday show that models from Alibaba, Anthropic, OpenAI, and others averaged 0.75 or better, yet still exhibit weaknesses that could jeopardise compliance with EU standards.
(With inputs from agencies.)