Hidden risks of ‘evidence-based’ AI policy: Are we delaying critical regulations?

CO-EDP, VisionRI | Updated: 18-02-2025 10:36 IST | Created: 18-02-2025 10:36 IST

Artificial intelligence (AI) is advancing at an extraordinary pace, promising transformative benefits while also posing unprecedented risks. Governments worldwide are racing to craft regulatory frameworks to manage AI’s impact, yet they face a fundamental dilemma: how much evidence is enough before taking action? While rigorous, data-driven policymaking is essential, waiting for irrefutable proof before implementing AI regulations could lead to catastrophic consequences. Historically, industries have used the call for "more evidence" as a tactic to delay necessary interventions, and AI is no exception. The growing influence of AI in decision-making, from automated hiring to law enforcement, demands an urgent yet measured regulatory approach.

A recent study titled "Pitfalls of Evidence-Based AI Policy" by Stephen Casper, David Krueger, and Dylan Hadfield-Menell, published as a blog post at ICLR 2025, argues that an overreliance on evidence before enacting AI policies may lead to regulatory paralysis and allow risks to go unaddressed. The study critically examines historical cases, biases in evidence collection, and the role of vested interests in delaying necessary regulations.

The biases in AI evidence collection

One of the fundamental challenges in developing evidence-based AI policies is the bias in how evidence is gathered and presented. The study highlights several such biases, including selective disclosure, where AI developers publicize favorable findings while withholding evidence of risks. For instance, the study references how major tech companies have historically prioritized public relations over transparent risk assessments, making it difficult for regulators to gain a full picture of potential harms.

Furthermore, some AI risks are inherently difficult to measure, such as long-term societal impacts or ethical dilemmas that lack clear benchmarks. These hard-to-measure risks stand in contrast to easily quantifiable ones, such as demographic biases in facial recognition systems, which receive disproportionate attention simply because they are easier to measure. The study argues that an overreliance on measurable risks skews regulatory priorities and neglects complex but potentially severe dangers.
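To illustrate the measurability asymmetry the study describes, the following minimal Python sketch reduces one quantifiable risk, a demographic gap in facial-recognition match rates, to a single reportable number. The data and group labels are invented for illustration; no comparably simple calculation exists for diffuse, long-term societal harms.

```python
# Hypothetical sketch: why some AI risks are easy to quantify.
# A demographic-parity gap in match rates collapses to one number,
# unlike long-term societal impacts, which lack a clear benchmark.

from collections import defaultdict

# Invented per-group outcomes: 1 = correct match, 0 = false non-match.
results = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, outcome in results:
    totals[group] += 1
    hits[group] += outcome

# Per-group match rates and the gap between the best- and worst-served groups.
rates = {g: hits[g] / totals[g] for g in totals}
parity_gap = max(rates.values()) - min(rates.values())

print(f"Match rates by group: {rates}")
print(f"Demographic parity gap: {parity_gap:.2f}")  # one concrete, reportable figure
```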

The "deny and delay" strategy in AI regulation

The study draws historical parallels between AI policy debates and past regulatory battles over tobacco, fossil fuels, and climate change. The "deny and delay" strategy - where industry players push for extensive evidence before action is taken - has been used to stall regulations for decades. By insisting on high evidentiary standards, industry leaders can delay policies that might otherwise curb harmful AI applications.

Casper and colleagues warn that calls for "more research" often serve as a rhetorical tool for inaction rather than genuine scientific inquiry. AI companies, which have significant influence over research funding and public discourse, often downplay existential risks while highlighting AI’s economic benefits. The study suggests that regulators must be cautious of industry-driven narratives that call for excessive proof before taking action, as these arguments may be designed to protect commercial interests rather than public welfare.

Moving beyond evidence-dependent regulation

Rather than waiting for perfect evidence, the study advocates for a precautionary regulatory approach, which prioritizes proactive governance based on reasonable concerns rather than definitive proof of harm. This does not mean enacting overly restrictive policies but rather adopting process-based regulations that encourage transparency, accountability, and ongoing risk assessments.

For instance, regulations requiring AI developers to conduct internal and third-party risk assessments, report on model specifications, and document safety measures could help mitigate risks without stifling innovation. By implementing these measures, governments can build an AI governance ecosystem that continuously adapts to emerging challenges rather than being stuck in reactive mode.
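As a rough illustration of what such process-based requirements could look like in practice, here is a hypothetical Python sketch of a machine-readable disclosure covering a model's specification, the risk assessments performed, and the safety measures documented. The field names and example values are illustrative assumptions, not a format proposed by the study or any regulator.

```python
# Hypothetical sketch of a machine-readable developer disclosure that a
# process-based rule might require. All field names and values are invented.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class RiskAssessment:
    assessor: str          # "internal" or the name of a third party
    scope: str             # e.g. "bias", "misuse", "security"
    findings_summary: str

@dataclass
class ModelDisclosure:
    model_name: str
    intended_use: str
    safety_measures: list = field(default_factory=list)
    assessments: list = field(default_factory=list)

disclosure = ModelDisclosure(
    model_name="example-model-v1",
    intended_use="automated resume screening (illustrative)",
    safety_measures=["pre-deployment red-teaming", "incident reporting channel"],
    assessments=[RiskAssessment("third-party auditor", "bias",
                                "match-rate disparity observed across groups")],
)

# Serialize to JSON, the sort of standardized filing a regulator could ingest
# and track over time without dictating how the model itself is built.
print(json.dumps(asdict(disclosure), indent=2))
```

The design point is that such disclosures regulate process (what must be assessed, documented, and reported) rather than prescribing technical outcomes, which is the distinction the article draws between precautionary governance and overly restrictive rules.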

A call for smarter AI governance

The study concludes that the most effective AI policies will balance evidence-based decision-making with proactive safeguards. Policymakers should not let the absence of complete evidence become an excuse for inaction, especially in a field evolving as rapidly as AI. Instead, they should focus on building mechanisms that facilitate continuous monitoring, independent audits, and adaptable regulations.

As AI systems increasingly shape economies, governance, and social structures, waiting for irrefutable evidence of harm before acting could prove disastrous. The key takeaway from this research is clear: governments must prioritize AI policies that enable evidence collection while simultaneously addressing risks, ensuring that regulatory frameworks evolve in tandem with technological advancements. This study serves as an urgent reminder that in AI governance, caution must not come at the cost of responsibility.

  • FIRST PUBLISHED IN:
  • Devdiscourse