Low-quality AI could crowd out better systems without strong standards


CO-EDP, VisionRI | Updated: 02-02-2026 09:12 IST | Created: 02-02-2026 09:12 IST
Representative Image. Credit: ChatGPT

AI tools are being adopted at scale, but users often lack reliable ways to distinguish high-quality systems from weaker alternatives. Researchers now warn that this uncertainty could undermine both trust and innovation in AI markets.

In "When Life Gives You AI, Will You Turn It Into a Market for Lemons? Understanding How Information Asymmetries About AI System Capabilities Affect Market Outcomes and Adoption," a new research paper, the authors examine how information gaps about AI performance shape user behavior and market efficiency.

How information gaps distort AI adoption decisions

The study identifies a major challenge in contemporary AI markets: information asymmetry. Developers and vendors typically have detailed knowledge about system performance, limitations, and training conditions, while end users must infer quality from limited signals such as branding, surface accuracy metrics, or prior experience. This imbalance mirrors conditions that historically led to market failure in other domains, such as used car markets.

Through a series of controlled experiments, the researchers simulate environments in which participants repeatedly choose whether to rely on AI systems of varying quality. Participants are rewarded based on decision accuracy, allowing the authors to observe how adoption behavior evolves under different disclosure regimes. The results show that users struggle to calibrate trust appropriately when system quality is uncertain.
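The paper's exact experimental protocol is not reproduced here, but a minimal sketch of this kind of repeated-delegation environment, with illustrative accuracy values and a simple feedback-based trust rule (all parameters are assumptions, not the study's), looks roughly like the following:

```python
import random

# Minimal sketch (not the authors' actual protocol): a participant repeatedly
# decides whether to delegate a task to an AI system of uncertain quality or
# answer alone, and is rewarded for correct decisions. The accuracy figures
# and the trust-update rule below are illustrative assumptions.

def simulate(ai_accuracy=0.85, human_accuracy=0.65, rounds=100,
             prior_trust=0.5, lr=0.1):
    trust = prior_trust          # belief that delegating to the AI pays off
    score = 0
    for _ in range(rounds):
        delegate = random.random() < trust        # probabilistic reliance on AI
        accuracy = ai_accuracy if delegate else human_accuracy
        correct = random.random() < accuracy
        score += int(correct)
        if delegate:
            # Simple feedback-based update; per the study, real participants
            # update trust far less from outcomes than a rule like this would.
            trust += lr * ((1.0 if correct else 0.0) - trust)
    return trust, score

if __name__ == "__main__":
    final_trust, total = simulate()
    print(f"final trust: {final_trust:.2f}, correct decisions: {total} of 100")
```

Under a rule like this, trust would converge toward the AI's observed accuracy; the study's point is that human participants do not behave this way.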

When low-quality AI systems are rare, users tend to underuse AI overall, missing potential gains from high-performing tools. Conversely, when low-quality systems are common, users often over-rely on AI despite repeated negative outcomes. This pattern persists even after extensive interaction, suggesting that experiential learning alone is insufficient to correct miscalibration.

The study finds that users rely heavily on prior beliefs and heuristics rather than updating trust based on performance feedback. This leads to systematic inefficiencies, where individuals either avoid beneficial automation or delegate excessively to unreliable systems. Importantly, these behaviors are not random errors but predictable responses to uncertainty.

Transparency helps, but only to a point

To test whether transparency can correct these distortions, the authors introduce varying levels of disclosure about AI system quality. In partial disclosure conditions, users are informed about average accuracy or performance ranges. Under full disclosure, users are given precise information about system quality.

Partial disclosure improves decision quality by helping users avoid the worst-performing systems. Participants become more selective, delegating tasks to AI only when expected benefits outweigh risks. However, the study finds that partial transparency does not significantly increase overall AI adoption. Instead, it shifts how and when AI is used, improving efficiency without encouraging indiscriminate reliance.
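One way to picture the shift is as a change in the information the delegation decision is based on. The sketch below (with made-up accuracy figures, not the paper's) treats partial disclosure as a performance range and full disclosure as an exact figure, and delegates only when the disclosed expectation beats the user's own accuracy:

```python
# Illustrative decision rule, not the study's model: delegate only when the
# AI's expected accuracy, as disclosed, exceeds the user's own accuracy.

def should_delegate(disclosed, own_accuracy=0.70):
    """disclosed is either an exact accuracy (full disclosure) or a
    (low, high) range (partial disclosure)."""
    if isinstance(disclosed, tuple):      # partial disclosure: use the midpoint
        expected = sum(disclosed) / 2
    else:                                 # full disclosure: exact figure
        expected = disclosed
    return expected > own_accuracy

print(should_delegate((0.55, 0.95)))   # partial disclosure, midpoint 0.75 -> True
print(should_delegate(0.62))           # full disclosure of a weak system -> False
```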

Surprisingly, even full disclosure does not resolve all inefficiencies. When users know exactly how capable an AI system is, many still underuse high-quality tools. The authors attribute this to behavioral factors such as aversion to automation, desire for control, and skepticism toward machine-generated advice. As a result, markets continue to experience lost efficiency despite perfect information.

These findings challenge the assumption that transparency alone is sufficient for responsible AI adoption. While disclosure reduces harm from low-quality systems, it does not guarantee optimal use of high-quality ones. The study suggests that cognitive and psychological factors play a central role in shaping human–AI interaction.

The risk of a market dominated by low-quality AI

AI markets are vulnerable to adverse selection. When users cannot confidently identify high-quality systems, developers of better-performing AI may struggle to differentiate themselves. Over time, incentives favor cheaper, lower-quality tools that can survive in a market shaped by miscalibrated trust.

This dynamic mirrors the classic lemons problem, where good products exit the market because buyers are unwilling to pay a premium without reliable quality signals. In the AI context, this could lead to widespread deployment of suboptimal systems, reduced innovation, and declining user trust.
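A toy version of that dynamic, in the spirit of Akerlof's lemons model and using made-up numbers purely for illustration, shows how the better vendor exits when buyers will only pay the average value of an unverifiable product:

```python
# Toy adverse-selection sketch applied to AI vendors. Vendor names, costs,
# and buyer valuations are hypothetical illustrations, not data from the paper.

vendors = [
    {"name": "HighQualityAI", "cost_to_offer": 80},  # needs at least 80 to stay in
    {"name": "LowQualityAI",  "cost_to_offer": 30},
]
buyer_value = {"HighQualityAI": 100, "LowQualityAI": 40}

# Without reliable quality signals, buyers treat vendors as equally likely
# and are only willing to pay the average value.
blind_price = sum(buyer_value.values()) / len(buyer_value)   # 70

remaining = [v["name"] for v in vendors if v["cost_to_offer"] <= blind_price]
print(f"price under uncertainty: {blind_price}")
print("vendors who remain:", remaining)
# Only LowQualityAI survives: the better system exits because buyers will not
# pay a premium they cannot verify.
```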

The authors argue that preventing such outcomes requires more than voluntary transparency. They call for enforceable, standardized disclosure mechanisms that communicate AI capabilities in ways users can meaningfully interpret. These disclosures should be designed with behavioral insights in mind, focusing on how users actually make decisions rather than how regulators assume they should.

The study also highlights the importance of market-level interventions. Certification schemes, benchmarking standards, and third-party audits could help reduce information asymmetry and support healthier competition. At the same time, user education and interface design must address overreliance and underreliance tendencies.

AI adoption is not simply a technical challenge but an economic and behavioral one. Without careful attention to information asymmetry and human decision-making, AI markets risk drifting toward inefficiency, mistrust, and diminished value.

  • FIRST PUBLISHED IN: Devdiscourse