Premium disparities exposed as EU AI Act tightens oversight on insurers

CO-EDP, VisionRI | Updated: 23-08-2025 22:55 IST | Created: 23-08-2025 22:55 IST

European insurers face mounting pressure to adapt their risk models as the EU AI Act reshapes the regulatory and operational landscape for artificial intelligence systems in life and health underwriting. A new study, titled “Algorithmic Bias Under the EU AI Act: Compliance Risk, Capital Strain, and Pricing Distortions in Life and Health Insurance Underwriting” and published in Risks, delivers a rigorous analysis of how fairness compliance is set to alter premium pricing, capital requirements, and enforcement risks across the industry.

Based on a proprietary dataset of 12.4 million quote, bind, and claim records from four pan-European insurers collected between 2019 and 2024, the researchers deploy advanced statistical and machine learning models to quantify pricing distortions and fairness gaps. Their findings reveal that the integration of fairness requirements is no longer a peripheral concern but a central operational challenge that insurers must navigate to stay compliant and competitive.

Pricing distortions and fairness breaches in underwriting

The study uncovers significant pricing disparities in existing underwriting systems. Premiums for low-income policyholders are elevated by 5.8% in life insurance and 7.2% in health insurance relative to actuarially fair benchmarks. These discrepancies, the authors note, are driven by proxy variables embedded in the machine learning models: features such as occupational codes and urban density indicators indirectly encode socio-economic status and amplify bias in risk assessments.
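
As a rough sketch of how such a gap can be measured (column names here are hypothetical; the study's data schema is not public), the mark-up over the fair benchmark can be averaged per income band:

```python
import pandas as pd

# Hypothetical schema: quoted premium vs. an actuarially fair benchmark per quote.
quotes = pd.DataFrame({
    "income_band":    ["low", "low", "high", "high"],
    "quoted_premium": [1058.0, 1072.0, 1000.0, 995.0],
    "fair_premium":   [1000.0, 1000.0, 1000.0, 1000.0],
})

# Relative mark-up of each quote over its fair benchmark, averaged by band.
quotes["markup"] = quotes["quoted_premium"] / quotes["fair_premium"] - 1.0
print(quotes.groupby("income_band")["markup"].mean())
```

A persistent positive mean mark-up for the low-income band, of the 5-7% order reported above, is the signature of the disparity the study documents.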

Advanced models like XGBoost, while delivering superior predictive accuracy compared to traditional generalized linear models (GLMs), exacerbate fairness breaches, roughly tripling disparities on key metrics such as equalized odds. This presents a critical compliance issue, as these algorithms, widely deployed in automated underwriting systems, risk running afoul of the high-risk classification framework mandated under the AI Act.
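
Equalized odds compares true-positive and false-positive rates across groups; a minimal check (not the authors' code) might look like this:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in TPR or FPR; 0 means parity."""
    gaps = []
    for label in (1, 0):  # label 1 gives the TPR gap, label 0 the FPR gap
        rates = [y_pred[(group == g) & (y_true == label)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy usage with random binary predictions on two groups.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1_000)
group  = rng.integers(0, 2, 1_000)
y_pred = rng.integers(0, 2, 1_000)
print(equalized_odds_gap(y_true, y_pred, group))
```

The study's finding is that swapping a GLM for XGBoost can roughly triple this gap even as headline accuracy improves.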

Under the new regulation, life and health underwriting models are explicitly categorized as high-risk applications, making them subject to stringent oversight, including requirements for transparency, explainability, and fairness auditing. Breaches can trigger fines of up to EUR 35 million or 7% of global turnover, whichever is higher, a financial exposure that compels insurers to rethink their reliance on opaque, accuracy-optimized systems.
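
The fine ceiling itself is simple arithmetic: the binding constraint is whichever of the two figures is larger.

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """AI Act ceiling: the greater of EUR 35 million or 7% of global turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For an insurer with EUR 2bn in turnover, the 7% arm binds: EUR 140m.
print(f"{max_fine_eur(2_000_000_000):,.0f}")
```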

Mitigation strategies and capital implications

The authors systematically evaluate three mitigation techniques: re-weighing, reject-option classification, and adversarial debiasing. They assess each technique's ability to close fairness gaps while preserving predictive performance. Among these, adversarial debiasing emerges as the most effective and capital-efficient strategy, reducing bias by up to 82% with only a 14 basis point increase in the Solvency Capital Requirement (SCR).
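
A minimal sketch of the adversarial-debiasing idea (illustrative only, not the authors' implementation): a predictor learns the underwriting target while an adversary tries to recover the protected attribute from the predictor's output, and a gradient-reversal layer pushes the predictor toward outputs the adversary cannot exploit.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # the predictor is pushed to *hide* the protected attribute

predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam([*predictor.parameters(), *adversary.parameters()], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

X = torch.randn(256, 10)                   # synthetic features
y = torch.randint(0, 2, (256, 1)).float()  # underwriting target
a = torch.randint(0, 2, (256, 1)).float()  # protected attribute

for _ in range(200):
    opt.zero_grad()
    logits = predictor(X)
    adv_logits = adversary(GradReverse.apply(logits))
    loss = bce(logits, y) + bce(adv_logits, a)  # task loss + adversary loss
    loss.backward()
    opt.step()
```

The adversary trains normally on its own loss, while the reversed gradient means that any signal the adversary can exploit is progressively removed from the predictor's output.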

By contrast, re-weighing, while effective in reducing disparities, imposes a heavier capital burden and greater operational complexity, making it less attractive from a risk-return perspective. The research emphasizes that the capital strain from fairness remediation remains manageable, adding at most 4.1% of own funds even under severe assumptions. This cost is significantly lower than the potential penalties and reputational damage associated with non-compliance.

The analysis also highlights that supervisory detection probabilities play a pivotal role in shaping the economic calculus for compliance. Once the probability of detection exceeds 8.9%, the expected cost of fines outweighs the expense of remediation. When incorporating dynamic learning models and iterative supervision, this threshold falls to 7.2%, reinforcing the financial prudence of proactive compliance.
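
The break-even logic is straightforward: remediation pays for itself once the detection probability times the expected fine exceeds the remediation bill.

```python
def breakeven_detection_prob(remediation_cost: float, expected_fine: float) -> float:
    """Detection probability above which expected fines exceed remediation cost."""
    return remediation_cost / expected_fine

# Illustrative figures only: a EUR 8.9m remediation bill against an expected
# EUR 100m fine reproduces the study's ~8.9% static threshold.
print(breakeven_detection_prob(8_900_000, 100_000_000))  # 0.089
```

The lower 7.2% dynamic threshold is consistent with repeated supervisory checks compounding the chance of detection over time.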

Governance, oversight and future outlook

The research underscores that the AI Act is more than a compliance challenge; it represents a structural shift in the governance of algorithmic decision-making in insurance. Boards and chief risk officers are urged to treat fairness as a prudential issue, embedding it within the core framework of risk governance rather than treating it as an ancillary concern.

Explainability and transparency are flagged as critical enablers of sustainable compliance. The authors recommend the development of explainable AI (XAI) models, enabling insurers to balance performance with regulatory accountability. Investments in internal capabilities, from upskilling teams to building robust monitoring infrastructures, are identified as essential for aligning technical systems with supervisory expectations.
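
As one plausible route (the paper recommends XAI in general rather than a specific toolkit), SHAP values can attribute each quote's score to individual features, supporting both explainability reviews and proxy-variable audits:

```python
import numpy as np
import shap
import xgboost as xgb

# Synthetic stand-in for an underwriting score model.
X = np.random.rand(500, 6)
y = (X[:, 0] + 0.5 * X[:, 3] > 0.9).astype(int)
model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Per-feature contribution to each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute contribution per feature: a starting point for spotting
# suspiciously influential proxy variables.
print(np.abs(shap_values).mean(axis=0))
```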

Cross-industry collaboration is also encouraged to accelerate the standardization of best practices. Sharing datasets, benchmarking mitigation techniques, and aligning fairness definitions across jurisdictions can help insurers streamline compliance efforts while minimizing operational inefficiencies.

Despite these contributions, the authors acknowledge key limitations. The analysis is constrained to four European carriers and applies fairness definitions centered on group-level metrics. Further research is needed to validate the findings across other markets, insurance lines, and fairness frameworks, and to explore modeling approaches that integrate fairness objectives directly into training algorithms.

First published in: Devdiscourse