ILO Warns AI in HRM Risks Reinforcing Inequality Without Stronger Safeguards

Devdiscourse News Desk | Geneva | Updated: 19-11-2025 13:25 IST | Created: 19-11-2025 13:25 IST

A new working paper from the International Labour Organization (ILO) has raised serious concerns about how artificial intelligence (AI) is being integrated into human resource management (HRM) systems across industries worldwide. The analysis finds that many AI tools used for hiring, scheduling, pay-setting and performance management are built on unclear objectives, incomplete data and opaque algorithms, posing significant risks to fairness, equality and decent work.

Titled AI in Human Resource Management: The Limits of Empiricism, the paper offers one of the most comprehensive examinations yet of the structural challenges associated with the rapid digitalization of HR functions. As companies increasingly adopt AI to streamline operations and reduce costs, the study warns that these systems often embed and reproduce existing inequalities rather than eliminating them.

The Problem: AI Systems Built on Flawed Assumptions

According to the ILO’s findings, many AI-driven HR systems are designed on the premise that quantification equals objectivity. This long-standing assumption within HR management has contributed to the uncritical adoption of digital tools that may not be appropriate for managing complex human interactions.

The paper highlights several key risks:

Unclear or Misaligned Objectives

Many AI systems lack well-defined goals, leading algorithms to optimize outcomes that may not align with decent work standards or ethical employment practices.

Biased or Incomplete Data

AI models are only as reliable as the data used to train them. When datasets reflect historic biases—gender, racial, socioeconomic or otherwise—the resulting systems can inadvertently discriminate against vulnerable groups.
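To see how this happens in practice, consider a minimal illustrative sketch (not drawn from the ILO paper, using entirely hypothetical hiring records): a naive screening model that scores candidates by their group's historical hire rate will simply learn and repeat the bias baked into that history.

```python
# Toy example: a "model" trained on biased past hiring decisions.
# The data is hypothetical; group "B" candidates were historically
# hired less often even when they were equally qualified.

# Past hiring records as (group, qualified, hired) tuples.
history = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

def hire_rate(group):
    """Naive scoring rule: a group's historical hire rate."""
    hired = sum(1 for g, _, h in history if g == group and h)
    total = sum(1 for g, _, _ in history if g == group)
    return hired / total

# Equally qualified candidates get different scores purely by group,
# so the historical bias is reproduced at scale.
print(hire_rate("A"))  # 0.8
print(hire_rate("B"))  # 0.4
```

The point of the sketch is that nothing in the algorithm is overtly discriminatory; the unfairness enters entirely through the training data, which is exactly the failure mode the ILO paper flags.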

Opaque Programming and Decision-Making

The “black box” nature of many AI tools makes it difficult for employers, auditors, and workers to understand how decisions are made, challenging transparency and accountability.

Reinforcement of Inequality

AI that draws on biased metrics (such as past hiring patterns) can reproduce discrimination at scale, further entrenching barriers faced by women, minority groups, persons with disabilities and migrant workers.

The ILO cautions that these structural flaws not only distort HR decision-making but also expose employers to legal, ethical and reputational risks.

“Without a Human-Centred Approach, AI Can Undermine Trust”

Janine Berg, Senior Economist in the ILO’s Research Department and one of the report’s lead authors, stressed the urgent need for organizations to rethink their approach to AI:

“Organizations often assume AI will improve efficiency or reduce bias, but these systems depend on the quality of their objectives and data. Without a human-centred approach, AI can inadvertently undermine fairness, transparency and trust in the workplace.”

Her warning echoes a growing body of global research showing that AI tools, when deployed without safeguards, can worsen labour market inequalities and erode worker confidence.

A Practical Framework for Responsible AI Adoption

To help organizations address these risks, the publication introduces a practical analytical framework that employers, HR professionals, policymakers and labour representatives can use to assess the appropriateness and ethical soundness of AI systems.

The framework emphasizes:

1. Worker Participation

Involving workers and their representatives in technology selection, testing and monitoring ensures that AI serves the interests of both employers and employees.

2. Governance and Oversight Mechanisms

Clear decision-making structures and accountability measures help prevent the misuse of AI and enable corrective action when issues arise.

3. Transparency and Explainability

Employees should be informed when AI is used in HR decisions and should have access to explanations and mechanisms to contest automated outcomes.

4. Respect for Fundamental Principles of Decent Work

AI must support—not undermine—rights related to equality, non-discrimination, social dialogue, privacy, data protection and freedom of association.

The paper argues that adopting AI without such guardrails can erode labour standards and contribute to unsafe or unfair workplace conditions.

Part of the ILO’s Broader Digital Transformation Agenda

The working paper contributes to the ILO’s expanding body of research on the future of work, digitalization and labour governance. As more countries explore AI regulation and companies adopt HR technologies, the ILO’s insights aim to support governments, employers and workers in crafting responsible, rights-based approaches to automation.

The analysis aligns with global discussions on algorithmic accountability, fair recruitment, data protection, and the ethical design of AI systems. It also reinforces the ILO’s message that social dialogue — involving governments, workers’ organizations and employers — is essential for shaping inclusive digital transitions.

A Call for Caution and Collective Action

The paper concludes that AI has significant potential to improve HR processes, increase efficiency, and expand access to employment. However, without careful design, continuous monitoring and strong participatory governance, AI risks embedding inequities deeper into the labour market.

The ILO urges policymakers, employers and HR professionals to:

  • Avoid overreliance on automated tools

  • Consider the social and ethical implications of AI

  • Prioritize transparency and worker rights

  • Strengthen regulatory frameworks and labour inspections

  • Foster stakeholder engagement at all stages of AI deployment

As workplaces become increasingly digitized, this research serves as a timely reminder: technology can enhance decent work only when grounded in human-centred principles and robust governance systems.
