Workers face growing surveillance and power imbalance under the EU’s AI frameworks
The rapid integration of artificial intelligence into everyday work has outpaced Europe’s regulatory frameworks, leaving millions of workers exposed to intensified surveillance, opaque decision-making, and expanding algorithmic control. A new analysis warns that the European Union’s attempt to regulate AI in the workplace relies on optimistic assumptions about employer-worker alignment and fails to address the structural power imbalance created by algorithmic management.
The findings appear in the study “Regulating AI in the workplace: A critique of the EU AI Act and the Platform Work Directive through a worker-centred lens,” published in Platforms & Society. The study provides one of the clearest evaluations to date of how two major EU legislative instruments, the AI Act and the Platform Work Directive (PWD), address AI-driven management, and concludes that both fall short of safeguarding workers in an economy increasingly shaped by automated oversight systems.
Drawing on labour process theory, critical labour law, and emerging evidence from logistics, retail, platform work, and white-collar sectors, the author argues that AI systems in the workplace embed control mechanisms that intensify managerial power. The study warns that the EU’s current regulatory approach treats these technologies as neutral tools or consumer-like products, rather than technologies that fundamentally transform labour relations.
Algorithmic management as a structural shift in power
According to the author, AI systems now shape how tasks are allocated, how performance is evaluated, how pay is determined, and how disciplinary decisions are made across a wide range of sectors. These systems automate supervision, amplify productivity demands, and create new forms of oversight that far exceed traditional managerial control.
The study highlights that algorithmic management is not merely a technical upgrade but a structural reconfiguration of power. AI-enabled systems classify workers, measure their behaviour, and drive predictive modelling that influences employment outcomes in ways that are often inaccessible to the worker. The author notes that these technologies create a one-directional flow of information where employers gain unprecedented insight into workers’ actions while workers receive little to no transparency about how systems evaluate them.
This dynamic is most visible in gig economy platforms, where algorithms allocate tasks, monitor performance, determine deactivation, and incentivise constant availability. However, the study shows that similar systems are now embedded in traditional employment settings such as warehouses, retail stores, call centres, and corporate offices. The spread of algorithmic supervision subjects employees to increasingly granular forms of monitoring, often without clear legal safeguards.
The author identifies a core regulatory blind spot: EU policy frameworks assume that AI-driven decisions are primarily technical matters rather than tools of managerial control. This assumption leads to rules that emphasise transparency and documentation over power redistribution. In practice, workers are expected to challenge decisions made by systems they cannot fully access or understand, placing the burden of accountability on those with the least control.
Weak protections in the EU AI Act leave workers exposed
The author argues that while the Act labels many workplace AI systems as “high-risk,” the protections it creates are not sufficiently robust to counter the structural risks associated with algorithmic management.
One of the study’s key findings is that the AI Act allows employers to continue self-assessing their own high-risk AI systems. This approach, according to the author, gives companies broad discretion in determining whether workplace AI meets safety, transparency, and risk-mitigation standards. Without independent oversight, workers effectively rely on employers to police themselves.
Another limitation highlighted in the study is the Act’s narrow framing of workers’ rights. Workers receive only basic notification that AI is being used and a restricted individual right to explanation. While these rights appear protective on the surface, the study argues that they fail to capture the collective nature of algorithmic harm. Algorithmic management affects groups of workers simultaneously, influencing systemic issues such as workload distribution, performance benchmarks, discrimination, or precarity. The AI Act’s emphasis on individual remedies prevents workers from collectively contesting or negotiating the deployment of such systems.
The author also stresses the absence of labour inspectorates and worker representatives in the Act’s enforcement mechanisms. Instead, the regulation relies heavily on technical standards bodies, dominated by industry actors, to develop guidelines for implementation. This model sidelines labour institutions and reinforces the perception that AI’s impact on work is an engineering problem rather than a matter of workplace democracy.
The Act further assumes that employers and workers share a common interest in implementing safe and high-quality AI systems. The study argues that this assumption ignores the conflict inherent in algorithmic management, where employers gain efficiency, control, and data advantages, while workers absorb risks related to surveillance, intensified workloads, and precarious decision-making.
Platform work directive offers improvements but remains fragmented
The third key question addressed by the research concerns the Platform Work Directive, a legislative effort aimed at improving conditions for gig workers who are managed almost exclusively through AI systems. The author finds that the PWD is more attentive to worker interests than the AI Act, but still limited in scope and uneven in its protections.
The Directive introduces a presumption of employment, a major step forward for platform workers who have long been misclassified as self-employed contractors. This shift is designed to grant access to core labour rights, enabling workers to challenge unfair algorithms, access due process, and benefit from social protections. The study acknowledges this as a meaningful advancement.
Furthermore, the Directive expands workers’ rights to information, explanation, and contestation of algorithmic decisions. It restricts intrusive data collection and provides worker representatives with access to aggregated information that can be used to scrutinise platform-wide practices. The author highlights that these provisions create new avenues for transparency and accountability that surpass those offered by the AI Act.
However, the study points out that the PWD applies only to platform-mediated work. Workers in traditional employment settings who face similar algorithmic harms remain excluded from these stronger protections. As a result, the EU regulatory landscape becomes fragmented: one set of rights for platform workers, and a weaker set for everyone else.
The study warns that such fragmentation increases the risk that employers outside of platforms will rely on the less stringent AI Act, further widening disparities in worker protection. The author argues that regulating algorithmic management requires a universal approach, not a sector-specific one, because the risks are systemic across labour markets.
First published in: Devdiscourse

