Generative AI undermines worker sorting on freelance platforms

CO-EDP, VisionRI | Updated: 18-11-2025 14:33 IST | Created: 18-11-2025 14:33 IST

A new economic study warns that the spread of generative artificial intelligence is changing how employers judge workers online, weakening long-standing hiring signals and reshaping outcomes in digital labor markets. The researchers show that AI tools now make it easier for workers to produce customized job applications at almost no cost, which reduces the value employers place on written submissions and disrupts how ability is recognized across platforms.

Their paper, “Making Talk Cheap: Generative AI and Labor Market Signaling,” examines how large language models transform the link between written effort and worker quality. The study uses real data from the global freelancing platform Freelancer.com and builds a structural model to understand how shifts in signaling affect hiring. The findings point to a sharp drop in the information value of written applications after the arrival of generative AI, raising concerns about fairness, sorting and competition in online labor markets.

How AI reduces the cost of signaling in online job markets

The study addresses how the rise of generative AI changes the basic economics behind written job applications. In most online labor markets, workers try to stand out by writing tailored proposals that showcase expertise, motivation or past experience. Employers often interpret the effort and clarity of these messages as a signal of quality.

Before the arrival of modern AI tools, such applications were costly to produce. Workers needed time, language skill and subject knowledge to craft a convincing message. Because producing a strong application required real effort, employers treated these signals as credible. High-ability workers tended to invest more in their proposals, and platforms that rely on written interactions used this mechanism to sort candidates.

The authors show that generative AI changes this balance. Large language models now produce clean, structured, personalized proposals in seconds. Workers no longer need strong language ability or deep subject understanding to create polished applications. As the cost of producing tailored text approaches zero, written submissions lose their power to separate high-ability from low-ability workers.

By analyzing data from Freelancer.com, the researchers find direct evidence that employers reduced their willingness to pay for customized proposals once generative AI tools became widely available. The drop reflects a broad change in employer behavior: when writing becomes cheap and easy for everyone, employers stop treating written content as a reliable signal.

This shift has deep consequences for how workers compete. When signaling becomes cheap, strategies that once helped employers sort talent start failing. As a result, markets drift away from merit-based sorting toward more random outcomes, where signals no longer match underlying ability.
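To make this intuition concrete, the toy simulation below sketches the standard costly-signaling logic the article describes; it is not the paper's model, and the proposal_polish function, its parameters, and all numbers are purely illustrative assumptions. When writing is costly, proposal polish tracks ability, so the signal is informative; as the cost collapses toward zero, everyone reaches the same polish and the correlation between ability and the signal collapses with it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.uniform(0.0, 1.0, n)   # latent worker quality (illustrative)

def proposal_polish(ability, writing_cost, rng):
    """Effort-based polish of a written proposal (hypothetical toy model).

    With a high writing cost, only higher-ability workers find it worthwhile
    to produce a highly tailored proposal; as the cost approaches zero
    (AI-assisted drafting), nearly everyone reaches maximum polish.
    """
    effort = np.clip(ability / max(writing_cost, 1e-9), 0.0, 1.0)
    return effort + rng.normal(0.0, 0.05, ability.shape)

for cost in (1.0, 0.25, 0.01):
    signal = proposal_polish(ability, cost, rng)
    corr = np.corrcoef(ability, signal)[0, 1]
    print(f"writing cost {cost:>4}: corr(ability, polish) = {corr:.2f}")
```

Under these assumptions, the printed correlation falls from near one toward zero as the writing cost shrinks, which is the sense in which cheap text stops separating high-ability from low-ability applicants.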

What the structural model reveals about sorting and meritocracy

The study further explores how weaker signaling affects the sorting of workers by ability. To answer this, the researchers build a structural model that simulates labor market outcomes under different signaling conditions. The model treats written applications as tools that workers use to communicate their type, and it tracks how employers respond when the information in those signals becomes less reliable.

After estimating the model with real platform data, the authors run a counterfactual scenario that removes the ability of written submissions to reveal worker quality. The results show that weakening the link between applications and ability does not simply reduce accuracy. Instead, it reshapes hiring outcomes for workers across the entire ability spectrum.
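A stylized sketch of that kind of counterfactual, again purely illustrative rather than the paper's estimated structural model, compares hiring when employers rank applicants by an informative signal versus one that carries no information about ability; the pool sizes, noise levels, and quintile shares it prints are assumptions and will not match the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(1)
n_jobs, applicants_per_job = 50_000, 5

def hire_shares_by_quintile(signal_informative):
    # Each job draws a pool of applicants; the employer hires whoever has the
    # best observed signal. In the counterfactual, the signal carries no
    # information about ability, so the choice is effectively random.
    ability = rng.uniform(0.0, 1.0, (n_jobs, applicants_per_job))
    if signal_informative:
        signal = ability + rng.normal(0.0, 0.1, ability.shape)
    else:
        signal = rng.normal(0.0, 1.0, ability.shape)
    hired = signal.argmax(axis=1)
    hired_ability = ability[np.arange(n_jobs), hired]
    quintile = np.digitize(hired_ability, [0.2, 0.4, 0.6, 0.8])
    # Share of hires going to each ability quintile (0 = bottom, 4 = top).
    return np.bincount(quintile, minlength=5) / n_jobs

print("informative signal  :", hire_shares_by_quintile(True).round(2))
print("uninformative signal:", hire_shares_by_quintile(False).round(2))
```

In this sketch, hires concentrate in the top quintile when the signal is informative and spread roughly evenly across quintiles when it is not, which mirrors the direction, though not the magnitudes, of the results described below.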

High-ability workers experience the largest losses. According to the model, workers in the top ability quintile are hired 19 percent less often when written applications no longer signal true quality. This drop occurs because employers cannot tell who is genuinely skilled. As signals weaken, high-ability workers can no longer separate themselves from average or low-ability applicants.

At the same time, workers in the lowest ability quintile are hired 14 percent more often. Because polished applications no longer reveal who can deliver strong performance, low-ability workers get more opportunities simply because the sorting mechanism has collapsed. Employers are more likely to make decisions with less information, which introduces noise and reduces merit-based hiring.

This change does not necessarily increase fairness. Instead, the authors argue that it reduces efficiency and distorts the match between workers and tasks. When employers cannot identify high-quality workers, both sides of the market lose potential gains. Skilled workers face reduced earnings and fewer opportunities. Employers face greater risk and weaker performance from their hires.

The model also highlights that platforms built around written communication are especially vulnerable. When the main signal in a market is weakened, outcomes become more random and less tied to true ability. The study suggests that this shift may continue to spread across occupations as AI tools become even more capable.

What generative AI means for the future of digital labor markets

The study then examines what these changes mean for the future structure and fairness of online labor markets. As generative AI becomes embedded in job-seeking behavior, the cost of producing convincing text will continue to fall. This raises key concerns about how employers can identify reliable workers and maintain merit-based hiring practices.

The researchers note that the labor market depends on credible signals. When those signals are weakened, markets must either develop new forms of sorting or accept more randomness. Written communication has been the main screening mechanism on freelancing platforms for years. Its erosion forces employers to seek alternative cues, such as past ratings, speed of response, verified credentials or platform-level trust indicators.

However, not all workers have equal access to alternative signals. New entrants may struggle the most because they rely heavily on written communication when they have no prior ratings. If written signals lose power, newcomers face greater difficulty standing out in a crowded field. The model suggests that this could redirect economic opportunity away from emerging workers and toward those with existing reputations.

The study also raises concerns about inequality across skills and languages. Generative AI tools tend to produce fluent text that exceeds the writing quality of many global workers who rely on online platforms for income. When AI tools replace written ability as a key differentiator, markets may treat workers from diverse linguistic backgrounds as more similar than they truly are. This could reduce the competitive advantage of workers who previously excelled at communication, while making workers who once faced language barriers more competitive.

The authors point out that employers may shift toward heavier reliance on platform-verified metrics or behavioral data. While such measures may help restore sorting, they also introduce new risks. Overreliance on historical ratings could lock workers into their past performance, making it harder for individuals to recover from early mistakes. At the same time, platforms may need to redesign interfaces to support alternative evaluation tools that do not depend on written text.

In the broader context, the study suggests that generative AI is reshaping more than job applications. It is changing the economics of effort and credibility across digital marketplaces. When signals become cheap, trust becomes harder to maintain. The researchers note that future work will need to explore how platforms react to these changes, how hiring criteria evolve, and which new forms of signaling become most valuable.
