The Cost of Intelligence: IMF Urges AI Firms to Align Pricing with Human Welfare

The IMF paper by Nils H. Lehr and Pascual Restrepo argues that socially responsible AI firms should price their technologies near cost to maximize welfare while temporarily slowing deployment to protect workers. It concludes that fair taxation and redistribution, not taxing automation, are key to ensuring AI benefits society broadly.


CoE-EDP, VisionRI | Updated: 11-11-2025 10:03 IST | Created: 11-11-2025 10:03 IST

The International Monetary Fund (IMF) Working Paper, authored by Nils H. Lehr of the University of Bonn and Pascual Restrepo of Boston University, examines how artificial intelligence firms with social objectives should determine pricing and deployment strategies. Produced under the IMF’s Research Department, the study combines economic theory, welfare analysis, and U.S. labor market data to explore how socially conscious AI companies, like OpenAI or Anthropic, can balance profitability, fairness, and social stability. The authors argue that the cost of AI is not merely a market outcome but a moral choice shaping inequality, job security, and access to technology.

Rethinking Monopoly Pricing for a Moral Age

Lehr and Restrepo extend the traditional Lerner Rule, which defines how monopolies set markups, by introducing a “Modified Lerner Rule” that includes moral and social dimensions. This new rule factors in a firm’s commitment to efficiency, fairness, and labor stability. A profit-maximizing firm seeks to charge high markups, while an altruistic firm might set lower prices to expand access or delay deployment to protect workers. The model identifies four motives shaping corporate pricing: profit maximization, efficiency (pricing closer to cost), distributional fairness (considering income impacts), and stability (slowing disruptive transitions). Together, these motives define how a socially minded monopoly should navigate the tension between market dominance and moral responsibility.
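The intuition behind this extension can be sketched in equations. The classic Lerner rule ties a monopolist's markup to the demand elasticity; one hedged way to read the paper's modification is to add a weight on social welfare to the firm's objective (the notation below is an illustrative device, not the paper's exact formulation):

```latex
% Classic Lerner rule: markup determined by the demand elasticity \varepsilon
\frac{p - c}{p} = \frac{1}{\varepsilon}

% Illustrative modified objective: profit plus a weight \lambda on a
% social-welfare term W(p); a larger \lambda pushes the chosen price
% toward marginal cost c
\max_{p} \; (p - c)\, q(p) + \lambda\, W(p)
```

Under this reading, a purely profit-maximizing firm corresponds to λ = 0, while a firm that weighs efficiency, fairness, or stability more heavily sets a lower markup or delays deployment.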

Simulating AI’s Impact Across Occupations

Using detailed U.S. occupational data from the American Community Survey (2017–2021), the authors simulate how a monopolistic AI firm would price a technology that can replace human labor across 525 occupations at half the cost of workers. They classify five firm types: profit-maximizing, utilitarian, welfarist, conservative, and multi-objective. A profit-maximizer starts with a markup of around 32 percent, rising to 50 percent in the long run. A utilitarian firm lowers this to about 15 percent, promoting wider adoption. Conservative firms, fearing labor disruption, initially set even higher prices than profit-seekers, especially when automation targets low-wage jobs. The most realistic scenario, a multi-objective firm, charges a markup of about 33 percent on AI that automates low-wage work and 15 percent on high-wage tasks, converging to 20 percent over time. This dynamic shows how moral considerations evolve as AI diffuses through the economy.
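The flavor of these comparisons can be reproduced with a toy version of the modified pricing rule: a monopolist maximizing profit plus a weight λ on consumer surplus, under isoelastic demand q = p^(−ε), chooses the markup (1 − λ)/ε. The elasticity ε = 3 and the λ values below are illustrative assumptions, not the paper's calibration:

```python
def markup(epsilon: float, lam: float) -> float:
    """Lerner markup (p - c) / p for a monopolist maximizing
    profit + lam * consumer surplus under isoelastic demand q = p**(-epsilon).

    lam = 0 recovers the classic Lerner rule 1/epsilon; as lam approaches 1,
    the chosen price approaches marginal cost.
    """
    assert epsilon > 1 and 0.0 <= lam <= 1.0
    return (1.0 - lam) / epsilon

# Illustrative welfare weights for three stylized firm types (assumed values)
firm_types = [
    ("profit-maximizing", 0.0),
    ("partly altruistic", 0.5),
    ("near-planner", 0.95),
]

for name, lam in firm_types:
    print(f"{name:18s} markup = {markup(epsilon=3.0, lam=lam):.1%}")
```

The qualitative pattern matches the paper's ranking: the heavier the welfare weight, the closer the firm prices to cost, with the near-planner case approaching the low single-digit markups the study attributes to a social planner.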

Efficiency, Stability, and the Moral Trade-off

The study reveals a fundamental trade-off between economic efficiency and social stability. In the short term, socially conscious AI firms price high to slow disruption, giving workers time to adjust. Over time, as retraining and reallocation occur (modeled at a 4 percent annual rate), firms reduce prices and expand access. By a century's horizon, markups converge across all firm types, signaling a long-run shift toward efficiency. Yet distributional concerns, such as favoring poorer workers, turn out to be surprisingly weak. Low-wage jobs, such as cashiering or cleaning, are not held exclusively by the poorest households, so protecting them yields limited welfare benefits. Even in an extreme egalitarian model in which firms weigh inequality ten times more heavily, prices barely move. The authors conclude that the real moral challenge lies in managing the speed of the transition, not in persistent income redistribution.
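The 4 percent annual reallocation rate implies a concrete timescale for this convergence. A minimal sketch, assuming a constant-hazard (exponential) adjustment process, which is one simple way to model such a rate:

```python
import math

RATE = 0.04  # annual worker reallocation rate cited in the study

def share_unadjusted(t_years: float) -> float:
    """Share of displaced workers not yet reallocated after t years,
    assuming a constant-hazard (exponential) adjustment process."""
    return math.exp(-RATE * t_years)

# Years until half of displaced workers have reallocated
half_life = math.log(2) / RATE

print(f"half-life of the transition: about {half_life:.1f} years")
print(f"share unadjusted after 100 years: {share_unadjusted(100):.1%}")
```

Under this assumption, roughly half of displaced workers have adjusted after about 17 years, and almost all have after a century, consistent with the paper's finding that markups converge across firm types over that horizon.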

When Firms Act Like Planners

To benchmark morality-driven firms, Lehr and Restrepo compare them to a social planner, an ideal public actor unconstrained by profit. Such a planner would set very low prices, around a 7 percent markup for AI replacing low-wage work and 1 percent for high-wage automation, removing restrictions as labor adjusts. Even socially responsible private firms fall short of this outcome because they must preserve profit margins. The paper also considers three broader contexts: under progressive taxation, firms can focus more on efficiency since governments handle redistribution; when AI creates new goods rather than replacing labor, it should be priced at marginal cost; and under competition, prices fall naturally, shifting firms’ moral role from ensuring access to managing stability.

A Moral Economy for Artificial Intelligence

The authors ultimately argue that monopoly power already slows AI diffusion, serving as a built-in brake against social upheaval. Taxing automation further would limit access and dampen innovation without greatly helping displaced workers. Instead, governments should strengthen redistribution and safety nets while allowing AI to boost productivity. The paper closes with a reflection that the “price of intelligence” is not only an economic measure but a test of collective ethics. Socially conscious AI firms should strive for marginal-cost pricing in the long run, using temporary restraint only to smooth disruptive transitions. If paired with fair fiscal policy and worker adaptation, AI could become a force for shared prosperity rather than division. Lehr and Restrepo thus remind policymakers that how we price intelligence will determine whether it enriches humanity or leaves many behind.

  • FIRST PUBLISHED IN:
  • Devdiscourse