The Hidden Force Behind AI's Future: Why Algorithmic Advances Matter More Than Chips

The RAND Corporation’s report forecasts that future AI progress will hinge more on smarter algorithms, such as data tailoring, new training objectives, and efficient architectures, than on faster hardware. It warns that algorithmic advances could erode the effectiveness of hardware export controls and reshape global AI competition.


CoE-EDP, VisionRI | Updated: 29-04-2025 08:42 IST | Created: 29-04-2025 08:42 IST

Recent advances in artificial intelligence have broadened public discourse and heightened policy interest, with transformative products like OpenAI’s ChatGPT, Anthropic’s Claude, and Meta’s Llama reshaping expectations. Amid this evolving landscape, researchers Carter C. Price, Brien Alkire, and Mohammad Ahmadi of the RAND Corporation’s Technology and Security Policy Center set out to chart where AI might be headed next. Their report takes a strikingly different approach: instead of forecasting progress from faster hardware, it examines the hidden but critical role of algorithmic improvement. Drawing on research traditions from numerical analysis, operations research, and computer science, the RAND study aims to show how smarter algorithms alone could profoundly reshape AI’s future, particularly in areas that matter to policymakers and the broader public.

The study defines algorithmic advancement pragmatically as any change that either improves model performance or reduces the computational resources required to accomplish a task. With a sharp focus on efficiency, it argues that the intensive margin (achieving more with the same inputs) is where the most meaningful gains will occur. Previous literature offers a mixed picture: Katja Grace’s research indicated that algorithmic improvements were responsible for 50 to 100 percent of performance gains across several fields, while Yash Sherry and Neil Thompson showed that in some cases algorithmic advances outpaced Moore’s Law in their impact. In the specific domain of large language models, researchers such as Anson Ho found that 5 to 40 percent of performance improvements since 2012 could be attributed to algorithms rather than hardware. Yet the future remains deeply uncertain: some experts see signs of an AI plateau, while others, such as Leopold Aschenbrenner, predict a steep and sustained trajectory of rapid gains. Rather than speculate, the RAND team approached the question by identifying the specific mechanisms through which past algorithmic improvements occurred, offering a pragmatic view of how they might unfold next.

How Algorithms Could Drive the Next Big Leap

The report lays out seven primary mechanisms of algorithmic improvement: reducing the number of iterations needed to reach convergence, injecting stochasticity into training, lowering numerical precision requirements, exploiting data sparsity, tailoring training datasets, developing better objective functions, and designing more efficient alternative architectures. However, RAND makes clear that not all mechanisms are equally promising. Some, like reducing precision or iteration counts, have already been largely tapped out and offer diminishing returns. Others, such as leveraging sparsity through techniques like Mixture-of-Experts models, promise steady but moderate improvements.
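To make the sparsity mechanism concrete, the sketch below shows a toy Mixture-of-Experts routing step in plain NumPy. Every dimension, matrix, and function name here is illustrative rather than drawn from the RAND report, and real MoE layers add load-balancing losses and fused kernels that are omitted.

```python
import numpy as np

# Toy illustration of the "sparsity" mechanism: a Mixture-of-Experts layer
# routes each token to only the top-k experts, so most expert parameters
# stay idle on any given input. All sizes and names are illustrative.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# Each expert is a small dense layer; the router scores experts per token.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_forward(x):
    """Route a batch of token vectors (n_tokens, d_model) through top-k experts."""
    scores = x @ router                              # (n_tokens, n_experts)
    top = np.argsort(scores, axis=-1)[:, -top_k:]    # indices of chosen experts
    sel = np.take_along_axis(scores, top, axis=-1)   # softmax over selected experts only
    weights = np.exp(sel - sel.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        for w, e in zip(weights[i], top[i]):
            out[i] += w * (token @ experts[e])        # only top_k of n_experts run
    return out

tokens = rng.normal(size=(4, d_model))
print(moe_forward(tokens).shape)  # (4, 16); each token touched 2 of 8 experts
```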

The real breakthroughs, RAND suggests, will likely come from three avenues. First is data tailoring: either pruning unnecessary data or generating synthetic examples to maximize training efficiency. Second is improving objective functions, moving beyond traditional cross-entropy loss toward optimization methods that better align models with human users' goals. Third is the adoption of radically different architectures such as Mamba or Kolmogorov-Arnold Networks, which could ease the computational bottlenecks, such as the cost of attention over long sequences, that transformers currently face. These advances could unlock orders-of-magnitude improvements in training efficiency, vastly expanding what is possible within current computational budgets.
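As a loose illustration of data tailoring, the snippet below prunes a toy dataset by per-example cross-entropy loss, keeping only the "hardest" quarter of examples. The scoring rule, the keep fraction, and all names are hypothetical choices for exposition, not the specific techniques the RAND report evaluates.

```python
import numpy as np

# Hypothetical data-tailoring sketch: score each training example by the
# model's current per-example loss and keep only the most informative slice.

def cross_entropy(probs, labels):
    """Per-example cross-entropy loss given predicted class probabilities."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def prune_dataset(probs, labels, keep_fraction=0.5):
    """Return indices of the hardest (highest-loss) examples to train on."""
    losses = cross_entropy(probs, labels)
    n_keep = int(len(labels) * keep_fraction)
    return np.argsort(losses)[-n_keep:]

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(10), size=1000)   # fake predictions over 10 classes
labels = rng.integers(0, 10, size=1000)         # fake labels
kept = prune_dataset(probs, labels, keep_fraction=0.25)
print(f"training on {len(kept)} of {len(labels)} examples")
```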

Three Different Futures for AI Development

RAND outlines three distinct future scenarios for AI development, each shaped by how successful researchers are at overcoming current constraints. In the first scenario, data limitations become binding: if synthetic data generation does not meaningfully scale and current datasets are exhausted, the pace of AI advancement slows, and small, specialized models come to dominate commercial use. In the second scenario, synthetic data becomes plentiful, but algorithms fail to learn efficiently from it. In this world, giant models continue to grow, but at increasingly unsustainable costs, making them worthwhile more as prestige projects than as practical commercial products.

The third and most dynamic scenario envisions a world where both synthetic data generation and algorithmic efficiency rapidly improve. If that happens, the age of massive, highly capable AI models will not just continue, it will accelerate, bringing a new generation of systems that can learn better, faster, and cheaper than anything seen before. This possibility, RAND argues, could fundamentally reshape competition among tech companies and nations alike.

Why Hardware Controls May Not Be Enough

A particularly urgent implication of RAND’s findings concerns global AI competition and security policy. Today, U.S. export controls seek to limit China’s access to advanced AI chips, hoping to slow its technological progress. But RAND warns that smarter algorithms could undermine this strategy. If training efficiency improves enough, countries facing hardware restrictions could still stay within a few upgrade cycles of the frontier by using algorithmic advances to compensate for limited compute. The case of DeepSeek-V3, an open-source Chinese model unveiled in late 2024 that matched closed-source Western models while using significantly less compute, is cited as a vivid illustration of how fast the ground can shift. If smarter algorithms democratize high-end AI capabilities, controlling hardware alone may no longer be a sufficient lever for maintaining technological advantage.

A Call for Vigilance and Strategic Adaptation

The study urges policymakers to invest heavily in technology scanning: systematically monitoring developments in synthetic data generation, alternative objective functions, and new architectures. The report also highlights the strategic importance of innovations like Reinforcement Learning from Human Feedback (RLHF), which has been shown to align AI systems more closely with human goals even at smaller model scales, although the cost of scaling RLHF remains a key challenge. Future advances in this area could again tilt the balance of power. Ultimately, the RAND researchers offer both a powerful warning and an opportunity: as hardware improvements slow, algorithmic ingenuity is poised to become the new frontier in the AI arms race. Policymakers, researchers, and industry leaders must understand that in this emerging world, victory will belong not to those with the fastest chips but to those with the smartest code.
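For readers unfamiliar with RLHF, the short sketch below shows the pairwise preference loss commonly used to train its reward model: the human-preferred response should score higher than the rejected one. The reward values are invented for illustration, and the rest of the pipeline (supervised fine-tuning and policy optimization against the learned reward) is omitted.

```python
import numpy as np

# Pairwise preference loss often used for RLHF reward models: maximize the
# probability that the chosen response outscores the rejected one.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry style loss: -log P(chosen preferred over rejected)."""
    return -np.log(sigmoid(reward_chosen - reward_rejected) + 1e-12)

# Illustrative reward scores for three preference pairs (made-up numbers).
chosen = np.array([2.1, 0.4, 1.3])
rejected = np.array([1.0, 0.9, -0.2])
print(preference_loss(chosen, rejected).mean())  # lower means better fit to the labels
```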

FIRST PUBLISHED IN: Devdiscourse