Future on Fast-Forward: AI Accelerates Foresight but Sparks Concerns Over Trust
AI is rapidly reshaping strategic foresight by accelerating research, expanding analytical capacity and supporting scenario creation, yet its limitations—bias, hallucinations, low transparency and uneven skills—pose significant risks to trust and quality. The OECD–World Economic Forum report warns that only human-centred governance, ethical safeguards and stronger AI literacy can ensure AI enhances rather than undermines anticipatory decision-making.
A joint study by the Organisation for Economic Co-operation and Development (OECD) and the World Economic Forum presents a vivid portrait of how artificial intelligence is transforming strategic foresight worldwide. Drawing on responses from 167 practitioners across 55 countries, the report reveals a field in the midst of rapid change, energised by AI’s unprecedented analytical power yet troubled by its limitations, biases and unclear ethical boundaries. Once defined primarily by human judgement and the ability to envision alternative futures, foresight is now absorbing AI as both a catalyst and a potential disruptor, forcing practitioners to rethink their methods and reassess the foundations of anticipatory governance.
A Field Embracing AI, But Unevenly
Strategic foresight relies on scanning for emerging signals, challenging assumptions, and crafting multiple future scenarios. As AI becomes mainstream, these processes are shifting. Two-thirds of foresight experts already use AI tools for activities such as trend clustering, horizon scanning, and scenario generation. The most commonly used tools (ChatGPT, Copilot, Claude, Gemini, Perplexity, and DeepSeek) have quickly become integral to research workflows. Yet skills and confidence vary dramatically: a striking 93% of private-sector respondents say they possess strong AI capabilities, compared with barely half in government, academia, or civil society. Public-sector practitioners in particular face data-security rules, confidentiality barriers, and an absence of clear guidelines, all of which limit their ability to experiment with cutting-edge tools.
Three Tiers of AI Integration
The report identifies three distinct tiers of AI maturity in strategic foresight. At the most basic tier, AI is used for augmentation: summarising documents, sorting signals, and synthesising information. These tasks save time, often as much as 15%, but still require significant human review. The second tier is more collaborative: AI becomes a creative partner that stress-tests assumptions, suggests scenario structures, expands signal libraries, and offers alternative perspectives, enhancing productivity and widening analytical reach. The third and most advanced tier, still rare, weaves AI throughout the entire foresight pipeline: custom-built systems autonomously collect documents, detect patterns, model complex systems, and support scenario development with AI agents. These experiments hint at a future in which AI becomes a co-designer of anticipatory strategies, though such usage remains limited to a handful of well-resourced organisations.
Benefits Paired With Deep Reservations
AI’s appeal is undeniable. Practitioners praise its ability to accelerate analysis, process vast datasets, spark creativity, and generate scenarios at previously impossible speeds. Many say AI enhances the structure and depth of their work, and some emphasise its role in lowering entry barriers for newcomers. Yet these benefits are closely shadowed by concerns. The most common complaint is unreliability: AI tends to hallucinate, produce shallow content, or overlook novel disruptions, the very phenomena foresight seeks to uncover. Its outputs often lack transparency, forcing analysts to spend considerable time verifying sources and logic. Respondents also cite culturally biased training data, limited access to relevant internal information, and organisational resistance, especially in bureaucratic environments where risk aversion and resource constraints hinder innovation. Ethical preparedness remains notably weak: only 27% of organisations using AI have formal guidelines, leaving most practitioners without structured guardrails for responsible deployment.
The Road Ahead: Promise, Peril and Responsibility
Practitioners are divided on AI’s long-term impact. Many believe it will enhance their roles, enabling deeper integration of foresight into decision-making, while others fear their relevance will diminish. A majority agree that the risks are context-dependent: AI could strengthen foresight if designed and governed responsibly, or undermine it if allowed to produce low-quality insights that erode trust.

The report warns of a paradox: AI may democratise foresight, making it more widely accessible, yet it could simultaneously damage the discipline if poor outputs spread faster than expert analysis. To prevent this, organisations must build AI literacy, develop ethical frameworks, and create safe spaces for experimentation. Crucially, they must preserve the human imagination, intuition, and judgement that remain essential for identifying weak signals, exploring unprecedented futures, and navigating uncertainty. The authors argue that integrating AI into foresight is not a technical problem but a strategic shift, one that will determine whether anticipatory governance becomes more resilient, inclusive, and insightful, or more brittle and biased.
- FIRST PUBLISHED IN: Devdiscourse

