AI-powered search is transforming information access but creating major trust gaps

CO-EDP, VisionRI | Updated: 02-12-2025 14:10 IST | Created: 02-12-2025 14:10 IST

A new research paper published in the Journal of Information Science raises concerns that the rapid integration of generative artificial intelligence (GenAI) into academic and research search platforms is reshaping global information access faster than governance systems can respond.

The analysis, published as “Generative AI and Information Access: A Sustainability Model and a Research Agenda”, examines how tools built on large language models are transforming the way users discover, interpret and rely on scholarly information. The authors point out that this shift brings unprecedented convenience, but also serious challenges around trust, equity, transparency and environmental impact.

The study evaluates a range of AI-enhanced platforms, from conversational engines like ChatGPT to research-focused tools such as Consensus, Scholar GPT, ScholarAI and proprietary GenAI layers deployed by major database providers including Scopus, Clarivate and JSTOR. These tools condense search processes into brief conversational outputs, enabling users to bypass traditional keyword queries and long lists of search results. The paper warns that while the new tools accelerate information retrieval, they also pose risks if deployed without sustainability measures across social, economic and environmental dimensions.

GenAI is reshaping scholarly search but creating new risks for trust, equity and access

The authors describe a profound structural change in how people engage with research information. Instead of navigating multiple databases, reading abstracts or applying complex filters, users now rely on GenAI tools that generate short summaries, answer-style outputs and curated citations. This shift reduces friction and democratizes access for people unfamiliar with scholarly databases. However, the study highlights deep concerns that these systems may limit breadth of coverage, weaken reproducibility and obscure how sources are selected.

The appeal of GenAI stems from its ability to simplify complex research landscapes. Students, academics, policymakers and general users can extract information far faster than in traditional search environments. Yet this convenience introduces a risk of overreliance on tools that may not accurately represent the full body of available evidence. The condensed outputs make it difficult for users to verify completeness or understand the underlying reasoning behind AI-generated summaries.

The study notes that conventional search systems allow users to judge relevance by examining abstracts, keywords and metadata. GenAI-driven engines frequently bypass these steps, replacing them with highly polished responses that can mask omissions. As a result, users may unknowingly base decisions on incomplete or biased outputs.
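The paper does not describe any tool's internals, but most such engines follow a retrieval-then-summarize pattern, and a toy sketch makes the omission risk concrete. Everything below (the corpus, the scoring rule, the cutoff) is an invented illustration, not the study's method:

```python
# Minimal sketch of the retrieval-then-summarize pattern most GenAI
# search tools follow. The corpus and scoring rule are illustrative
# stand-ins, not anything described in the paper.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str

# Toy corpus standing in for a scholarly database.
corpus = [
    Paper("LLM energy use", "Measures training and inference energy"),
    Paper("Green IR", "Principles for sustainable information retrieval"),
    Paper("Search UX", "How users judge relevance from abstracts"),
]

def score(query: str, paper: Paper) -> int:
    # Crude keyword overlap; real systems use embeddings or rankers.
    terms = set(query.lower().split())
    return len(terms & set((paper.title + " " + paper.abstract).lower().split()))

def answer(query: str, k: int = 2) -> str:
    # Step 1: retrieve only the top-k documents...
    ranked = sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]
    # Step 2: ...then condense them into one conversational response.
    # Everything below the cutoff is silently dropped.
    cited = "; ".join(p.title for p in ranked)
    return f"Summary based on {k} of {len(corpus)} sources ({cited})."

print(answer("energy use of llm retrieval"))
```

The point of the sketch is the `[:k]` cutoff: whatever falls below it never reaches the user, and the polished final answer gives no hint of what was excluded.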

The paper also identifies structural inequities emerging within the AI-driven search ecosystem. Many of the most powerful GenAI tools rely on expensive subscription models layered on top of existing database fees. These costs make them inaccessible to institutions and research communities already struggling with rising subscription expenses. The authors argue that the widening gap in access threatens to create a two-tier information environment in which well-funded institutions benefit from AI-enhanced retrieval while others remain dependent on slower, less integrated systems.

Another challenge is the lack of transparency in how GenAI engines select or prioritize sources. Some systems provide limited citation trails, making it difficult to trace where information originated or determine whether the sources used are authoritative. The study emphasizes that this lack of clarity undermines the principles of credible research and makes it hard for users to assess reliability.

Privacy concerns also arise from the use of AI-driven search tools. These systems collect and process extensive user interactions to improve model performance. Without clear governance, this data could be misused or lead to surveillance of research behavior. This vulnerability is particularly concerning for researchers working in politically sensitive areas.

Economic and environmental implications raise questions about long-term sustainability

Alongside the social impact, the study examines the economic sustainability of GenAI-powered information environments. On one hand, AI-driven tools sharply reduce the time researchers spend locating relevant material. By automating summarization and synthesis, they help academics work more efficiently and can potentially reduce costs associated with prolonged literature reviews. The authors note that GenAI could enable new high-value services, including intelligent research assistants and multi-agent scientific workflows that streamline complex research tasks.

However, the economic advantages come with substantial financial risks. GenAI systems require significant computational resources to operate and maintain. Commercial providers often build AI layers as premium add-on services, increasing subscription fees. Many institutions, particularly universities in low- and middle-income countries, cannot afford these costs. The study warns that unless open-access GenAI tools are developed, financial barriers will continue to grow and may limit global scientific participation.

The authors also stress the environmental sustainability challenges posed by large language models. The computational intensity required to train and run these systems leads to high levels of energy consumption and water use. As more publishers and database providers deploy AI features, data centers must scale, increasing their environmental footprint. The study argues that the environmental cost of AI-driven information access could become unsustainable without deliberate intervention.
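The scale of the problem can be made concrete with rough arithmetic. Every figure below is an illustrative assumption chosen for the sketch, not a measurement reported in the study:

```python
# Back-of-envelope sketch of inference energy at scale. All numbers
# are assumptions for illustration, not figures from the study.

gpu_power_kw = 0.7          # assumed draw of one accelerator under load
seconds_per_query = 2.0     # assumed generation latency per search query
pue = 1.3                   # assumed data-center power usage effectiveness

# Energy for a single AI-assisted query, in kilowatt-hours.
kwh_per_query = gpu_power_kw * (seconds_per_query / 3600) * pue

# Scale to a modest research platform: one million queries per day.
daily_kwh = kwh_per_query * 1_000_000
print(f"~{kwh_per_query * 1000:.2f} Wh per query, ~{daily_kwh:,.0f} kWh/day")
```

Even with these modest assumptions, a single platform lands in the hundreds of kilowatt-hours per day, which is why the authors argue that per-feature environmental accounting cannot be deferred.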

The paper calls for the expansion of “green information retrieval” principles to GenAI systems. These include reducing the size of AI models where possible, optimizing computation, and encouraging database providers to consider environmental footprints when designing search infrastructures. The authors note that sustainability assessments should weigh social and economic benefits against ecological impact.
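The paper does not prescribe a formula for such assessments; one hypothetical way to operationalize a three-pillar weighing, with invented weights and 0-1 scores, might look like this:

```python
# Hypothetical sustainability assessment in the spirit of the paper's
# three-pillar model; the weights and scores are invented here,
# not taken from the study.

def sustainability_score(social: float, economic: float,
                         environmental_cost: float,
                         weights=(0.4, 0.3, 0.3)) -> float:
    """Weigh social and economic benefit against ecological impact."""
    ws, we, wv = weights
    # Environmental cost enters negatively: a greener system scores higher.
    return ws * social + we * economic + wv * (1.0 - environmental_cost)

# Example: strong social benefit, moderate economics, heavy footprint.
print(f"{sustainability_score(0.8, 0.6, 0.9):.2f}")  # -> 0.53
```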

They also point out that universities, libraries, publishers, and governments must work together to create equitable AI-driven research infrastructures that do not disproportionately harm the planet. Without coordinated action, environmental costs may undermine the long-term viability of AI-enhanced information access.

Study calls for a multi-stakeholder sustainability model and outlines key research priorities

To address these challenges, the authors propose a sustainability model grounded in the three pillars of social, economic and environmental responsibility. The framework emphasizes human-centered governance, ethical AI design, equitable access, responsible deployment and improved user literacy.

For social sustainability, the model calls for protecting human-centered values in the AI-driven information ecosystem. This includes improving explainability, ensuring user access to underlying sources, strengthening AI literacy programs, and developing governance mechanisms that support transparency and fairness. The authors argue that search tools must empower users rather than simply replace traditional information practices.

For economic sustainability, the model recommends the development of cost-sharing strategies and open-access AI tools that reduce financial barriers. The study proposes collaborative infrastructures involving academic institutions, libraries, database providers and governments. The aim is to distribute costs more evenly and prevent GenAI access from becoming an exclusive resource available only to wealthy institutions.

For environmental sustainability, the model highlights the need to measure and reduce the ecological footprint of AI systems. The authors recommend developing smaller, more efficient models, optimizing infrastructure and embracing renewable energy solutions. They also stress that environmental considerations must become part of mainstream AI governance, not an afterthought.

Finally, the paper presents a structured research agenda of 20 key research questions spanning user behavior, AI system evaluation, relevance standards, economic modeling, equity of access, environmental metrics and governance mechanisms. The agenda aims to guide future inquiry and foster global collaboration across academia, industry and public agencies.

The authors warn that AI-driven search is expanding faster than the systems designed to regulate it. For the research ecosystem to remain trustworthy and sustainable, institutions must deepen their understanding of how GenAI tools shape information behavior, access and equity. The study argues that building sustainable AI-driven information systems will require coordinated action across technological, institutional and regulatory domains.

First published in: Devdiscourse