Online retailers risk losing shoppers when AI recommendations feel opaque
New research finds that transparency alone is not enough to win consumers' trust online. Users are more likely to trust AI-powered recommendation systems and act on their suggestions when they understand how the system works, view its recommendations as fair, and feel they can control the interaction.
The study, titled "AI Transparency and User Behavior in Human–AI Collaboration: Evidence from E-Commerce Recommendation Systems," was published in the Journal of Theoretical and Applied Electronic Commerce Research. The author tested a model of how AI transparency shapes trust and purchase intention in e-commerce, using survey data from 312 recommender-system users and partial least squares structural equation modeling to examine the links between transparency, algorithmic understanding, fairness perception, perceived control, trust and buying intent.
E-commerce decisions are becoming a human-AI collaboration
Recommendation systems have become a key part of digital commerce. They no longer work only as passive filters that help users sort through large catalogs. They shape what customers see, how choices are arranged and which products appear most relevant during online shopping. This interaction is a form of human-AI collaboration, not because the user and the system jointly adapt in a complex two-way partnership, but because the user actively interprets, evaluates and integrates AI-generated suggestions into purchase decisions.
That shift changes how consumer behavior should be understood. Traditional e-commerce models often focused on platform quality, ease of use, trust and purchase intention. The new study argues that AI-mediated shopping requires a more detailed explanation. In algorithmic environments, the user is not only reacting to a website. The user is trying to make sense of a system whose decision logic may be partly hidden.
In the study, AI transparency is treated as an informational stimulus. It refers to the degree to which users believe a system provides clear information about how recommendations are generated, what criteria are used, how user data shapes suggestions and why certain products are shown. In simple terms, transparency matters because it gives users clues about the system's logic.
The study rejects the idea that transparency automatically creates trust. Instead, it proposes a process-based model. Transparency first affects cognitive mechanisms. These include algorithmic understanding, or the user's ability to interpret how the recommendation system works, and fairness perception, or the user's judgment that recommendations are unbiased, legitimate and aligned with their interests. These cognitive evaluations then influence perceived control, meaning the extent to which users feel they can guide, adjust or manage the interaction with the system. Perceived control then strengthens trust, and trust increases purchase intention.
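The layered chain the model proposes (transparency feeding understanding and fairness, those feeding perceived control, control feeding trust, and trust feeding purchase intention) can be illustrated with a small mediation-style simulation. This is a sketch on synthetic data with made-up path weights, not a reproduction of the study's PLS-SEM estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 312  # matches the study's sample size, but the data here is simulated

# Standardized constructs with illustrative (assumed) path weights
transparency = rng.normal(size=n)
understanding = 0.6 * transparency + rng.normal(scale=0.8, size=n)
fairness = 0.5 * transparency + rng.normal(scale=0.8, size=n)
control = (0.3 * transparency + 0.3 * understanding
           + 0.3 * fairness + rng.normal(scale=0.7, size=n))
trust = 0.5 * control + 0.2 * transparency + rng.normal(scale=0.7, size=n)
purchase_intention = 0.7 * trust + rng.normal(scale=0.6, size=n)

def slopes(y, *xs):
    """Ordinary least-squares slope coefficients of y on predictors xs."""
    X = np.column_stack([np.ones(len(y)), *xs])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1:]

# Regressing trust on both transparency and control separates the
# weaker direct path from the stronger control-mediated path.
direct, via_control = slopes(trust, transparency, control)
print(f"transparency->trust (direct): {direct:.2f}, control->trust: {via_control:.2f}")
```

Even in this toy setup, the direct transparency-to-trust coefficient comes out smaller than the control-to-trust coefficient, mirroring the study's finding that much of transparency's effect is indirect.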
A recommender system may disclose information, but if users cannot interpret that information or use it to manage their choices, transparency may not deliver the expected trust gains. The study shows that users need more than visibility. They need clarity, fairness and agency.
The research also places AI literacy at the center of the issue. Users do not interpret AI explanations in the same way. People who understand AI systems better are more capable of making sense of transparency cues and turning those cues into algorithmic understanding and fairness assessments. Users with lower AI literacy may receive the same explanation but gain less value from it.
The study's sample included digitally active users recruited online, with respondents drawn from fields such as information technology, business and marketing, digital services and e-commerce, along with other professional backgrounds. The use of real user experiences, rather than simulated purchase scenarios, allowed the study to examine how people perceive recommendation systems in practical digital commerce contexts.
The findings matter because AI-based recommendations are now embedded in many consumer journeys. Online marketplaces, streaming platforms, travel sites, fashion retailers, grocery services and digital advertising systems all use AI to shape choices. As recommendations become more personalized and more influential, consumers are being asked to trust systems they may not fully understand.
This trust is built through a layered process. Shoppers must first understand the system's logic enough to make sense of its suggestions. They must believe that the system treats them fairly and does not manipulate or mislead them. They must also feel that they can influence the output, refine preferences, reject irrelevant recommendations or guide future suggestions. Only then does transparency become a meaningful driver of trust and purchase behavior.
Understanding, fairness and control turn transparency into trust
AI transparency had a strong positive effect on algorithmic understanding and fairness perception. Users who perceived recommendation systems as more transparent were more likely to say they understood how those systems operated and more likely to view their recommendations as legitimate and fair.
Transparency also had a positive direct effect on perceived control. When users believed the system explained how recommendations were generated, they were more likely to feel capable of managing the interaction. This suggests that explanation is not only about knowledge. It can also change how much agency users feel during the shopping process.
Algorithmic understanding and fairness perception also significantly increased perceived control. This means that users who understand recommendation logic and believe the system is fair are more likely to feel they can influence the interaction. That sense of influence is essential in e-commerce because recommendation systems can otherwise feel opaque, automatic and difficult to question.
Perceived control emerged as one of the strongest drivers of trust. Users who felt they could manage or guide the system were more likely to trust it. The study found that trust was not simply a reaction to whether recommendations looked useful. It was shaped by whether users felt able to participate meaningfully in the recommendation process.
Trust, in turn, had a major effect on purchase intention. Users who trusted AI-based recommendation systems were more willing to consider recommended products, rely on the system when choosing products online and follow suggestions in future purchase decisions. This confirms the central role of trust in e-commerce, while showing that trust in AI settings depends heavily on cognitive and control-related mechanisms.
The model explained a substantial share of the variation in user trust and purchase intention. Purchase intention was strongly explained by trust, while trust was explained by perceived control and transparency. Perceived control was explained by transparency, algorithmic understanding and fairness perception. This layered structure supports the study's argument that consumer responses to AI are not triggered by transparency alone but by a sequence of user evaluations.
The findings also show that the direct path from transparency to trust remains significant, but weaker than the full indirect process. This means transparency can affect trust directly, but much of its influence works through user understanding, fairness perception and control. For businesses, that suggests transparency should be designed not as a compliance checkbox but as a user-experience strategy.
The moderating role of AI literacy adds another important layer. AI literacy strengthened the relationship between transparency and algorithmic understanding, as well as between transparency and fairness perception. In practice, users with stronger AI knowledge were better able to translate system explanations into useful cognitive evaluations.
A single transparency format may not work equally well for all users. Some shoppers may prefer simple, plain-language explanations that clarify why a product was recommended. Others may want more detailed information about the signals used by the model, such as browsing history, product attributes, purchase patterns or similarity to other users. Adaptive explanation design may therefore be necessary.
The study identifies several forms of transparency relevant to recommendation systems. Rationale-based explanations tell users why a recommendation was made. Feature-based explanations identify which attributes or behaviors influenced the recommendation. Data-based explanations show how user data is used. Social-based explanations connect recommendations to patterns among similar users. While the study treats transparency as an overall user perception, it recognizes that different explanation types may affect users in different ways.
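The four explanation styles described above could be rendered as different user-facing messages. The function below is a hypothetical sketch (the template wording and names are ours, not the study's):

```python
def explain(style: str, product: str, signal: str) -> str:
    """Render one of four transparency styles as user-facing text.

    Styles follow the study's taxonomy: rationale-, feature-,
    data- and social-based explanations.
    """
    templates = {
        "rationale": f"We recommended {product} because {signal}.",
        "feature": f"{product} matches attributes you favor: {signal}.",
        "data": f"This suggestion draws on your data: {signal}.",
        "social": f"Shoppers similar to you also chose {product} ({signal}).",
    }
    return templates[style]

print(explain("rationale", "trail shoes", "you browsed hiking gear"))
print(explain("social", "trail shoes", "82% preference overlap"))
```

A platform could A/B test such styles against each other, since the study notes that different explanation types may affect users in different ways.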
The results also challenge companies that assume more information always increases trust. Excessively complex explanations may increase cognitive burden, especially for users with lower AI literacy. Transparency works best when it is understandable, relevant and tied to user control. A long technical explanation may be less effective than a concise, meaningful explanation that lets users adjust preferences or correct the system.
Fairness perception is another critical element. Users are more likely to trust a recommender system when they believe its suggestions are unbiased, consistent and aligned with their interests. This is particularly important because consumers may worry that recommendations are driven by platform incentives, advertising relationships or commercial priorities rather than user benefit. If shoppers suspect that recommendations are unfair or manipulative, transparency may not translate into trust.
Perceived control is where the study's model becomes especially practical. Control can be supported through interface features that let users refine recommendations, change preferences, remove irrelevant suggestions, ask why an item was recommended or request more products like a selected item. These design choices turn the user from a passive recipient of AI suggestions into an active participant in the decision process.
Retailers need transparent, controllable AI recommendation systems
E-commerce platforms should design AI recommendation systems around user understanding and control, not only predictive accuracy. A recommender system that is technically accurate but opaque may still fail to build trust if users cannot understand or influence its suggestions.
The first priority is explanation quality. Recommendation systems should clearly communicate why a product is being suggested. Explanations should be simple enough to process quickly but specific enough to be useful. A vague message that a product is "recommended for you" is less likely to support trust than a clear reason based on prior browsing, selected preferences or product similarities.
The second priority is user control. Consumers should be able to refine recommendations through direct feedback. Options such as "not relevant," "show more like this," "adjust preferences" or "why this recommendation" can help users feel that they are managing the interaction rather than being managed by the algorithm. The study suggests that this sense of agency is a key route to trust.
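One minimal way to honor such feedback signals is to let them nudge per-category preference weights that reorder future results. This sketch assumes a simple weight-based ranker; the class and signal names are illustrative, not from the study:

```python
from collections import defaultdict

class PreferenceModel:
    """Sketch: per-category weights nudged by explicit user feedback."""

    def __init__(self, step: float = 0.2):
        self.step = step
        self.weights = defaultdict(float)  # unknown categories start at 0

    def feedback(self, category: str, signal: str) -> None:
        # "show_more" boosts a category; "not_relevant" demotes it
        if signal == "show_more":
            self.weights[category] += self.step
        elif signal == "not_relevant":
            self.weights[category] -= self.step

    def rank(self, categories: list[str]) -> list[str]:
        # Highest weight first, so feedback visibly reorders results
        return sorted(categories, key=lambda c: -self.weights[c])

prefs = PreferenceModel()
prefs.feedback("running", "show_more")
prefs.feedback("formal", "not_relevant")
print(prefs.rank(["formal", "running", "casual"]))
# → ['running', 'casual', 'formal']
```

The point is not the ranking math but the visible causal link: the user acts, and the output changes in a way they can trace, which is the sense of agency the study ties to trust.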
The third priority is fairness by design. Platforms need to ensure that recommendation systems do not only optimize for sales, sponsored placements or engagement. Users are more likely to trust AI when they believe the system supports their interests. This means recommendation logic should balance commercial objectives with relevance, transparency and user benefit.
AI literacy-sensitive design is the fourth priority. Not all users have the same ability to interpret algorithmic explanations. Platforms may need layered explanations, with simple summaries for general users and more detailed views for users who want deeper information. This can prevent transparency from becoming either too vague or too complex.
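A layered explanation can be as simple as a short summary with an optional detailed view the user opts into. This is a hypothetical sketch of that idea; the function and parameter names are our own:

```python
def layered_explanation(reason: str, details: dict, level: str = "basic") -> str:
    """Return a plain summary or an expanded view of a recommendation.

    level="basic" suits general users; level="detailed" exposes the
    underlying signals for users who want more depth.
    """
    summary = f"Recommended because {reason}."
    if level == "basic":
        return summary
    detail_lines = "; ".join(f"{k}: {v}" for k, v in details.items())
    return f"{summary} Signals used: {detail_lines}."

print(layered_explanation("you viewed similar items",
                          {"browsing history": "3 related views",
                           "similar users": "82% overlap"},
                          level="detailed"))
```

Serving the basic layer by default, with the detailed layer one tap away, keeps transparency from becoming either too vague or too complex.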
The fifth priority is continuity. Users are more likely to trust systems that behave consistently over time. If recommendations change unpredictably or appear disconnected from user preferences, perceived control may weaken. Systems should maintain a clear link between user actions and recommendation changes, allowing consumers to understand how their feedback affects future results.
The findings also matter for digital commerce governance, where responsible AI rules will shape trust, accountability and consumer protection. As recommendation systems influence consumer exposure and choice, transparency cannot be limited to internal documentation or technical disclosure. It must be experienced by users in a way that supports informed decision-making.
However, the study also sets boundaries around its findings. It focuses on consumer-oriented, relatively low-stakes recommendation systems, where decisions are generally reversible and risk is moderate. The model may not apply in the same way to high-stakes AI settings such as healthcare, finance or legal recommendations, where risk, accountability and regulatory requirements are far more serious.
The study's reliance on self-reported user perceptions also means the findings should be interpreted as evidence about perceived trust and purchase intention, not actual transaction behavior. Future research could combine surveys with behavioral tracking, click-through data, purchase logs or controlled experiments to test whether the same mechanisms operate in real shopping behavior.
Another limitation is that the research examines general perceptions of recommender systems rather than a specific algorithmic architecture. Different systems, such as collaborative filtering, content-based filtering or deep learning-based recommenders, may create different transparency and control challenges. Future studies could compare how specific system designs affect understanding, fairness and trust.
First published in: Devdiscourse