How privacy fears shape adoption of AI personalization
Personalized artificial intelligence (AI) systems promise more efficient online experiences, from smarter shopping recommendations to tailored digital services. However, these benefits often depend on users sharing sensitive information, raising concerns about how data leaks might affect both privacy and economic outcomes. Researchers Alexander Erlei, Tahir Abbas, Kilian Bizer, and Ujwal Gadiraju are investigating how individuals navigate this growing tension between convenience and data security.
Their study, “The Data-Dollars Tradeoff: Privacy Harms vs. Economic Risk in Personalized AI Adoption,” presented at the CHI ’26 Conference on Human Factors in Computing Systems, explores how users respond when AI personalization requires sharing personal data that could potentially leak to third-party algorithms. The research sheds light on how uncertainty about privacy risks can influence the adoption of AI-powered digital services.
The tradeoff between AI personalization and privacy risk
Personalization has become one of the defining features of modern digital services. Recommendation systems, intelligent assistants, and algorithm-driven marketplaces use personal data to customize user experiences, often delivering tangible benefits such as improved recommendations or financial savings.
However, the same data that enables personalization can create vulnerabilities if it leaks to third parties. In many cases, such leaks can enable companies or sellers to adjust prices or target advertisements more aggressively, shifting economic advantages away from consumers.
To explore how users weigh these competing factors, the researchers designed a controlled online experiment that simulated an e-commerce environment where participants interacted with a hypothetical AI recommendation system. In the experiment, participants could choose between a standard product basket that generated a fixed reward or an AI-personalized basket that produced higher earnings but required sharing personal data.
The AI option offered a clear economic advantage in the short term. However, sharing personal data carried a potential risk: if the information leaked to a third-party pricing algorithm, that algorithm could later use the data to raise prices during a bargaining phase of the experiment.
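To make the tradeoff concrete, the sketch below works through the expected-value arithmetic a risk-neutral participant faces when the leak probability is precisely stated. All payoffs and probabilities here are hypothetical illustrations, not the study's actual parameters.

```python
# Minimal sketch of the participant's choice under a *known* leak probability.
# All payoffs and probabilities are hypothetical, not the study's parameters.

FIXED_REWARD = 10.0  # payoff of the standard basket
AI_REWARD = 14.0     # higher payoff of the AI-personalized basket
LEAK_PROB = 0.25     # precisely stated chance the shared data leaks
LEAK_COST = 6.0      # later price markup if the pricing algorithm gets the data

def expected_ai_payoff(reward: float, p_leak: float, leak_cost: float) -> float:
    """Expected monetary payoff of choosing AI personalization."""
    return reward - p_leak * leak_cost

ai_value = expected_ai_payoff(AI_REWARD, LEAK_PROB, LEAK_COST)
print(f"Standard basket: {FIXED_REWARD:.2f}")     # 10.00
print(f"AI basket (expected): {ai_value:.2f}")    # 14 - 0.25*6 = 12.50
# With a quantified risk, the AI option still wins on expected value,
# which is consistent with the sustained adoption described below.
```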
The design allowed the researchers to measure how individuals respond when faced with a common real-world dilemma: whether the convenience and financial benefits of AI personalization outweigh the potential privacy costs associated with data sharing.
Importantly, the study also distinguished between two types of privacy harm. The first involved direct economic consequences, such as higher prices resulting from personalized pricing strategies. The second involved non-monetary concerns such as feelings of betrayal, loss of control, or reputational harm caused by the exposure of personal information.
By separating these factors, the researchers aimed to understand how both financial incentives and psychological concerns influence decisions about AI adoption.
Why uncertainty matters more than risk
According to the study, uncertainty about privacy risks has a stronger effect on user behavior than clearly defined risks. In the experiment, participants encountered two different information environments. In the first scenario, the probability of a personal data leak was precisely defined. Participants knew exactly how likely it was that their data might be exposed. In the second scenario, the probability was ambiguous. Participants were told that a leak could occur within a certain range of probabilities, but they were not given a precise likelihood.
This difference produced a striking behavioral shift. When participants faced clearly defined risks, their willingness to use the AI personalization system remained largely unchanged. Even though a data leak could result in higher prices later, many users continued to adopt the AI system because the immediate financial benefits outweighed the quantified risk.
However, when the probability of a leak was ambiguous rather than clearly defined, adoption rates dropped significantly. Participants became more cautious and were less likely to rely on AI personalization when the potential risk could not be precisely calculated.
The results highlight a key principle in behavioral economics and decision science: people often respond more strongly to uncertainty than to measurable risks, a pattern known as ambiguity aversion. When individuals cannot determine the likelihood of a negative outcome, they may assume the worst and avoid the option altogether.
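One standard way to formalize this behavior is the maxmin model of ambiguity aversion, in which a decision maker evaluates an ambiguous probability at the worst end of its stated range. The sketch below, again using hypothetical numbers rather than the study's, shows how an AI option that looks attractive at a known or midpoint probability can fall below the fixed reward once only a range is given.

```python
# Sketch of ambiguity aversion under the maxmin model: when only a range of
# leak probabilities is given, a worst-case decision maker evaluates the AI
# option at the upper end of that range. Hypothetical numbers throughout.

FIXED_REWARD = 10.0
AI_REWARD = 14.0
LEAK_COST = 6.0
PROB_RANGE = (0.05, 0.75)  # "a leak could occur within this range"

def worst_case_payoff(reward: float, prob_range: tuple, leak_cost: float) -> float:
    # An ambiguity-averse agent assumes the most unfavorable probability.
    worst_p = max(prob_range)
    return reward - worst_p * leak_cost

value = worst_case_payoff(AI_REWARD, PROB_RANGE, LEAK_COST)
print(f"Worst-case AI payoff: {value:.2f}")  # 14 - 0.75*6 = 9.50 < 10.00
# The midpoint of the range (0.40) would still favor the AI basket
# (14 - 0.40*6 = 11.60), but the worst case falls below the fixed reward,
# so a maxmin agent declines, mirroring the observed drop in adoption.
```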
In the context of AI systems, this finding suggests that vague warnings about privacy risks may be more damaging to user trust than transparent disclosures about specific probabilities. Clear communication about data practices may therefore play a critical role in fostering confidence in AI-driven technologies.
The researchers also found that the type of data shared had only a modest influence on user behavior. Participants were somewhat more hesitant to share sensitive demographic data than preference data, such as risk tolerance or time preferences, but the difference was not pronounced enough to dominate overall decision-making.
Instead, the broader information environment surrounding privacy risks proved to be the primary driver of behavior.
Privacy labels and the economics of trust
In addition to measuring AI adoption, the study also explored whether consumers would be willing to pay for stronger privacy protections. In the final stage of the experiment, participants were given the opportunity to purchase a privacy label that would guarantee their personal data would not leak in future interactions with AI systems. This label functioned as a form of verification, assuring users that the system met a high standard of data security.
The results revealed that many participants were willing to spend real money to obtain this protection. On average, individuals were willing to pay amounts close to the expected economic cost associated with a potential data leak.
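A natural benchmark for this willingness to pay is the expected harm the label removes: the leak probability multiplied by the leak's economic cost. The sketch below illustrates that arithmetic with the same hypothetical numbers used above; the study's actual amounts are not reproduced here.

```python
# Sketch of pricing the privacy label: a risk-neutral user should pay up to
# the expected cost the label removes. Hypothetical numbers, not the study's.

LEAK_PROB = 0.25  # chance of a leak without the label
LEAK_COST = 6.0   # monetary harm (e.g., a higher negotiated price) if it leaks

fair_label_price = LEAK_PROB * LEAK_COST  # expected harm avoided
print(f"Benchmark willingness to pay: {fair_label_price:.2f}")  # 1.50
# The study reports average payments close to this expected-cost benchmark;
# payments above it would suggest additional, non-monetary value placed on privacy.
```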
This finding suggests that users may be prepared to financially support systems that provide credible guarantees about data security. In other words, privacy protection may function as a marketable feature rather than simply a regulatory requirement.
The study also uncovered an unexpected behavioral pattern: individuals who chose to use AI personalization were often just as willing as those who avoided the AI system entirely, and in some cases more willing, to pay for privacy protection.
This pattern challenges a common assumption that people who adopt personalized services are indifferent to privacy risks. Instead, the results indicate that many users simultaneously value both personalization and strong privacy protections.
Practically, this means that users may be willing to accept the benefits of AI personalization as long as they are provided with trustworthy mechanisms to manage privacy risks.
For designers and policymakers, transparent verification systems, such as certified privacy labels or third-party audits, could play a key role in strengthening user trust in AI systems.
Implications for AI design and regulation
The research highlights the importance of transparency in the design of AI interfaces. When users understand the probabilities associated with potential risks, they are better able to make informed decisions about whether to adopt AI-driven services. Providing clear explanations about data handling practices and risk probabilities could therefore help reduce user hesitation and encourage responsible adoption of AI technologies.
The results suggest that reducing ambiguity about privacy risks may be more effective than imposing blanket restrictions on data collection. Policies that encourage standardized disclosure of privacy risks, including probability estimates and security assurances, may be more successful at empowering consumers.
The study also highlights the potential role of market-based privacy solutions. Verification systems that certify data protection standards could provide users with clear signals about which platforms handle personal data responsibly. Such mechanisms could also create competitive incentives for companies to invest in stronger privacy protections, since consumers appear willing to pay for credible assurances about data security.
The research sheds light on the broader relationship between AI adoption and trust. With AI systems becoming more integrated into everyday digital interactions, the success of these technologies will depend not only on their technical capabilities but also on the confidence users place in them. Understanding how people perceive and respond to privacy risks will therefore remain essential for designing AI systems that are both effective and socially acceptable.
FIRST PUBLISHED IN: Devdiscourse

