Americans more likely than Japanese to exploit cooperative AI

Researchers have uncovered sharp cultural differences in how people interact with cooperative artificial intelligence agents. The study, published in Scientific Reports, finds that participants in the United States were significantly more likely to exploit AI agents for selfish gain compared to participants in Japan, despite both groups holding similar expectations of AI’s cooperative behavior.
The large-scale cross-cultural experiment, led by Jurgis Karpus and colleagues from institutions in Germany, Japan, and the UK, used two classic economic games - the Trust Game and the Prisoner’s Dilemma - to test whether people would behave differently when interacting with humans versus artificial agents. A total of 600 Japanese participants were recruited and compared with a matched sample of 604 Americans from an earlier study.
In both countries, participants were randomly assigned to interact either with another human or with an AI agent programmed to choose with probabilities matching those of real human players in the same role. The AI agent was described to participants as capable of reasoning about outcomes much as humans do, and all games were played for real monetary stakes to mimic real-world consequences.
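As a rough illustration of that probability-matching design, here is a minimal sketch in Python; the cooperation rate below is an illustrative placeholder, not a parameter reported in the study.

```python
import random

def probability_matching_agent(human_cooperation_rate: float = 0.75) -> str:
    """Sample the AI agent's move from the observed distribution of human
    choices in the same role (probability matching). The default rate of
    0.75 is a placeholder, not the study's actual parameter."""
    return "cooperate" if random.random() < human_cooperation_rate else "defect"

print(probability_matching_agent())
```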
The key finding: Americans showed strong evidence of "algorithm exploitation" - that is, they were significantly more likely to take advantage of cooperative AI agents than cooperative human partners. In the Trust Game, only 34% of American participants reciprocated cooperation when playing with an AI partner, compared to 75% when playing with a human. In contrast, Japanese participants cooperated at comparable rates with both human (66%) and AI (56%) partners, a difference that was not statistically significant.
In the Prisoner’s Dilemma, American cooperation with AI was again lower (36%) than with human partners (49%). But Japanese participants demonstrated nearly equal cooperation with both types of partners (42% with AI, 41% with humans), suggesting they did not differentiate between humans and machines in the same way Americans did.
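To make the incentive structure concrete, the sketch below encodes illustrative payoffs for both games; the point values are assumptions chosen to show why exploiting a cooperative partner is tempting, not the stakes actually used in the study.

```python
# Prisoner's Dilemma: defecting against a cooperator pays best for the
# defector, yet mutual cooperation beats mutual defection.
PD_PAYOFFS = {  # (my_move, partner_move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def trust_game(first_trusts: bool, second_reciprocates: bool) -> tuple[int, int]:
    """Trust Game: if the first mover trusts, the stake grows and the second
    mover either shares the surplus or keeps it."""
    if not first_trusts:
        return (2, 2)   # no trust: both keep a small safe payoff
    if second_reciprocates:
        return (4, 4)   # trust repaid: both end up better off
    return (0, 8)       # trust exploited: the second mover gains the most

print(trust_game(True, True))    # (4, 4) - mutual benefit
print(trust_game(True, False))   # (0, 8) - algorithm exploitation in a nutshell
```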
The study, titled “Human cooperation with artificial agents varies across countries,” also investigated participants’ emotional responses after making decisions. Americans who exploited cooperative AI agents reported feeling significantly less guilt and disappointment than when they exploited humans. In Japan, however, participants felt similarly guilty about exploiting both humans and machines - and, in fact, reported more guilt, anger, and disappointment when exploiting AI partners than their American counterparts did.
The emotional data suggests that Japanese individuals may imbue AI agents with moral status, consistent with prior cultural research highlighting animistic beliefs and a long-standing affinity for robots in Japanese society. Previous surveys have shown that people in Japan are more likely to attribute emotional capacity to robots and to accept them as moral patients - entities to whom moral obligations are owed.
By contrast, American participants tended to draw a sharper boundary between human and machine, displaying more lenient attitudes toward exploiting AI. These attitudes likely stem from distinct cultural perceptions of agency, responsibility, and anthropomorphism. In the U.S., artificial agents are more frequently viewed as tools, not peers.
This cultural divide has practical implications for the deployment of AI systems, particularly in public-facing roles. For example, the widespread use of autonomous vehicles, AI-powered customer service bots, or delivery drones could be undermined if people routinely exploit them rather than cooperate with them. The researchers suggest that interactive AI systems may enjoy faster and safer adoption in cultures where users treat them with similar ethical regard as human counterparts.
The study further tested whether the cultural differences stemmed from differing expectations of AI behavior. Participants in both countries expected AI agents to be about as cooperative as human partners: in the Trust Game, Japanese participants predicted 70% cooperation from AI agents versus 82% from humans, while Americans predicted 83% from AI and 80% from humans. Expectations therefore cannot explain the behavioral gap - the emotional cost people attached to exploiting a cooperative partner appeared to matter more.
The research team conducted rigorous statistical analyses, including chi-square tests, binomial logistic regressions, and Bayesian inference to validate their findings. They also controlled for variables such as age, gender, and familiarity with game theory, finding no consistent effects beyond national context.
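For a sense of what that pipeline looks like in practice, here is a hedged sketch on hypothetical data: a chi-square test using the reported US Trust Game percentages (cell counts assumed at 100 participants per condition) and a binomial logistic regression with a partner-type by country interaction fit to synthetic choices. The model specification is an assumption for illustration, not the authors’ code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# 1) Chi-square test on the US Trust Game rates (34% vs 75%); the cell
#    counts assume 100 participants per condition, which is hypothetical.
table = np.array([[34, 66],    # AI partner: reciprocated / exploited
                  [75, 25]])   # human partner
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.2g}")

# 2) Binomial logistic regression with a partner x country interaction,
#    fit to synthetic data that loosely echoes the reported rates.
rng = np.random.default_rng(0)
n = 1200
df = pd.DataFrame({
    "partner": rng.choice(["ai", "human"], size=n),
    "country": rng.choice(["US", "JP"], size=n),
})
p_coop = np.select(
    [
        (df["country"] == "US") & (df["partner"] == "ai"),
        (df["country"] == "US") & (df["partner"] == "human"),
        (df["country"] == "JP") & (df["partner"] == "ai"),
    ],
    [0.34, 0.75, 0.56],
    default=0.66,  # JP participants paired with a human partner
)
df["cooperated"] = rng.binomial(1, p_coop)

model = smf.logit("cooperated ~ partner * country", data=df).fit(disp=False)
print(model.params)  # the interaction term captures the cultural gap
```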
Although the exploratory nature of some of the study’s analyses limits definitive causal claims, the pattern is clear: Americans are more likely than Japanese participants to exploit cooperative AI agents even when mutual cooperation would benefit both parties. Japanese participants, by contrast, displayed stronger emotional aversion to such exploitation.
The researchers urge further global studies to examine how different societies perceive and interact with artificial agents. The term “culture” in this study was operationalized as country of residence, but the authors acknowledge it encompasses broader factors like religious beliefs, exposure to technology, and media representation of AI.
First published in: Devdiscourse