The Alien Within the Image: How People Emotionally Engage with Generative AI Art

A study by the University of Turin and Politecnico di Milano found that people judge AI-generated images more on clarity than creativity, often experiencing them as strange or unsettling. These reactions prompt users to humanize or reinterpret the AI, revealing deeper concerns about its unfamiliar, “alien” nature.


CO-EDP, VisionRI | Updated: 18-04-2025 09:03 IST | Created: 18-04-2025 09:03 IST

In a world increasingly captivated by artificial intelligence, researchers from the University of Turin and the Politecnico di Milano have stepped away from debates about prompt engineering and artistic competition to ask a simpler, more human question: how do people feel when they encounter images created by generative AI? The study, published in the International Journal of Human-Computer Studies, looks beyond the surface gloss of images produced by Stable Diffusion to explore the emotional, cognitive, and interpretative responses of everyday people. Through in-depth interviews with 20 Italian participants of diverse professional and educational backgrounds, the researchers uncovered a rich landscape of reactions ranging from admiration and confusion to mistrust, awe, and even existential discomfort.

Beyond Beauty: What Makes a “Good” Gen-AI Image?

The participants were shown 20 images generated by Stable Diffusion based on abstract and concrete prompts like “Anger,” “Love,” “Wisdom,” and “Lake.” Each image was also accompanied by a heat map visualizing which parts of the image the AI model prioritized during generation. Interestingly, most participants did not view these creations through an artistic lens. Instead, their appraisals hinged on technical quality and subject fidelity. A “good” image, in their eyes, was one that visually aligned with the prompt, boasted balanced composition, or resembled a professionally rendered photograph. A “bad” image failed to clearly represent the subject or contained jarring or misplaced elements that disrupted coherence.

Creativity, as a value, barely registered in their feedback. Instead of celebrating visual experimentation, participants often interpreted deviations from expected representations as signs of failure or misunderstanding. Many overlooked any hint of stylistic innovation and focused primarily on clarity and recognizability. This pragmatic orientation suggests that in contexts outside of art galleries and design studios, the general public may judge Gen-AI outputs not as art but as communicative tools expected to obey rather than challenge human intentions.

The Strangeness Beneath the Surface

Several participants described a peculiar feeling that many of the AI-generated images provoked, something they called “strangeness.” This sensation, which often emerged even before participants knew the images were artificially created, was sparked by subtle visual oddities: a face that looked slightly off, a background that felt artificial, or lighting that made a scene appear dreamlike and disconnected from reality. While at first glance the images appeared normal, closer inspection revealed inconsistencies that triggered unease.

Once informed that the images were created by AI, this unease often deepened. Participants began projecting their discomfort onto the Gen-AI itself, describing it as “alien,” “unfamiliar,” or “irrational.” The AI’s visual decisions, highlighted by the heat maps, did not always make sense to human viewers. For example, when an image representing “Justice” emphasized the base of a scale rather than its iconic arms, participants struggled to rationalize the model’s logic. This gap between human expectation and machine reasoning opened a psychological void that many participants felt compelled to fill.

Coping with the Alien: Humanizing the Machine

To resolve their discomfort, participants subconsciously adopted what the researchers called “relational strategies.” These included devaluing the AI by highlighting its limitations or, conversely, elevating it to the status of a superintelligent being. Some participants asserted their own superiority, emphasizing emotional intelligence, real-world experience, or spiritual depth. Others humbled themselves, admitting they lacked the knowledge or processing power to understand the AI’s visual choices. In both cases, the goal was the same: to restore a sense of control and coherence in the face of something deeply unfamiliar.

A third group of participants chose to rework their original interpretations of the images to match the AI’s logic. If the heat map highlighted an unexpected region of an image, they would reinterpret the subject to accommodate this information, even if it contradicted their initial understanding. These forms of sense-making highlight how viewers are not passive recipients of AI-generated media; they actively construct narratives around what they see, especially when the images defy conventional logic.

A Mirror to Society’s Imaginary

While much of the emotional response to Gen-AI images centered around strangeness or unease, many participants also recognized the social and cultural value embedded in these creations. Those with design or technical backgrounds appreciated the utility of Gen-AI for tasks like rapid prototyping, brainstorming, or visualizing abstract ideas. Others, particularly from non-technical backgrounds, were struck by how the images reflected existing societal biases and cultural norms.

For instance, the image of a “House” generated by the AI was consistently interpreted as a stereotypical American suburban home complete with lawn and picket fence, prompting critiques about Western bias in training data. Another participant noted that the image for “Night” reflected a capitalist work ethic, showing a city still bustling with productivity rather than rest or quiet. These reactions suggest that Gen-AI can unintentionally serve as a mirror for examining our collective social constructs, revealing the narrow lens through which data, and by extension machines, view the world.

Rethinking Interaction: What This Means for Design

The researchers argue that these findings open new avenues for human-computer interaction (HCI) design. One proposal is to treat “humanness” itself as a design material. Just as ChatGPT can be imbued with personalities to make it feel more relatable, future Gen-AI image tools might allow users to adjust the perceived “personality” or communicative style of the model. By giving users the option to humanize, dehumanize, or superhumanize the AI, designers could help mitigate feelings of unease and promote more comfortable engagement.

Furthermore, the study suggests that Gen-AI images can be powerful tools for critical design. Their strangeness and stereotypicality can be used to provoke reflection and challenge assumptions in participatory design sessions, educational workshops, or social research. These images are not just representations; they are provocations inviting users to question what they know, how they see, and what kind of world they imagine when prompted with a single word.

In short, the study reveals that while Gen-AI can mimic visual sophistication, its impact goes far deeper. It changes how we interpret images, how we relate to machines, and how we see ourselves in the mirror they unintentionally hold up.

FIRST PUBLISHED IN: Devdiscourse