Global deepfake surge exposes gaps in law, platforms, and AI governance

A new academic analysis argues that sexualized deepfakes are not simply a problem of fake content or misinformation but a continuation of long-standing gendered power structures embedded in technology. Authored by Dana Mahr of the Karlsruhe Institute of Technology, the research draws on feminist theory, science and technology studies, and legal scholarship to position deepfake pornography as a form of image-based sexual abuse amplified by artificial intelligence.

The study, titled "Sexualized deepfakes as a socio-technical continuation of gendered power," was published in AI & Society and challenges dominant narratives that frame deepfakes primarily as risks to truth and public trust. Instead, it identifies the key harm as the non-consensual creation and circulation of sexualized representations, disproportionately targeting women and reinforcing structural inequalities.

Deepfakes shift from misinformation threat to gendered abuse infrastructure

In recent years, global attention to deepfake technology has intensified, particularly following incidents in which AI systems were used to generate explicit images of women without their consent. While early debates around deepfakes focused on political manipulation and fake news, the study highlights that the overwhelming majority of such content is pornographic, with women as its primary targets.

The paper argues that treating deepfakes as a deception problem obscures the real issue, which lies in the coercive use of visual representation. Even when viewers recognize that the content is fabricated, the damage persists. The harm is not dependent on belief but on the act of exposure, humiliation, and loss of control over one's identity.

The study introduces the concept of "visual coercion," describing how deepfakes force individuals into sexual scenarios they never consented to, effectively weaponizing their likeness. This transforms deepfakes into tools of symbolic violence, where power is exerted through images rather than physical force.

Drawing on feminist media theory, the research situates this phenomenon within a longer history of the "male gaze," in which women are positioned as objects for visual consumption. Deepfake technology intensifies this dynamic by allowing perpetrators to digitally manipulate and circulate images at scale, removing any need for real-world interaction or consent.

Perpetrators are predominantly male, while victims are overwhelmingly female, reflecting broader patterns of gender-based violence. This asymmetry is not incidental but rooted in cultural norms and technological design choices that prioritize certain uses and users.

The study rejects the idea that deepfakes are neutral tools misused by individuals, framing them instead as socio-technical systems shaped by platform architectures, data practices, and societal biases. This perspective shifts responsibility away from individual actors alone and toward the broader ecosystem that enables such abuse.

Historical continuity reveals deepfakes as digital extension of visual violence

From manipulated photographs to revenge porn, the use of images to shame, control, and degrade women has deep roots. Deepfakes represent an evolution of this practice, lowering the barrier to entry and expanding its reach.

Unlike earlier forms of abuse that required access to private images, deepfake technology allows perpetrators to generate explicit content using publicly available photos. This creates a pervasive sense of vulnerability, where any image can be repurposed without consent.

The study emphasizes that this shift fundamentally alters the nature of risk. Victimization no longer depends on personal relationships or data breaches but can occur at scale, targeting public figures and private individuals alike. Women in visible roles, including journalists, politicians, and activists, are particularly vulnerable, as deepfakes are used to undermine their credibility and silence their participation.

This pattern reflects a broader strategy of gendered harassment. By attaching explicit imagery to women in positions of authority, perpetrators draw on entrenched stereotypes that equate female sexuality with moral weakness. The result is not only individual harm but a chilling effect on women's presence in public and digital spaces.

The study also examines how platform infrastructures amplify these harms. Social media systems designed for virality and engagement facilitate the rapid spread of deepfake content, while anonymity shields perpetrators from accountability. Even when platforms introduce bans, enforcement remains inconsistent, and content often migrates to alternative sites.

Technical detection tools, while improving, struggle to address the core issue. The study notes that identifying whether content is fake does not resolve questions of consent or harm. In many cases, the knowledge that an image is fabricated does little to mitigate its impact on the victim's reputation and well-being.

The persistence of digital content further compounds the problem. Once circulated, deepfakes can reappear across platforms, creating a continuous cycle of exposure and retraumatization. This permanence transforms what might have been a one-time incident into an ongoing threat.

Governance gaps expose limits of legal, technical, and social responses

The study finds that current governance approaches remain fragmented and insufficient. Legal frameworks have begun to address non-consensual deepfake pornography, but enforcement challenges persist, particularly across jurisdictions and anonymous platforms.

Consent-based laws, while important, often treat deepfake abuse as an individual violation rather than a systemic issue. Victims face significant barriers in pursuing legal action, including identifying perpetrators and navigating complex legal processes. This limits the effectiveness of legal remedies and places a disproportionate burden on those affected.

Resilience-focused strategies, such as digital literacy campaigns and victim support services, provide necessary assistance but fail to address root causes. Encouraging individuals to protect themselves or reduce their online presence risks reinforcing the very inequalities that enable abuse.

Technical solutions, particularly detection and moderation tools, also face inherent limitations. The study draws attention to the ongoing arms race between content generation and detection technologies, as well as the difficulty of assessing consent through automated systems.

These shortcomings point to the need for a more integrated approach. The research proposes a consent-centered socio-technical framework that combines platform accountability, legal reform, technical safeguards, and cultural change.

Under this model, platforms would bear greater responsibility for preventing and removing harmful content, supported by stronger regulatory frameworks. Technical measures would shift toward consent verification and provenance tracking rather than detection alone. Legal systems would expand protections to include AI-generated representations, recognizing violations of autonomy and identity even in fabricated content.
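
To make the provenance idea concrete, the sketch below shows, in Python, how a platform might bind consent metadata to an image so that tampering or stripped consent becomes detectable later. This is a minimal illustration, not the study's proposal: the function names, manifest fields, and shared-key HMAC signing are assumptions made for the example; real provenance systems such as C2PA use certificate-based signatures rather than a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the image source (illustrative only;
# production systems would use per-publisher certificates, not a shared key).
SIGNING_KEY = b"publisher-secret-key"

def attach_provenance(image_bytes: bytes, consent: bool, creator: str) -> dict:
    """Bundle an image hash with consent metadata and sign the result."""
    manifest = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "creator": creator,
        "subject_consent": consent,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image matches the manifest and the signature is intact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )

# Usage: an unmodified image with a valid manifest verifies; any edit breaks it.
img = b"...raw image bytes..."
m = attach_provenance(img, consent=True, creator="example-studio")
assert verify_provenance(img, m)             # intact image passes
assert not verify_provenance(img + b"x", m)  # tampered image fails
```

The design point this illustrates is the shift the study calls for: instead of asking after the fact "is this image fake?", the platform can ask at upload time "does this image carry verifiable consent?", and treat anything that fails the check accordingly.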

Education and cultural interventions would also play a critical role, focusing on ethical norms around consent and image use rather than solely on identifying fake media. The goal is to address the underlying power structures that drive abuse, rather than merely its symptoms.

First published in: Devdiscourse