Generative AI forces rethink of authorship, ownership and creative rights
New research published in the journal Encyclopedia shows that current legal frameworks in major economies are ill-prepared for a world where machines can reproduce not only the form of human expression, but also its meaning, style, and creative identity. The absence of clear rules on authorship, originality, and training data is rapidly becoming a systemic risk to cultural production and creative labor.
The study, titled "AI, Authorship, Copyright, and Human Originality," delivers a comprehensive comparative analysis of copyright law in the United Kingdom, the United States, and Germany, assessing how existing doctrines respond to generative AI and proposing a draft global framework to restore legal clarity and protect human originality.
Based on doctrinal analysis, an extensive synthesis of academic literature, and insights from interviews with legal scholars, AI developers, industry stakeholders, and creators, the study finds that copyright systems across jurisdictions are fragmented, inconsistent, and increasingly detached from technological reality.
Why existing copyright law is failing in the age of generative AI
Copyright law was designed for a world in which creativity was unambiguously human. Generative AI disrupts that foundation by producing outputs that closely resemble human-authored works without possessing authorship in any meaningful legal or moral sense. This tension has exposed deep doctrinal gaps that courts and legislators have yet to resolve.
In all three jurisdictions examined, human authorship remains a formal requirement for copyright protection. Yet the rules governing AI-assisted creation are unclear and uneven. In the United States, courts and regulators have consistently rejected copyright protection for works generated entirely by machines, reinforcing a strict human authorship baseline. However, this approach offers little guidance on hybrid works where human input and AI generation are intertwined. As a result, creators and publishers face uncertainty over ownership, attribution, and liability.
The United Kingdom adopts a more pragmatic but controversial approach by recognizing computer-generated works through statutory provisions that assign authorship to the person making the necessary arrangements. The study argues that this framework bypasses the originality requirement that underpins international copyright norms, creating tension with global standards and failing to address the deeper issue of semantic imitation. While administratively convenient, this model risks undermining the concept of human creativity as the source of copyright protection.
Germany maintains a stronger human-centered doctrine, anchoring copyright in personal intellectual creation. Moral rights protections are more robust, including posthumous safeguards. Yet even this system struggles to address AI-driven imitation of style, voice, and creative identity, which often falls outside traditional infringement tests. The result is a legal landscape where AI can replicate creative expression without triggering clear legal consequences.
Across all three systems, the study finds no coherent doctrine capable of addressing large-scale AI training practices. The question of whether scraping copyrighted works for model training constitutes infringement, fair use, or permissible data mining remains unresolved. Courts have treated training as a technical process rather than a form of semantic learning, allowing AI systems to absorb and reproduce creative identities at industrial scale. This disconnect has left creators without effective remedies while enabling widespread unlicensed exploitation.
The erosion of human originality and moral rights
The study identifies human originality as the most vulnerable element of the current copyright system. Generative AI challenges originality not by copying works verbatim, but by reconstructing creative styles, voices, and semantic patterns in ways that are legally elusive yet culturally damaging.
Traditional copyright doctrine focuses on expression rather than style. As a result, AI-generated outputs that imitate an artist’s distinctive voice or aesthetic often fall outside infringement thresholds, even when they compete directly with human creators in the market. The study shows that this gap has profound economic and ethical consequences, enabling market displacement while eroding the cultural value of human creativity.
Moral rights, intended to protect the personal and reputational bond between creators and their works, are particularly ill-equipped to address AI-driven imitation. In common law systems, moral rights are weak and narrowly defined. In civil law systems, they are stronger but remain rooted in analog concepts that do not translate easily to algorithmic reproduction. The study highlights the vulnerability of posthumous creators, whose voices and styles can be replicated indefinitely without consent or compensation.
The research introduces the MATH-COPE framework to map these failures systematically. The framework examines four core legal themes: moral rights, authorship and originality, training data and copyright, and human originality. These are analyzed across four contextual dimensions: commercialization, organizational practice, policy and governance, and ethical technology. This matrix reveals how doctrinal weaknesses are reinforced by market incentives, platform practices, fragmented regulation, and opaque AI development processes.
Stakeholder interviews reinforce these findings. Legal experts warn that recognizing AI as an author would hollow out copyright’s human-centered foundation. Creators report growing displacement and loss of identity as AI-generated content mimics their work without attribution or remuneration. AI developers acknowledge the risks of semantic replication but emphasize the lack of clear standards and liability frameworks. Industry stakeholders highlight commercial uncertainty and the need for harmonized licensing mechanisms to prevent competitive distortion.
Across all groups, a shared position emerges: AI should remain a tool, not a co-author. Yet without legal reform, that principle remains aspirational rather than enforceable.
A global framework to restore legal certainty and creative trust
To address these systemic failures, the study proposes a draft global copyright framework designed to recalibrate copyright law for the age of generative AI. Rather than abandoning existing international conventions, the framework builds on foundational principles while introducing AI-specific rules that can be adopted multilaterally.
At its core is a revised definition of the protected work, grounded in human semantic originality rather than mere syntactic arrangement. This definition explicitly excludes AI-only outputs from authorship while preserving protection for AI-assisted works where meaningful human creative control can be demonstrated. By anchoring copyright in human meaning-making, the framework seeks to close the loopholes that currently allow AI-generated imitation to flourish unchecked.
The proposal introduces structured licensing regimes for AI training data, recognizing that large-scale model training is neither incidental nor neutral. By requiring authorization and remuneration for the use of copyrighted works in training datasets, the framework aims to align innovation incentives with fair compensation. Both input-based and output-based remuneration models are included to reflect the varied ways AI systems derive value from creative works.
Moral rights are strengthened and modernized to cover AI-mediated imitation of style, voice, and likeness. Posthumous protections are explicitly extended, addressing one of the most glaring gaps in current law. Transparency obligations are tiered to balance accountability with trade secret protection, enabling regulators to audit AI systems without undermining legitimate commercial interests.
The framework also focuses on enforceability. Provenance tracking, watermarking, and metadata standards are integrated into platform responsibilities, ensuring that compliance is verifiable rather than symbolic. Cross-border dispute resolution mechanisms and harmonized enforcement rules are designed to reduce forum shopping and uneven application across jurisdictions.
Importantly, the study stresses that doctrinal reform alone is insufficient. Legal definitions must be supported by operational infrastructure, including registries for training permissions, standardized licensing channels, and proportionate compliance pathways for small and medium-sized enterprises. Without these practical rails, even well-designed laws risk becoming ineffective.
First published in: Devdiscourse

