AI-generated content sparks global debate over authorship rights
The accelerating capabilities of artificial intelligence are forcing courts, lawmakers, and copyright offices worldwide to confront one of the most contentious questions in intellectual property law: can works created by AI be protected under copyright, and if so, who should be recognized as the author? A new comparative legal analysis by Anthi Gaidartzi and Irini Stamatoudi provides a comprehensive examination of how jurisdictions are answering, and struggling with, these questions.
Published in Laws, the study “Authorship and Ownership Issues Raised by AI-Generated Works: A Comparative Analysis” reviews statutory frameworks, case law, and administrative practices from the United States, European Union, United Kingdom, Australia, China, and beyond. It assesses how existing copyright rules apply to AI-assisted and fully autonomous AI outputs, examines the legal treatment of text and data mining for AI training, and outlines potential pathways toward a harmonized global approach.
Who is the author when AI creates?
In most systems, authorship is tied to a human’s intellectual contribution. The United States and Australia take a strict stance: purely autonomous AI-generated works are ineligible for copyright protection. U.S. courts and the Copyright Office have consistently refused registrations where no human authorship exists, as in the Thaler v Perlmutter decision, which reinforced that originality must stem from human creative choices. Australia’s case law follows similar reasoning, requiring demonstrable human involvement.
The European Union’s originality standard, which requires a work to be the author’s own intellectual creation, likewise excludes fully autonomous works. However, it leaves room for AI-assisted works if the human operator’s input shapes the expressive elements. This places importance on the extent and nature of human intervention, from conceptual planning to technical adjustments.
The United Kingdom’s Copyright, Designs and Patents Act introduces a distinctive statutory provision: for computer-generated works without a human author, the author is deemed to be the person who made the necessary arrangements for creation. While this approach provides clarity in law, it raises conceptual challenges over whether arranging for a machine to create is equivalent to authorship.
China’s position has been more fluid. Early decisions produced conflicting outcomes, with some courts denying protection to AI outputs and others recognizing copyright where human input was substantial. The 2023 Li v Liu decision by the Beijing Internet Court marked a turning point by affirming copyright in an AI-generated image, on the basis that the human operator’s prompting and parameter settings demonstrated sufficient intellectual engagement.
How should AI training data be regulated?
The legality of AI’s creative process hinges on the rules governing text and data mining (TDM), the use of large datasets, often drawn from copyrighted works, to train AI systems. The EU’s Copyright in the Digital Single Market (CDSM) Directive sets a structured framework. Article 3 allows TDM for research purposes without rightholder consent, while Article 4 permits TDM for any purpose provided rightholders have not opted out. This creates a clear balance between enabling innovation and preserving rights-holder control.
The United States, by contrast, relies on the fair use doctrine. This more flexible but less predictable standard has historically favored transformative uses, including certain forms of large-scale data analysis. Whether AI training qualifies as fair use remains contested and will likely hinge on future case law.
Other jurisdictions mirror elements of these models or operate without explicit TDM provisions, leaving uncertainty for AI developers and rights holders. Case law is beginning to address these gaps. In Germany, the Kneschke v LAION ruling confirmed that non-profit dataset creation could fall within the Article 3 exception, offering legal breathing room for research-based AI training.
The differences between TDM regimes are not merely academic. They directly affect the cost, speed, and legal risk of developing AI systems. Jurisdictions with broad, clear TDM rules can foster innovation more predictably, while restrictive or unclear rules risk pushing AI development into more permissive legal environments.
Toward a global standard for AI copyright
The current patchwork of rules is unsustainable in a world where AI-generated content flows across borders. Without greater harmonization, creators, AI developers, and rights holders face persistent uncertainty over both ownership and lawful training practices.
The study identifies several priorities for international policy:
- Clarifying the threshold of human input needed for copyright protection of AI-assisted works, ensuring that creative human contributions are recognized while excluding purely machine-generated outputs from protection.
- Establishing transparent rules for AI training data, whether through TDM exceptions, fair use expansions, or licensing systems, to balance innovation with respect for intellectual property rights.
- Considering sui generis protections or alternative regimes for AI-generated works, which could offer limited rights without equating machine outputs to human authorship.
- Coordinating across jurisdictions to avoid conflicting standards that undermine enforcement and lead to forum shopping.
The authors note that legislative clarity will also help courts and copyright offices, which have increasingly been forced to make policy-level determinations in the absence of statutory guidance. Recent cases such as Zarya of the Dawn in the U.S., which granted copyright only to the human-written portions of a work containing AI-generated imagery, illustrate how fragmented approaches can leave creators without clear expectations.
For industry stakeholders, the implications are immediate. Businesses integrating AI into creative workflows must track evolving national rules to avoid inadvertently losing rights or infringing others. For policymakers, the challenge is to craft frameworks that encourage AI innovation while upholding the fundamental principle of copyright: rewarding and protecting human creativity.
First published in: Devdiscourse

