Who is the author when AI writes? Public still unsure


CO-EDP, VisionRI | Updated: 05-02-2026 19:01 IST | Created: 05-02-2026 19:01 IST

Artificial intelligence (AI) systems now produce news articles, academic summaries, marketing copy, and creative prose at scale, yet uncertainty persists over a foundational question: who, or what, is the author of machine-generated text? A new study published in AI & Society finds that despite dramatic advances in generative AI over the past seven years, public understanding of computer authorship has shifted only marginally, remaining fragmented, hesitant, and conceptually unstable.

The study, titled Revisiting Computer Authorship: A Longitudinal Perspective, compares public perceptions of authorship from 2017–2018 with a follow-up survey conducted in late 2024 and early 2025. By replicating an earlier experimental design, the authors provide a systematic comparison of how attitudes toward AI-generated text evolved before and after the rise of large language models.

Public views of AI authorship remain conflicted despite generative AI’s rise

While generative AI has become far more visible and capable since the first survey wave, participants in the later study did not converge on a clearer or more settled definition of authorship when confronted with computer-generated text. Instead, responses continued to reflect ambiguity over whether authorship can be meaningfully attributed at all in such cases.

Participants were asked to assess authorship under four increasingly detailed conditions, beginning with a simple byline and progressing through disclosures about computer generation, human involvement, and organizational backing. Across both time periods, no single attribution model consistently dominated. Human authors, AI systems, development teams, and collective entities were all selected at various stages, and a significant portion of respondents resisted attribution entirely.

However, subtle shifts did emerge. In the 2024–2025 data, fewer respondents rejected authorship outright at the initial stage, suggesting a greater willingness to engage with the question rather than dismiss it as unanswerable. This change indicates growing familiarity with the idea that texts may be produced through hybrid or nontraditional means, even if the conceptual tools for assigning authorship remain underdeveloped.

Respondents simultaneously showed increased readiness to attribute authorship directly to the system itself once AI involvement was disclosed. This shift appears linked to broader public awareness of generative AI systems, particularly large language models, which are increasingly perceived as autonomous producers rather than simple tools. Yet the authors caution that these perceptions often rest on incorrect assumptions about how such systems function, including beliefs that AI systems “understand” or intentionally compose text.

Importantly, the study demonstrates that improved technical capability does not automatically lead to conceptual clarity. Even as AI systems generate more fluent and humanlike language, participants remain uncertain about how authorship should be defined, distributed, or assigned in machine-mediated contexts.

Authorship remains tied to responsibility, meaning, and social relations

Beyond attribution, the researchers analyse what respondents believe authorship represents. The study finds that authorship is not treated as a neutral label for text production but as a socially and morally loaded concept tied to responsibility, creativity, ownership, and accountability.

Across both survey waves, respondents associated authorship with human intention, lived experience, and communicative purpose. Writing was frequently understood as a relational act that connects an author and a reader across time, rather than a mechanical process of output generation. This framing complicates attempts to treat AI systems as authors in a straightforward sense.

The longitudinal comparison reveals a shift in how these associations are weighted. In the more recent survey, respondents were less likely to define authorship solely in terms of producing words on a page. Instead, they placed greater emphasis on connotations such as responsibility for content, ethical accountability, and participation in broader social systems. This suggests that while people may accept that machines can generate text, they remain reluctant to grant them the social standing traditionally afforded to authors.

This tension becomes especially pronounced when respondents consider questions of responsibility for harm. Participants implicitly linked authorship to liability, particularly in contexts such as misinformation, defamation, or misleading news content. The reluctance to assign authorship to AI systems alone reflects concern about where accountability should reside when automated systems produce consequential texts.

The study also highlights discomfort with attributing authorship to abstract entities such as corporations or development teams, even when respondents recognize their role in shaping AI systems. This discomfort underscores a mismatch between existing legal and institutional frameworks and public intuitions about authorship, which remain anchored in individual human agency.

Why unresolved authorship perceptions matter for media, law, and governance

The authors argue that persistent ambiguity around computer authorship carries real-world consequences as AI-generated content becomes increasingly embedded in media ecosystems. Public trust in information, perceptions of legitimacy, and expectations of responsibility are all shaped by who is seen as standing behind a text.

In journalism and public communication, unclear authorship complicates norms around transparency and credibility. Readers may struggle to assess the authority of AI-generated news or commentary if authorship labels do not align with their expectations about human accountability. The study suggests that simply disclosing AI involvement does not resolve these concerns and may, in some cases, increase uncertainty.

In legal contexts, the findings intersect with ongoing debates about copyright, intellectual property, and liability. If authorship is culturally associated with moral and economic rights, then expanding or redefining authorship to include AI systems risks destabilizing established legal concepts. At the same time, refusing to acknowledge AI-generated contributions raises questions about ownership and compensation in creative and informational labor.

Public perceptions of authorship influence whether people view AI-generated text as collaborative, deceptive, or legitimate. Design choices that anthropomorphize AI or obscure human involvement may reinforce misconceptions about autonomy and intention, further complicating attribution.

Above all, the study shows that perceptions evolve slowly, even amid rapid technological change. The seven-year comparison reveals that while surface familiarity with AI has increased, deeper cultural understandings of authorship remain resistant to transformation. This lag suggests that policy responses, disclosure practices, and ethical guidelines must account for enduring public intuitions rather than assuming rapid conceptual adaptation.

  • FIRST PUBLISHED IN:
  • Devdiscourse