Democratizing AI: Why the future of artificial intelligence belongs to everyone

Artificial Intelligence (AI) is rapidly shaping the future of society, influencing everything from healthcare and finance to education and urban planning. Yet the power to design, govern, and regulate AI systems remains concentrated in the hands of a few corporations and government entities, leaving individuals and communities as passive recipients of their outcomes. This imbalance raises critical questions about fairness, accountability, and inclusivity in AI development.
A new position paper, “The Right to AI” by Rashid Mushkani, Hugo Berard, Allison Cohen, and Shin Koseki, posted on arXiv, argues that AI should be recognized as societal infrastructure rather than a private or corporate-owned entity. Inspired by Henri Lefebvre’s concept of the “Right to the City”, the paper proposes that individuals and communities should have a fundamental right to participate in the development and governance of the AI systems that shape their lives.
The authors highlight the growing concentration of AI-related decision-making within elite circles, emphasizing the risks of algorithmic bias, opaque governance, and a lack of public agency. They introduce a four-tier model of participation, arguing that grassroots involvement can help mitigate biased outcomes, improve transparency, and ensure that AI serves diverse societal needs rather than reinforcing existing inequalities. The paper proposes a transformative shift, from a top-down governance model to a participatory approach in which the public plays an active role in AI oversight, data governance, and policy formulation.
Why AI should be a public good
AI is no longer just a product of research labs and private enterprises; it is deeply embedded in critical public services such as law enforcement, welfare distribution, job recruitment, and healthcare diagnostics. The study argues that treating AI as a private commodity rather than a collective resource leads to inequalities in how its benefits and harms are distributed. When AI is designed and deployed without inclusive participation, it risks amplifying biases, violating privacy, and reducing individual autonomy.
To illustrate the need for a Right to AI, the paper examines the following key challenges:
- Algorithmic Bias: AI systems trained on incomplete or biased data can reinforce discrimination in hiring, lending, and law enforcement; a simple disparity check of the kind an audit might run is sketched after this list.
- Opaque Decision-Making: Most AI-driven decisions are made through black-box models with little to no public accountability.
- Lack of Public Oversight: The governance of AI is largely controlled by corporations and policymakers, with minimal input from affected communities.
- Data Ownership and Ethics: The vast amounts of personal and community data used to train AI models are often extracted without consent, raising ethical concerns.
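To make the bias concern concrete, here is a purely illustrative sketch; the data, group labels, and choice of a demographic-parity metric are assumptions of this example, not taken from the paper. The snippet computes the gap in approval rates between two groups, one simple disparity measure that an independent audit of a hiring or lending system might report.

```python
# Hypothetical audit sketch: measure the demographic-parity gap of a
# decision system's recorded outcomes. All data and group names are invented.
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved  # True counts as 1, False as 0
    return {g: approvals[g] / totals[g] for g in totals}

# Toy sample of audited decisions: (group, was the applicant approved?)
audit_sample = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(audit_sample)
gap = max(rates.values()) - min(rates.values())
print(f"approval rates: {rates}")
print(f"demographic-parity gap: {gap:.2f}")  # a large gap flags possible disparate impact
```

A check like this is deliberately simple; real audits would use larger samples and multiple fairness metrics. But even this level of reporting is impossible without the transparency and access the paper calls for.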
By positioning AI as societal infrastructure, comparable to utilities such as water and electricity, the paper asserts that citizens should have an active role in shaping the rules and policies that govern it.
The right to AI: A four-tier model for participation
To move beyond tokenistic AI governance, the authors introduce a four-tier framework that categorizes different levels of public participation in AI decision-making:
Consumer-Based (Minimal Participation)
At the lowest level, individuals act as passive consumers of AI-driven services, with no influence over how these systems operate. Their only form of engagement may be user feedback, which rarely carries any weight in decision-making.
Private Organization-Led (Limited Participation)
Here, AI development remains largely corporate-controlled, with minimal transparency in data collection, system training, and governance. While companies may solicit public input through ethical guidelines or advisory panels, decision-making power remains with private entities.
Government-Controlled (Regulated Participation)
In this tier, state-led AI governance takes precedence, ensuring compliance with privacy laws, ethical standards, and anti-discrimination policies. While regulation increases transparency and accountability, it may also centralize power in government agencies, limiting direct community involvement in AI design and oversight.
Citizen-Controlled (Full Participation and Oversight)
At the highest level of participation, citizens actively shape AI policies, data governance, and deployment strategies through local AI councils, public audits, and cooperative data management structures. This model empowers communities to challenge biased AI systems, demand transparency, and co-create ethical AI frameworks.
The study argues that moving towards citizen-controlled AI governance can democratize technological decision-making, ensuring that AI reflects the needs and values of diverse communities rather than serving only corporate or state interests.
Lessons from participatory AI initiatives
To illustrate the potential for inclusive AI governance, the study analyzes nine real-world case studies where participatory AI initiatives have been successfully implemented. These examples highlight how grassroots engagement can lead to fairer, more accountable AI systems.
For instance, the Māori Data Sovereignty Initiative in New Zealand established community-led data governance protocols, allowing Indigenous groups to control and manage AI applications trained on their cultural and linguistic data. Similarly, the Participatory AI in Healthcare project involved patients, doctors, and ethicists in the co-design of medical AI systems, ensuring that diagnostic algorithms accounted for diverse patient needs.
Other initiatives, such as WeBuildAI, experimented with collaborative algorithmic governance, where citizens helped shape AI decision rules for public services. These cases demonstrate that when communities have a say in AI governance, the resulting systems are more ethical, fair, and responsive to social needs.
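The published WeBuildAI work had stakeholders build models of their own preferences, which then voted on each allocation decision. The sketch below is a much-simplified, hypothetical illustration of that idea, using a plain Borda count over hand-written rankings; the option names and rankings are invented for this example.

```python
# Simplified illustration of collaborative rule-making: each stakeholder
# ranks candidate allocation policies, and a Borda count aggregates the
# rankings into a collective choice. (This toy version only conveys the
# idea; it is not WeBuildAI's actual method.)
from collections import defaultdict

def borda_winner(rankings):
    """rankings: one list of options per stakeholder, best option first."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top choice earns n-1 points
    return max(scores, key=scores.get), dict(scores)

stakeholder_rankings = [
    ["nearest_first", "equal_split", "need_based"],  # a donor's ranking
    ["need_based", "equal_split", "nearest_first"],  # a recipient's ranking
    ["need_based", "nearest_first", "equal_split"],  # a volunteer's ranking
]

winner, scores = borda_winner(stakeholder_rankings)
print(f"collective choice: {winner}  (scores: {scores})")
```

Even in this toy form, the mechanism makes the governance point visible: the decision rule is an explicit, inspectable aggregation of stakeholder preferences rather than an opaque corporate default.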
Policy recommendations for realizing the right to AI
The paper concludes with a set of policy recommendations aimed at operationalizing the Right to AI. These include:
- Public Education and AI Literacy: Citizens must be equipped with knowledge about AI systems, their impact, and their right to shape how such systems are developed.
- Inclusive AI Councils: Governments and tech companies should establish public advisory boards where diverse stakeholders can participate in AI governance.
- Community-Led Data Trusts: Local communities should have collective control over data collection, usage, and AI model training to ensure ethical AI practices; a minimal consent-gate sketch follows this list.
- Transparency Mandates: All AI systems deployed in public services should be subject to independent audits and public accountability measures.
- Legal Frameworks for Participatory AI: Governments should create laws that formalize citizen participation in AI governance, much like urban planning laws that require public consultation.
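The data trust recommendation lends itself to a concrete sketch. The following is a hypothetical illustration, not a mechanism described in the paper; all class names, fields, and purposes are invented. It shows one way a trust could gate the release of community data for model training on each contributor's recorded consent.

```python
# Hypothetical consent gate for a community data trust: records are
# released for a given purpose only if the contributor opted into it.
from dataclasses import dataclass, field

@dataclass
class Record:
    owner: str
    payload: dict
    consented_uses: set = field(default_factory=set)  # purposes opted into

def release_for(records, purpose):
    """Return only the records whose contributors consented to `purpose`."""
    released = [r for r in records if purpose in r.consented_uses]
    print(f"purpose={purpose!r}: released {len(released)} of {len(records)} records")
    return released

trust = [
    Record("contributor_1", {"text": "..."}, {"health_research"}),
    Record("contributor_2", {"text": "..."}, {"health_research", "model_training"}),
]

training_set = release_for(trust, "model_training")  # only contributor_2's record
```

The design point is that consent is checked at release time by the trust, rather than assumed at collection time by the model developer.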
By implementing these policies, societies can transition from AI systems controlled by the few to AI systems governed by the many.
First published in: Devdiscourse