AI reshapes democracy by changing who decides and how decisions are made


CO-EDP, VisionRI | Updated: 24-01-2026 19:15 IST | Created: 24-01-2026 19:15 IST

Algorithms now influence how public opinion forms, how citizen input is aggregated, and how policies are implemented, raising urgent questions about legitimacy, accountability, and power. New research suggests that the real democratic disruption of AI lies not in replacing elected officials, but in redefining what it means to represent the public in an algorithmic age. 

The study, titled "Of the people, by the algorithm: how AI transforms the role of democratic representatives?" and published in AI & Society, provides a systematic analysis of how artificial intelligence alters democratic representation across both political participation and policy implementation. Rather than framing AI as a threat to democracy by default, the study examines the institutional conditions under which AI can either weaken or strengthen representative governance.

How AI reshapes citizen input and political participation

The study identifies the first major transformation at the level of democratic input, where citizens express preferences, deliberate, and influence political agendas. AI-driven systems already play a central role in shaping political discourse through social media algorithms, recommendation engines, and large-scale data analysis. These systems influence what information citizens see, how opinions spread, and which issues gain prominence.

According to the research, this algorithmic mediation fundamentally alters the traditional role of democratic representatives. Historically, elected officials acted as interpreters of fragmented and often unclear public opinion, relying on elections, surveys, and direct interaction to infer citizen preferences. AI changes this dynamic by making public sentiment more visible, structured, and measurable at scale. Machine learning systems can cluster opinions, identify points of agreement and disagreement, and track shifts in sentiment in near real time.
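
To make the idea concrete, the sketch below shows one simple way such opinion clustering could work. It is an illustration only, not the study's method: the comments, the cluster count, and the choice of TF-IDF vectors with k-means (via scikit-learn) are all assumptions made for the example.

```python
# Minimal sketch: clustering citizen comments to surface groups of similar opinions.
# Illustrative only; the study does not prescribe this pipeline. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical free-text responses from a consultation.
comments = [
    "Expand public transit before building new roads",
    "Invest in buses and trams, not highways",
    "Lower fuel taxes so commuting stays affordable",
    "Cut the fuel tax, driving is already too expensive",
    "Protect green spaces when planning new housing",
    "New housing should not pave over city parks",
]

# Represent each comment as a TF-IDF vector and group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Print the clusters so a reader can see which positions recur together.
for cluster in sorted(set(labels)):
    print(f"Opinion cluster {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print("  -", comment)
```

In a real deployment the resulting clusters would be summarized for representatives rather than read raw, and the grouping method itself would be a design choice with political consequences, which is precisely the point the study makes about algorithmic mediation.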

The paper highlights the growing use of AI-supported mass online deliberation platforms, which enable thousands of citizens to participate in structured policy discussions simultaneously. These systems can surface consensus positions, reduce polarization by emphasizing shared values, and provide representatives with clearer signals about public priorities. In such contexts, representatives are no longer primarily translators of public will, but facilitators who integrate well-defined citizen input into formal decision-making processes.
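
One way to picture how such a platform might surface consensus is sketched below: statements that clear a high agreement rate in every opinion group are treated as shared ground. The voting matrix, the pre-assigned groups, and the 75 percent threshold are hypothetical; platforms of the kind described in the study use more elaborate statistical methods.

```python
# Minimal sketch of "surfacing consensus" in a deliberation platform:
# find statements that a large share of EVERY opinion group agrees with.
# Hypothetical data and threshold, for illustration only.
import numpy as np

# Rows: participants, columns: statements. 1 = agree, 0 = disagree.
votes = np.array([
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 0, 1, 1],
])
# Assume participants have already been grouped into two opinion clusters.
groups = np.array([0, 0, 1, 1])

consensus_threshold = 0.75
for statement in range(votes.shape[1]):
    # A statement counts as consensus only if each group's agreement rate clears the threshold,
    # so positions popular with one camp but rejected by the other are filtered out.
    rates = [votes[groups == g, statement].mean() for g in np.unique(groups)]
    if min(rates) >= consensus_threshold:
        print(f"Statement {statement} is a consensus position (agreement per group: {rates})")
```

The design choice embedded here, requiring agreement across groups rather than a simple majority, is exactly the kind of value-laden parameter that shapes which "public priorities" representatives end up seeing.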

At the same time, the study warns that AI-driven participation tools carry significant risks. Algorithmic curation on commercial platforms often prioritizes emotionally charged or polarizing content, distorting public debate and amplifying extreme voices. Political microtargeting enables campaigns to tailor messages to narrow audience segments, fragmenting the public sphere and undermining collective deliberation. These dynamics weaken inclusivity and legitimacy by privileging certain voices while marginalizing others.

The research stresses that AI does not inherently improve democratic participation. Its effects depend on institutional design. Platforms that are independent, transparent, and formally linked to political decision-making can enhance representation. On the other hand, systems controlled by private actors or used selectively by governments risk becoming tools of manipulation rather than empowerment. In this environment, democratic representatives face a new responsibility: not only to listen to citizens, but to govern the digital infrastructures through which citizen input is generated.

Algorithmic decision-making and the transformation of policy implementation

The second major transformation identified in the study occurs on the output side of democracy, where political decisions are implemented through administrative systems. AI is increasingly used to allocate resources, assess eligibility for public services, and manage regulatory enforcement. These systems promise efficiency, consistency, and cost savings, but they also embed political choices into technical design.

The research shows that when AI systems are deployed in public administration, representatives shift from being direct decision-makers to architects of automated decision frameworks. Key political judgments are encoded into system objectives, training data, risk thresholds, and optimization criteria. These choices determine who benefits from public policies, who is excluded, and how errors are distributed across society.
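
A stylized example of how a single calibration choice becomes a policy choice is sketched below: the same hypothetical fraud-risk scores, filtered at two different thresholds, flag different citizens and shift errors from wrongful flags toward missed cases. All names, scores, and thresholds are invented for illustration and are not drawn from the study.

```python
# Minimal sketch of how one technical parameter functions as a policy choice.
# Hypothetical risk scores for benefit applicants; illustrative only.
applicants = [
    {"name": "A", "fraud_risk_score": 0.15, "actually_fraudulent": False},
    {"name": "B", "fraud_risk_score": 0.40, "actually_fraudulent": False},
    {"name": "C", "fraud_risk_score": 0.55, "actually_fraudulent": True},
    {"name": "D", "fraud_risk_score": 0.62, "actually_fraudulent": False},
    {"name": "E", "fraud_risk_score": 0.80, "actually_fraudulent": True},
]

def flag_for_review(threshold):
    """Return who gets flagged at a given threshold, plus the two kinds of error."""
    flagged = [a for a in applicants if a["fraud_risk_score"] >= threshold]
    wrongly_flagged = [a["name"] for a in flagged if not a["actually_fraudulent"]]
    missed_fraud = [a["name"] for a in applicants
                    if a["actually_fraudulent"] and a["fraud_risk_score"] < threshold]
    return [a["name"] for a in flagged], wrongly_flagged, missed_fraud

# The same model, calibrated differently, distributes errors very differently.
for threshold in (0.5, 0.7):
    flagged, wrongly_flagged, missed_fraud = flag_for_review(threshold)
    print(f"threshold={threshold}: flagged={flagged}, "
          f"wrongly flagged={wrongly_flagged}, missed fraud={missed_fraud}")
```

Which of these error profiles is acceptable is exactly the kind of contested value judgment the study argues should be set and defended by representatives rather than left implicit in system configuration.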

A major concern raised by the study is that these design decisions often occur without explicit democratic authorization. While elected officials may approve the use of AI in principle, they frequently lack oversight over how systems are built and calibrated. As a result, major policy shifts can be implemented through technical parameters rather than legislative debate. This weakens accountability by obscuring who is responsible for outcomes and how contested values are resolved.

The paper states that keeping humans “in the loop” is not sufficient to safeguard democracy. Human oversight at the point of decision-making cannot compensate for a lack of control over system design. If representatives do not set and justify the values embedded in algorithms, they effectively delegate political authority to technical experts or private vendors. This delegation undermines transparency and erodes the chain of democratic responsibility.

The study also highlights the risk of reinforcing inequality. AI systems trained on historical data may reproduce existing social biases, leading to discriminatory outcomes in areas such as employment, welfare, and law enforcement. When these outcomes are framed as technical results rather than political choices, affected citizens face significant barriers to contestation. Representatives, in turn, may struggle to explain or defend decisions they no longer fully control.

Despite these risks, the research does not argue against the use of AI in public administration. Instead, it calls for a redefinition of representative responsibility. Elected officials must take ownership of algorithmic governance by setting clear mandates, ensuring transparency of system logic, and creating mechanisms for appeal and correction. In this model, representatives become stewards of automated systems rather than passive overseers.

Legitimacy, accountability, and the future of democratic representation

To assess whether AI strengthens or weakens democracy, the study evaluates its impact across five core democratic criteria: legitimacy, inclusivity, accountability, transparency, and efficacy. The findings suggest that AI can enhance democratic representation only if these criteria are actively protected through institutional design.

Legitimacy depends on whether citizens recognize AI-supported decisions and processes as democratically authorized. This requires clear political mandates for algorithmic systems and meaningful public involvement in defining their goals. Inclusivity depends on ensuring that participation platforms and data-driven systems do not exclude marginalized groups or amplify existing inequalities. Transparency must extend beyond surface-level explanations to include visibility into system objectives, data sources, and design assumptions.

Accountability emerges as a central challenge. The study argues that democratic systems must enable citizens to challenge not only individual decisions, but the underlying premises of algorithmic governance. Representatives must retain the authority to modify, suspend, or dismantle AI systems when they conflict with public values. Without such authority, democratic control becomes symbolic rather than substantive.

Efficacy, while often cited as AI’s strongest advantage, is treated with caution in the paper. Efficiency gains are meaningful only if they align with democratically chosen goals. A highly efficient system that implements unjust or unaccountable policies ultimately weakens democratic legitimacy rather than strengthening it.

FIRST PUBLISHED IN: Devdiscourse