How AI can fix broken polls and restore public confidence
The reliability of public polling has been under scrutiny since major electoral mispredictions in 2016 and 2020, when traditional survey methods failed to capture shifting social sentiments. With declining participation rates and the rise of online misinformation, both political and market research polls have faced accusations of bias and manipulation.
Amidst declining public confidence in polling systems, researchers at Tennessee Technological University have proposed an artificial intelligence–based framework designed to rebuild transparency and trust in public opinion measurement. The study, authored by Amr Akmal Abouelmagd and Amr Hilal, outlines a data-driven solution that merges social network analysis with AI modeling to detect fraudulent participation and ensure polling credibility in the digital age.
Accepted at the 12th Annual Conference on Computational Science & Computational Intelligence (CSCI’25), the paper “Leveraging the Power of AI and Social Interactions to Restore Trust in Public Polls” introduces a decentralized, peer-to-peer polling model that replaces traditional, centrally governed systems with intelligent, self-validating social structures. By applying graph neural networks (GNNs) and centrality-based algorithms, the proposed system can autonomously identify ineligible or dishonest participants through the analysis of social connections rather than relying solely on self-reported information.
The crisis of trust: Why polling needs reinvention
The Tennessee Tech study attributes this crisis to two structural weaknesses in conventional polling systems: dependence on centralized authorities for data validation and vulnerability to dishonest participation. In traditional models, participants are often self-selecting and unverified, making the results prone to distortion. Moreover, centralized organizations control access and interpretation of the data, which can lead to transparency issues and public skepticism.
The authors argue that digital society requires a decentralized and adaptive polling framework, one capable of ensuring eligibility verification, data integrity, and participant privacy without relying on third-party oversight. Their approach reimagines how polls are distributed and validated through peer-to-peer interactions, where participants invite others based on predefined eligibility criteria, such as region, age, or demographic background.
Instead of gathering responses directly through institutional surveys, the proposed model embeds AI-driven mechanisms into the flow of social dissemination. By learning how polls spread across digital networks, the system identifies anomalies indicative of fraudulent or ineligible activity. This transformation shifts polling from a human trust–based model to a data trust–based model, where credibility is derived from observed network behavior rather than unverified participant declarations.
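The peer-to-peer dissemination described above can be illustrated with a minimal sketch. This is not the paper's implementation; all function and field names (`disseminate_poll`, `profiles`, `criteria`) are hypothetical, and the eligibility check is reduced to simple attribute predicates for clarity.

```python
from collections import deque

def disseminate_poll(graph, roots, profiles, criteria):
    """Breadth-first spread of a poll from trusted root nodes.

    graph    : dict node -> list of neighbour nodes
    roots    : iterable of initial (trusted) participants
    profiles : dict node -> attribute dict, e.g. {"region": "TN", "age": 34}
    criteria : dict attribute -> predicate, e.g. {"age": lambda a: a >= 18}
    Returns the set of nodes the poll reached.
    """
    def eligible(node):
        # A participant forwards the poll only to contacts who appear
        # to meet the predefined eligibility criteria.
        return all(pred(profiles[node].get(attr))
                   for attr, pred in criteria.items())

    reached, queue = set(roots), deque(roots)
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr not in reached and eligible(nbr):
                reached.add(nbr)
                queue.append(nbr)
    return reached
```

In the actual framework, the interesting signal is not the spread itself but how the observed propagation deviates from this idealized criteria-respecting pattern.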
The AI model: Detecting fraud through social patterns
The study introduces a graph neural network (GNN) model that processes data from simulated social networks to evaluate eligibility and honesty. The authors employ two large datasets representing real-world online communities, the Last.fm Multigraph and Musae-Twitch (Germany) networks, to test the robustness of their framework. These datasets mirror the complexity of digital interactions, capturing millions of nodes and edges that reflect user relationships, interests, and engagement patterns.
The researchers simulate polling scenarios where some users are eligible and others are not, with varying degrees of honesty and participation. The model then observes how polling requests propagate through the network, identifying patterns that correlate with ineligible behavior. It relies on centrality measures, such as degree, betweenness, and closeness, to detect users whose connections or forwarding activity deviate from normal dissemination behavior.
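As a rough illustration of the centrality measures involved, the sketch below computes degree and closeness centrality with the standard library and flags nodes whose degree deviates strongly from the network mean. This crude statistical outlier test is a stand-in assumption, not the paper's learned GNN detector.

```python
from collections import deque
from statistics import mean, pstdev

def degree_centrality(graph):
    # Fraction of all other nodes each node is directly connected to.
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def closeness_centrality(graph, node):
    # BFS distances from `node`; closeness = (reachable - 1) / total distance.
    dist, queue = {node: 0}, deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def flag_outliers(graph, threshold=2.0):
    # Flag nodes whose degree sits more than `threshold` standard
    # deviations from the mean -- a toy proxy for anomalous forwarding.
    degs = {v: len(nbrs) for v, nbrs in graph.items()}
    mu, sigma = mean(degs.values()), pstdev(degs.values())
    if sigma == 0:
        return set()
    return {v for v, d in degs.items() if abs(d - mu) / sigma > threshold}
```

In practice, such per-node features would be fed into the GNN as inputs rather than thresholded directly.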
The results are compelling. The AI system achieved detection accuracies exceeding 90 percent in identifying ineligible participants, particularly in networks with higher clustering coefficients and well-defined community structures. Networks with stronger interconnections, such as Twitch’s gaming community dataset, outperformed more loosely connected ones like Last.fm, underscoring that social cohesion enhances AI detection accuracy.
Performance metrics, including F1-scores above 80 percent, confirm that the model not only detects fraudulent activity but also maintains balance between false positives and negatives. The study emphasizes that diversity within the network, where both eligible and ineligible participants coexist, further strengthens model learning by exposing it to a broader range of behavioral features.
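The F1-score mentioned here is the standard harmonic mean of precision and recall, which is exactly why it captures the balance between false positives and false negatives. A minimal computation, with hypothetical labels where 1 marks an ineligible participant:

```python
def f1_score(y_true, y_pred):
    # Counts over paired ground-truth and predicted labels (1 = ineligible).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)   # penalized by false positives
    recall = tp / (tp + fn)      # penalized by false negatives
    return 2 * precision * recall / (precision + recall)
```

Because both error types lower the score, an F1 above 80 percent implies the detector is neither over-flagging honest users nor missing most ineligible ones.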
Another key insight is the importance of trusted root nodes, participants who initiate poll dissemination. When a higher proportion of root nodes were honest and well-connected, the overall accuracy of eligibility prediction increased significantly. This finding suggests that the architecture of the social network itself, particularly the reliability of its foundational nodes, is as critical as the AI algorithm running on it.
Restoring public confidence through decentralization
The most significant contribution of the study is its decentralized approach to polling integrity. Traditional polling relies on a single authority to distribute, collect, and interpret responses. In contrast, the Tennessee Tech framework decentralizes this process entirely. Polling requests are distributed across a peer-to-peer network, and validation occurs collectively through the AI’s analysis of social dissemination patterns.
This structure ensures transparency, inclusivity, and privacy, all of which are essential for restoring public trust. Participants do not need to disclose personal data to a central agency; instead, their credibility is determined by their position and behavior within the network. Such an approach aligns with broader trends in digital governance and cybersecurity, where decentralized models are replacing legacy hierarchies.
The researchers highlight that this model could be further enhanced through integration with blockchain technology, which would record each polling interaction as a verifiable and immutable transaction. Combining blockchain’s transparency with AI’s pattern recognition could create a fully autonomous system for democratic participation, market research, or public policy consultation.
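The core of that blockchain idea, recording each polling interaction so it cannot be silently altered, can be sketched as a simple hash chain. This is an illustrative assumption about how such a ledger might work, not a design from the study; the record fields are invented.

```python
import hashlib
import json

def add_block(chain, record):
    # Each block's hash commits to its record and the previous block's hash,
    # so editing any earlier record breaks every later link.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify_chain(chain):
    # Recompute every hash from scratch; any tampering is detected.
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": prev_hash},
                             sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True
```

A real deployment would add distributed consensus and privacy-preserving identifiers on top of this basic integrity guarantee.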
The ethical implications are equally profound. In an era when misinformation campaigns can manipulate public sentiment, AI-based social graph validation offers a new standard for fairness and accountability. The study envisions a future where polling accuracy is grounded not in trust alone but in observable, data-verified social behavior.
However, the authors caution that successful implementation requires ongoing attention to bias mitigation and fairness in AI design. While the model performs well in simulated datasets, real-world applications will need to consider factors such as unequal digital access, platform-specific interaction norms, and cultural diversity in online communication.
Towards AI-verified democracy and public discourse
AI-assisted social polling could transform how societies measure public sentiment as participation shifts from institutional channels to decentralized digital platforms. The model's self-validating nature could make large-scale public consultations more inclusive and accurate, especially in politically polarized environments where trust in data is scarce.
This approach also has potential applications in market research, social policy design, and civic engagement. By removing dependency on centralized survey agencies, the method empowers individuals to participate in public discourse directly, knowing that their inputs are being validated impartially through AI-driven network analysis.
The authors’ framework positions artificial intelligence as a neutral arbiter that safeguards both the privacy of individuals and the integrity of collective outcomes. The research anticipates a new era of polling where citizens are not merely data points but active nodes in a transparent, self-regulating information ecosystem.
Future development will focus on expanding the system’s scalability and interoperability. Integrating real-time data from social media APIs and developing hybrid models that combine GNNs with blockchain verification are identified as next steps. Such advances could make decentralized polling a practical reality, capable of processing millions of participants across global digital networks.
The authors also shed light on the social and democratic value of the technology. In societies where misinformation and polarization erode trust, AI-based transparency mechanisms can serve as a counterweight, reinforcing evidence-based decision-making. If widely adopted, decentralized AI polling could redefine democratic participation by blending computational precision with social authenticity.
First published in: Devdiscourse

