AI values jobs, humans prioritize ethics: Study reveals risk perception gap

CO-EDP, VisionRI | Updated: 03-05-2025 18:23 IST | Created: 03-05-2025 18:23 IST

Artificial intelligence is becoming deeply embedded in nearly every sector of modern life, from healthcare to finance to governance. But as its footprint expands, so does public anxiety over its societal impact. What are the most pressing challenges AI poses to society? And do humans and AI systems agree on which issues deserve the most attention?

A new study published in the journal AI, titled “Evaluating the Societal Impact of AI: A Comparative Analysis of Human and AI Platforms Using the Analytic Hierarchy Process,” confronts these questions head-on by directly comparing how humans and AI systems prioritize six major AI-related societal challenges.

The research uses the Analytic Hierarchy Process (AHP), a well-established decision-making method that allows structured comparisons of complex issues. Thirty-eight human participants, including academics and researchers across Europe and the U.S., were asked to rank six societal challenges using pairwise comparisons. The same was requested from four prominent AI platforms: ChatGPT, Perplexity, Gemini, and DedaAI. What emerged was not only a methodological innovation but a startling divergence in how people and algorithms perceive risk and importance in AI’s societal trajectory.

Which challenges matter most, and who thinks so?

Both human and AI groups agreed that the most critical challenge facing society in the age of AI is data privacy and security. For humans, this concern stems from the increasing vulnerability of personal information to breaches, surveillance, and unauthorized use. With AI systems heavily reliant on large datasets, often containing sensitive user data, ensuring privacy and security has become foundational to public trust and the ethical use of AI technologies.

After data security, however, the paths of human and machine judgments diverged. Human respondents identified ethical and moral considerations as the second most important challenge. Their concerns centered on bias, fairness, transparency, and the broader question of whether AI can be aligned with core human values. This priority reflects growing awareness of the ethical stakes involved when algorithms influence decisions in healthcare, hiring, policing, or even warfare.

AI platforms, in contrast, placed economic disruption in second place. They appeared more attuned to automation’s impact on labor markets, recognizing that AI could lead to job losses, wage depression, and rising inequality. Economic displacement has indeed been one of the most visible AI impacts, but its elevation by AI systems over ethical concerns indicates a machine-centric lens that prioritizes functional and structural risks over value-laden ones.

On other fronts, the discrepancy widened. Humans ranked regulation and governance third, recognizing the urgency of establishing laws and accountability mechanisms to prevent AI misuse. AI systems, however, ranked governance last, highlighting a critical oversight in algorithmic self-assessment: most AI platforms do not inherently recognize the need for their own regulation unless explicitly prompted to do so.

Social and cultural resistance, as well as resource and infrastructure limitations, consistently ranked lower for both groups. Still, humans placed more weight on cultural resistance, showing greater sensitivity to the psychological and societal friction triggered by rapid technological change. AI systems, on the other hand, were more likely to view these frictions as temporary obstacles rather than enduring ethical or political challenges.

Can machines judge their own impact on society?

One of the study’s most compelling insights concerns AI’s capacity for self-assessment. When asked to prioritize challenges, the AI platforms responded with logical consistency and high reliability in their rankings. Their internal consistency ratios and Euclidean distances, measures of how stable and coherent their judgments were, were on par with or better than those of the human participants.

However, while the AI platforms proved technically competent at ranking issues, they showed blind spots on subjective, value-laden concerns. For example, they ranked ethical and moral considerations third rather than second, despite their centrality in human-centered AI debates. The AI systems also underestimated the need for regulation, a telling omission given the risks posed by unrestricted AI development, including misuse for misinformation, surveillance, and autonomous weapons.

This disconnect raises an important question: should AI systems be used in policy formulation or ethical governance if they undervalue the very structures that could contain them? The study suggests that human-AI hybrid systems, where human intuition and ethical frameworks complement AI’s processing power, may offer a more balanced approach to risk prioritization.

It also points to the need for transparency and explainability in AI decision-making. AI’s tendency to prioritize economic or technical challenges may reflect training data that emphasizes measurable risks over normative ones. As a result, AI risk assessments may fail to capture the full spectrum of societal implications unless carefully engineered to do so.

How can this evaluation shape future AI governance?

The methodology used in this study offers a replicable model for comparing human and machine decision-making using numerical values rather than purely qualitative analysis. By applying the AHP’s pairwise comparison and prioritization system, the researchers translated complex ethical and social debates into actionable data. This allows policymakers, researchers, and the public to understand where alignment or conflict exists between human concerns and algorithmic reasoning.

One of the key takeaways is the need for human oversight in AI governance structures. If AI platforms deprioritize regulation and ethics, relying on them exclusively for societal decision-making could lead to systems that are efficient but ethically blind. Hybrid models, where AI handles data-intensive tasks and humans provide value-based judgment, may offer a more robust path forward.

Moreover, the findings suggest that future AI platforms should be designed with embedded ethical prioritization protocols. This includes training on diverse datasets that reflect global human values, as well as incorporating mechanisms for real-time feedback and adjustment when biases or blind spots emerge.

Future research should expand the range of participants and AI platforms involved, and consider additional societal challenges such as environmental sustainability, geopolitical conflict, and mental health impacts. It would also be valuable to explore how AI systems interact when exposed to each other’s assessments, simulating a multi-agent environment that mimics real-world negotiation and governance contexts.

First published in: Devdiscourse