One-way AI alignment no longer works in a generative AI world: Here's why
A growing body of research argues that artificial intelligence (AI) alignment must be treated as a dynamic relationship rather than a static technical problem. AI systems do not simply execute instructions. They shape human behavior, perceptions, and decisions over time, creating feedback loops that alter both human values and machine behavior. This evolving interaction has exposed the limits of traditional alignment methods that focus only on model optimization or post hoc safeguards.
The study “Human-AI Interaction Alignment: Designing, Evaluating, and Evolving Value-Centered AI for Reciprocal Human-AI Futures,” published in the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems, responds directly to this shift. The paper calls for a fundamental rethinking of alignment as a bidirectional, value-centered process grounded in human–AI interaction, design, and long-term co-adaptation.
From static alignment to reciprocal human–AI adaptation
Current alignment models were built for systems that operate in relatively controlled or narrow contexts. As AI systems become general-purpose, adaptive, and embedded in daily workflows, alignment can no longer be treated as a fixed target. Human values are not static, and neither are AI systems that learn, update, and interact continuously.
The authors argue that generative AI introduces a new class of alignment risks because interaction itself becomes a mechanism of influence. Humans adapt their behavior in response to AI outputs, recommendations, and conversational cues. Over time, these interactions shape beliefs, decision-making patterns, and social norms. At the same time, AI systems adapt based on feedback, usage patterns, and evolving data environments.
This mutual adaptation exposes a blind spot in traditional alignment frameworks. By focusing primarily on steering AI toward predefined objectives, existing approaches fail to account for how AI systems reshape human agency and value formation. The study positions bidirectional human-AI alignment as a response to this challenge, treating alignment as an ongoing process of co-adaptation rather than a one-time technical fix.
While machine learning communities have explored reward modeling, preference learning, and instruction following, the authors emphasize that alignment also depends on interaction design, user experience, and participatory methods traditionally associated with human-computer interaction research.
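To make the machine-learning side of that contrast concrete, the sketch below shows preference learning in its simplest form: a reward model trained on pairwise comparisons. It is a toy illustration only; the feature vectors, data, and training loop are stand-ins for the text encoders and human-preference datasets real systems use.

```python
# Toy illustration of preference-based reward modeling (one of the
# machine-learning alignment techniques mentioned above). All names and
# data are hypothetical; real systems use learned text encoders and
# large human-preference datasets.
import numpy as np

rng = np.random.default_rng(0)

# Each response is a small feature vector (stand-in for an embedding).
# A hidden "true" preference direction generates the comparison labels.
dim = 8
true_w = rng.normal(size=dim)

def make_pair():
    """Return (preferred, rejected) feature vectors for one comparison."""
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    return (a, b) if a @ true_w > b @ true_w else (b, a)

pairs = [make_pair() for _ in range(500)]

# Linear reward model r(x) = w @ x, trained with the Bradley-Terry
# pairwise loss: -log sigmoid(r(preferred) - r(rejected)).
w = np.zeros(dim)
lr = 0.05
for _ in range(200):
    grad = np.zeros(dim)
    for preferred, rejected in pairs:
        margin = w @ (preferred - rejected)
        p = 1.0 / (1.0 + np.exp(-margin))      # P(preferred beats rejected)
        grad += (p - 1.0) * (preferred - rejected)
    w -= lr * grad / len(pairs)

accuracy = np.mean([w @ (p - r) > 0 for p, r in pairs])
print(f"training pair accuracy: {accuracy:.2f}")
```

The point of the contrast is that everything in this loop happens before deployment; the interaction design, explanation, and participation questions the authors raise begin only once such a model meets users.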
By reframing alignment as an interactional phenomenon, the study bridges a long-standing gap between AI-centered alignment research and human-centered design traditions.
Value-centered design reframes alignment as a human responsibility
The authors argue that alignment cannot be reduced to technical compliance with abstract objectives. Instead, it must embed human and societal values such as fairness, agency, responsibility, trust, and accountability directly into how AI systems are designed, evaluated, and used.
This framing challenges the assumption that values can be fully specified upfront. Human values are contextual, pluralistic, and often contested. Alignment, therefore, requires mechanisms that allow humans to engage critically with AI systems, reflect on their behavior, and recalibrate their use over time.
The study highlights the role of interactive alignment mechanisms that allow users to steer, question, and co-create with AI systems during use. Rather than treating users as passive recipients of AI output, the paper positions them as active participants in alignment. Interfaces, explanations, feedback tools, and participatory workflows become central alignment instruments rather than auxiliary features.
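A minimal sketch of what such an interactive steering mechanism could look like follows, assuming a session object that keeps user-stated preferences visible and folds rejections back into future prompts. The class and method names are hypothetical; a real deployment would sit on top of an actual model API and interface.

```python
# Hypothetical sketch: user feedback during use becomes explicit,
# inspectable alignment state rather than hidden fine-tuning.
from dataclasses import dataclass, field

@dataclass
class SteerableSession:
    """Keeps user-supplied value statements and applies them to prompts."""
    directives: list[str] = field(default_factory=list)
    history: list[tuple[str, str]] = field(default_factory=list)

    def add_directive(self, text: str) -> None:
        # E.g. "Prefer cautious wording for medical questions."
        self.directives.append(text)

    def compose_prompt(self, user_message: str) -> str:
        # Directives are prepended in plain sight, so the user can
        # question or revise how the system is being steered.
        preamble = "\n".join(f"- {d}" for d in self.directives)
        return f"User preferences:\n{preamble}\n\nRequest: {user_message}"

    def record_feedback(self, response: str, accepted: bool, note: str = "") -> None:
        # A rejection with a note becomes a new steering directive,
        # closing the co-adaptation loop in a user-visible way.
        self.history.append((response, "accepted" if accepted else "rejected"))
        if not accepted and note:
            self.add_directive(note)

session = SteerableSession()
session.add_directive("Explain trade-offs instead of giving one answer.")
print(session.compose_prompt("Should we automate this hiring step?"))
session.record_feedback("draft response", accepted=False,
                        note="Flag fairness risks explicitly.")
```

The design choice worth noting is that the alignment state lives in the interface layer, where users can read and edit it, which is the sense in which the paper treats interfaces and feedback tools as alignment instruments.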
Value-centered alignment, as the authors note, also requires evaluation beyond technical accuracy or task performance. Alignment must be assessed at multiple levels, including individual experience, community impact, and broader societal consequences. Trust, well-being, and long-term influence on human behavior are presented as alignment outcomes that cannot be captured through conventional benchmarks alone.
By foregrounding values and interaction, the paper redefines alignment as a shared responsibility between system designers, users, and institutions. AI systems are not simply aligned for humans, but aligned with humans through sustained engagement.
Evaluating alignment in dynamic, real-world contexts
The authors argue that alignment evaluation must evolve alongside AI systems and their users. Static evaluation methods are insufficient for systems that adapt over time and operate across diverse contexts.
The paper outlines the need for dynamic evaluation frameworks that track alignment longitudinally rather than at isolated points. These frameworks must account for changes in user behavior, shifting expectations, and emerging social effects as AI systems become embedded in everyday life.
The authors highlight several challenges in evaluating bidirectional alignment. First, alignment outcomes often emerge gradually and indirectly, making them difficult to measure through short-term experiments. Second, alignment involves trade-offs between competing values, requiring evaluation methods that surface tensions rather than masking them. Third, alignment impacts vary across users and communities, demanding pluralistic and inclusive evaluation approaches.
To address these challenges, the study calls for interdisciplinary methods that combine technical metrics with qualitative and participatory evaluation techniques. Human-computer interaction research methods such as user studies, ethnography, participatory design, and longitudinal field deployment are positioned as essential tools for alignment research.
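As a rough illustration of what combining those signals might involve, the sketch below logs repeated quantitative measurements per user cohort alongside qualitative field notes, so trends across study waves stay visible. The metric names, cohorts, and values are purely illustrative assumptions, not the paper's framework.

```python
# Hypothetical longitudinal alignment log: quantitative scores and
# qualitative observations per cohort, tracked across study waves.
from collections import defaultdict
from statistics import mean

class AlignmentLog:
    """Accumulates alignment observations per cohort over study waves."""

    def __init__(self) -> None:
        self.records = defaultdict(list)      # (cohort, metric) -> [(wave, value)]
        self.field_notes = defaultdict(list)  # cohort -> [(wave, note)]

    def log_metric(self, cohort: str, metric: str, wave: int, value: float) -> None:
        self.records[(cohort, metric)].append((wave, value))

    def log_note(self, cohort: str, wave: int, note: str) -> None:
        self.field_notes[cohort].append((wave, note))

    def trend(self, cohort: str, metric: str) -> float:
        """Change in mean score between the first and last recorded wave."""
        points = sorted(self.records[(cohort, metric)])
        first_wave, last_wave = points[0][0], points[-1][0]
        first = mean(v for w, v in points if w == first_wave)
        last = mean(v for w, v in points if w == last_wave)
        return last - first

log = AlignmentLog()
log.log_metric("educators", "reported_trust", wave=1, value=3.4)
log.log_metric("educators", "reported_trust", wave=3, value=2.9)
log.log_note("educators", wave=3, note="Users defer to AI grading suggestions.")
print(f"trust shift: {log.trend('educators', 'reported_trust'):+.1f}")
```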
As AI systems influence public discourse, labor practices, education, and governance, alignment must be assessed not only in individual interactions but also in aggregate effects. The authors argue that alignment research must expand its scope to include social, economic, and cultural dimensions.
This emphasis reflects a broader shift in AI research toward accountability and impact assessment, but the study distinguishes itself by grounding evaluation in interaction rather than abstract model behavior.
Building an interdisciplinary alignment community
The paper also outlines a concrete effort to institutionalize bidirectional alignment research within the human-computer interaction community. The study serves as the foundation for a dedicated workshop at CHI 2026, building on earlier initiatives at CHI 2025 and ICLR 2025 that demonstrated strong community demand for interdisciplinary dialogue.
The workshop is designed to bring together researchers from HCI, AI, social sciences, psychology, and industry to develop shared frameworks, methods, and research agendas. The authors emphasize that alignment challenges cannot be addressed within disciplinary silos. Technical solutions without human insight risk missing real-world complexity, while human-centered approaches without technical grounding risk limited scalability.
The workshop structure reflects the paper’s alignment philosophy. Interactive sessions, collaborative activities, and participatory knowledge creation are central components. Rather than positioning alignment as a solved problem, the workshop treats it as an evolving research frontier that benefits from diverse perspectives and ongoing collaboration.
Furthermore, the study highlights accessibility and inclusion as core alignment concerns. Ensuring that alignment research accounts for diverse cognitive, cultural, and physical experiences is presented as essential to building AI systems that serve society equitably.
First published in: Devdiscourse

