Beyond tech fixes: AI governance requires transdisciplinary ethical wisdom


CO-EDP, VisionRI | Updated: 24-02-2026 19:01 IST | Created: 24-02-2026 19:01 IST

The rapid expansion of artificial intelligence (AI) has triggered parallel calls for stronger AI ethics and more integrated, cross-disciplinary research. However, despite their shared focus on complex global challenges such as climate change, these two fields rarely intersect at a deeper theoretical level. Scholars debate sustainability, governance, and responsibility, but the epistemological link between ethical judgment and research practice remains largely unexplored.

In Transdisciplinary Skills and AI Ethics: Toward a Techné-Based Lifeworld Extension, published in AI & Society, the author confronts that gap directly. The study argues that both AI ethics and transdisciplinary research are grounded in a common but overlooked knowledge base shaped by embodied skill and lived experience. The author proposes that recovering this foundation is essential for navigating the environmental and political pressures of the Anthropocene.

Rethinking the lifeworld beyond science–society divides

Transdisciplinary research has long been promoted as a response to complex global challenges, particularly climate change. It seeks to integrate knowledge from multiple academic disciplines while engaging non-academic actors such as policymakers, civil society, and industry. At the same time, AI ethics has expanded rapidly in response to concerns about algorithmic bias, environmental costs of computing, democratic accountability, and sustainability.

The author argues that although both fields deal with normative and practical challenges, they often rely on an implicit dualism that separates science from society. This divide is especially visible in the influential Handbook of Transdisciplinary Research, which frames the “lifeworld” primarily as a social sphere external to academic science. In that model, science addresses complex problems originating in society and collaborates across boundaries to deliver solutions.

According to the author, this framing is incomplete. He distinguishes between two meanings of lifeworld. The first refers to society in contrast to science, a domain of practical problems and stakeholder perspectives. The second, more fundamental meaning describes the lifeworld as the embodied, experiential ground from which all knowledge, including scientific knowledge, arises. Drawing on Edmund Husserl’s phenomenology, the author argues that science does not stand outside ordinary life but is grounded in it.

If science itself is rooted in lived practice, then transdisciplinary integration cannot be reduced to coordination between academic expertise and social actors. Instead, it must be understood as a deeper process shaped by shared perception, embodied competence, and cultural techniques. In this view, the divide between science and society is not a fixed boundary but a conceptual artifact.

Reviving techné to connect research and ethical wisdom

The author recovers the Aristotelian concept of techné. Often translated as craft, skill, or technique, techné refers to embodied competence that mediates between abstract theory and practical action. The author argues that modern transdisciplinary theory has largely overlooked this concept, focusing instead on systems integration, problem-solving phases, and iterative collaboration.

In Aristotle’s framework, knowledge develops gradually from sensory perception to memory, practical skill, theoretical understanding, and ultimately first principles. Scientific knowledge depends on earlier stages of embodied experience and skilled practice. The author contends that this gradualist model offers a corrective to modern narratives that separate theoretical science from practical know-how.

By neglecting techné, contemporary discussions risk reinforcing a narrow image of science as detached and purely abstract. The author maintains that scientific inquiry, including AI research, relies on skilled bodily actions, instrument use, measurement practices, and tacit understanding. Even advanced computational modeling depends on human competencies embedded in specific research cultures.

The study also links techné to ethical wisdom, particularly the Aristotelian concept of phronesis, or practical wisdom. Ethical judgment, in this view, is not merely the application of universal principles but the skillful navigation of concrete situations. The author argues that transdisciplinary skills closely resemble ethical wisdom because both require context-sensitive deliberation, iterative learning, and responsibility for long-term consequences.

This argument gains urgency in the context of environmental ethics. Climate change cannot be addressed through isolated disciplinary methods or purely technical fixes. It demands normative reflection intertwined with practical expertise. The author suggests that there is an ethical demand for transdisciplinary skills, just as there is a methodological need for ethical wisdom in complex research settings.

AI and the planetary polycrisis: Beyond solutionism

AI is frequently portrayed either as a powerful tool for sustainability or as an environmental and social risk. Discussions about “sustainable AI” have intensified, especially in light of energy-intensive data centers and large language models.

According to the study, both optimistic and alarmist narratives risk oversimplification. When AI is framed solely as a technical solution to climate change, it reinforces a form of solutionism that overlooks deeper cultural and epistemic conditions. Conversely, presenting AI only as a societal problem external to science misrepresents its embeddedness in research practices and knowledge production.

The author proposes instead that AI be understood as a cultural technique. Like earlier technologies of measurement and visualization, AI mediates perception and shapes how reality is represented. Climate change, for example, is not directly visible in its entirety. It becomes intelligible through instruments, satellite imagery, chemical analysis, and computational models. AI increasingly participates in this mediation, transforming vast datasets into patterns and projections.

This technological mediation influences not only scientific understanding but also ethical orientation. The author warns that AI-oriented science may produce AI-oriented people: research cultures centered heavily on computational methods may narrow the scope of ethical reflection. If planetary crises are defined primarily through algorithmic models, then political and moral responses may also become shaped by what AI does best.

The study calls for a lifeworld-oriented research paradigm. Such a paradigm would acknowledge that both science and ethics are grounded in shared, embodied practices. It would treat AI neither as an autonomous savior nor as an external threat, but as a technique embedded in human cultural life. In this framework, responsibility for AI’s environmental and social impacts cannot be delegated to technical design alone.

The author develops a heuristic model that integrates three elements: cultural embodiment, techné, and implicit knowledge.

  • Cultural embodiment refers to socially shared ways of perceiving and acting in the world.
  • Techné captures practical competence and tool use.
  • Implicit knowledge, based on Michael Polanyi, emphasizes tacit understanding that cannot be fully codified.

Together, these elements form the epistemic foundation of both transdisciplinary research and ethical judgment.

First published in: Devdiscourse