Operationalising digital ethics vital to building trust in AI-driven workflows
A new paper published in AI & Society marks a major step toward making digital ethics actionable within corporate environments. The authors, affiliated with Merck and Witten/Herdecke University in Germany, present the Merck Digital Ethics Check (MDEC), a semi-automated ethics evaluation tool designed to integrate ethical oversight into day-to-day data analytics work.
The study, “Operationalising Digital Ethics: Establishment of an Ethics Evaluation Tool for Data Analytics”, underscores that while organizations often adopt broad ethical guidelines for AI and data use, the real challenge lies in bridging the gap between aspirational principles and practical implementation. The MDEC initiative demonstrates how embedding ethical processes within existing workflows can help companies safeguard fairness, accountability, and transparency without slowing down innovation.
Closing the principles-to-practice gap in data ethics
The authors state that the surge in data analytics and AI applications across industries has amplified concerns around privacy, algorithmic bias, and transparency. Despite the proliferation of codes of ethics and corporate commitments, the “principles-to-practice gap” persists because many frameworks lack mechanisms to guide everyday decisions at the project level.
MDEC was developed by Merck’s Analytics Center of Excellence as a response to this challenge. The tool is integrated directly into the analytics project lifecycle, prompting teams to reflect on potential ethical risks from the early stages of development. Instead of functioning as a static checklist, MDEC combines a structured questionnaire with guided self-assessment, encouraging data science teams to identify and discuss risks that might not be captured by automated prompts.
According to the authors, this approach strengthens the ethics culture within the organization by empowering employees to engage with ethical considerations rather than treating them as external compliance burdens. By embedding MDEC into routine workflows, Merck aims to reduce bureaucratic resistance and foster consistent application of ethical standards across projects.
How the MDEC model works inside corporate workflows
The study details the operational design of MDEC, describing it as a semi-automated, human-in-the-loop system that balances efficiency with oversight.
- Workflow Integration: MDEC is embedded in the standard project lifecycle so that teams encounter it as part of their usual work processes. This reduces friction and ensures consistent usage.
- Structured Questionnaires: Teams begin with a guided questionnaire that flags potential risks in areas such as data privacy, model fairness, and algorithmic transparency.
- Team-Led Self-Assessment: Projects then undergo collaborative discussions within teams to address complex scenarios that cannot be resolved by automated checks alone.
- Escalation Pathways: Cases with unresolved or significant ethical concerns are escalated to Merck’s Digital Ethics Office for expert review and guidance.
- Training and Empowerment: Regular training sessions ensure employees understand the importance of the process and feel equipped to raise and address ethical issues.
The paper argues that this hybrid model preserves human judgment and accountability while streamlining oversight. It avoids the pitfalls of purely automated screening systems, which can overlook nuanced ethical dilemmas, and sidesteps the inefficiencies of overbearing manual review processes.
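To make the described flow concrete, the sketch below models the three stages as a minimal Python module: a structured questionnaire that flags risk areas, a team-led self-assessment that marks whether flagged risks were resolved, and a routing step that escalates unresolved cases to an ethics office. All class names, questions, and decision rules here are illustrative assumptions; the paper does not publish MDEC's actual implementation, and this is only a simplified sketch of a semi-automated, human-in-the-loop check.

```python
# Hypothetical sketch of a semi-automated, human-in-the-loop ethics check.
# Names, questions, and routing rules are illustrative assumptions and do
# not reproduce the actual MDEC tool described in the paper.
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    NONE = 0   # no risks flagged by the questionnaire
    LOW = 1    # risks flagged but resolved in the team self-assessment
    HIGH = 2   # risks flagged and left unresolved -> escalate


@dataclass
class QuestionnaireItem:
    topic: str             # e.g. "data privacy", "model fairness"
    question: str
    flagged: bool = False  # set from the project team's answer


@dataclass
class EthicsCheck:
    project: str
    items: list[QuestionnaireItem] = field(default_factory=list)
    team_notes: str = ""           # outcome of the team-led self-assessment
    resolved_by_team: bool = False

    def flagged_topics(self) -> list[str]:
        """Step 1: the structured questionnaire surfaces potential risks."""
        return [item.topic for item in self.items if item.flagged]

    def severity(self) -> Severity:
        """Step 2: the team self-assessment decides whether flags are resolved."""
        if not self.flagged_topics():
            return Severity.NONE
        return Severity.LOW if self.resolved_by_team else Severity.HIGH

    def route(self) -> str:
        """Step 3: unresolved or significant concerns are escalated for expert review."""
        if self.severity() is Severity.HIGH:
            return f"Escalate '{self.project}' to the Digital Ethics Office"
        return f"'{self.project}' proceeds; team assessment is documented"


if __name__ == "__main__":
    check = EthicsCheck(
        project="churn-prediction",
        items=[
            QuestionnaireItem("data privacy", "Does the model use personal data?", flagged=True),
            QuestionnaireItem("model fairness", "Could outputs disadvantage a group?", flagged=True),
            QuestionnaireItem("transparency", "Can decisions be explained to users?", flagged=False),
        ],
        team_notes="Fairness risk mitigated via reweighting; privacy impact still unclear.",
        resolved_by_team=False,
    )
    print(check.flagged_topics())  # ['data privacy', 'model fairness']
    print(check.route())           # escalation message, since risks remain unresolved
```

The design choice mirrors the hybrid model the paper describes: the automated part only surfaces and routes issues, while the judgment about whether a flagged risk is acceptable stays with the team and, when escalated, with human reviewers.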
Lessons learned and broader implications for responsible innovation
Through the MDEC deployment, the authors highlight several key lessons for organizations aiming to operationalize digital ethics:
- Practical Fit Matters: Embedding ethics checks into existing workflows improves adoption rates and mitigates the risk of “ethics fatigue” among employees.
- Forms Are Not Enough: Automated questionnaires alone cannot capture the complexity of real-world data ethics challenges; guided discussion and reflection are indispensable.
- Training Strengthens Oversight: Educating teams ensures they understand both the technical and ethical stakes, enhancing the quality of self-assessments.
- Empowerment Over Paternalism: Allowing teams to lead assessments and escalate only genuinely critical cases builds trust and a proactive ethics culture.
The paper also places the MDEC model within the global discourse on responsible AI and data innovation, noting that regulators and policymakers increasingly expect companies to demonstrate practical mechanisms for ensuring compliance with ethical and legal standards. The authors suggest that tools like MDEC could inform the development of future guidelines for industry-wide best practices.
FIRST PUBLISHED IN: Devdiscourse

