The AI dilemma: How organisations struggle to balance ethics, innovation and control

CO-EDP, VisionRI | Updated: 18-03-2025 22:42 IST | Created: 18-03-2025 22:42 IST

Artificial intelligence (AI) is introducing new efficiencies across industries, but its adoption also presents profound challenges, particularly in ensuring socially sustainable AI implementation.

A recent study, "Organisational tensions in introducing socially sustainable AI" by Lanne, Nieminen, and Leikas, published in AI & Society, examines the tensions that arise when AI is introduced into public and private organisations. Drawing on AI practitioners’ experiences, it identifies three key categories - values-related tensions, implementation tensions, and impact-related tensions - that shape how AI is perceived, integrated, and governed within organisations.

AI ethics vs innovation: The challenge of balancing organisational values

The first set of tensions revolves around the values that shape AI adoption within organisations. Different stakeholders - executives, employees, policymakers, and the public - hold diverse and sometimes conflicting views on what AI should prioritise. Some organisations deploy AI with the primary goal of improving efficiency, reducing costs, and enhancing productivity, while others focus on fairness, transparency, and social responsibility. This profitability vs. well-being tension raises critical ethical concerns, particularly in industries where AI affects vulnerable populations, such as healthcare, finance, and public services.

A major challenge lies in reconciling individual interests with broader societal benefits. For example, AI-driven decision-making in healthcare may optimise operational efficiency but could also introduce biases that compromise patient equity. Similarly, AI’s use in the financial sector raises concerns about algorithmic discrimination, as certain demographic groups may be unfairly profiled or excluded. The study also highlights discrepancies between stated values and hidden influences, where organisations claim to uphold ethical AI principles but fail to integrate them into real-world decision-making. Addressing these tensions requires transparent governance, cross-disciplinary dialogue, and mechanisms to ensure that AI aligns with societal values rather than purely commercial interests.
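
To make the financial-sector example concrete, a bias audit can begin with something as simple as comparing approval rates across demographic groups. The Python sketch below computes a demographic-parity gap over a hypothetical log of loan decisions; the data and column names are invented for illustration, not drawn from the study.

```python
import pandas as pd

# Hypothetical audit log of a loan-approval model's decisions
# (both columns and all values are invented for illustration).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Approval rate per demographic group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)  # group A: 0.75, group B: 0.25

# Demographic-parity gap: the spread between the highest and lowest rates.
# A large gap is one crude signal that a group may be systematically excluded.
gap = rates.max() - rates.min()
print(f"parity gap: {gap:.2f}")  # 0.50
```

A single metric like this is only a starting point, but it turns an abstract commitment to fairness into a number that can be tracked and challenged.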

AI implementation challenges: Navigating control, flexibility, and expertise

Implementing AI is not merely a technical challenge - it is a structural transformation that affects decision-making, workflows, and employee roles. Organisations face a control vs. flexibility dilemma when deploying AI, as strict regulations and governance frameworks must coexist with the need for adaptable, innovative AI applications. While regulations ensure accountability and mitigate risks, excessive bureaucratic controls can stifle innovation and hinder AI adoption in dynamic environments.

Another critical tension is the divide between technical expertise and ethical oversight. AI implementation often requires highly specialised technical skills, yet many organisations lack personnel with expertise in both AI ethics and technology. As a result, AI adoption is often siloed, with technologists focused on functionality and business leaders prioritising profitability, while ethical considerations remain secondary. This fragmented approach increases the risk of deploying AI solutions without adequate safeguards against bias, privacy violations, or unintended social consequences.

Additionally, the human vs. machine debate continues to shape AI integration. While AI can automate repetitive tasks and enhance decision-making, over-reliance on automation can erode human agency and accountability. Employees may resist AI adoption due to concerns about job displacement, loss of professional judgment, and reduced autonomy. Organisations must navigate these tensions by fostering inclusive AI adoption strategies, ensuring that AI augments human expertise rather than replacing it, and providing employees with opportunities to reskill and adapt to AI-enhanced work environments.

AI’s impact on society: Ethical risks, bias, and sustainable AI development

Beyond internal organisational challenges, AI’s impact on society at large is a major concern. One of the most pressing tensions is the polarisation vs. unification debate - whether AI will create greater social equity or reinforce existing inequalities. AI-driven decision-making can provide more accurate and data-driven solutions, yet it also risks embedding systemic biases that disproportionately affect marginalised communities. For instance, AI-powered hiring tools, loan approvals, and law enforcement technologies have been criticised for reflecting and perpetuating social prejudices, raising serious ethical and legal questions.
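
A common screening heuristic for such tools is the "four-fifths rule" used in US employment-discrimination analysis: if any group's selection rate falls below 80% of the highest group's rate, the system warrants closer scrutiny. The short Python sketch below applies that check to invented selection rates; nothing here comes from the study itself.

```python
# Illustrative selection rates from a hypothetical hiring tool's outcomes.
selection_rates = {"group_A": 0.50, "group_B": 0.30}

# Four-fifths rule: compare each group's rate to the most-selected group's.
reference = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / reference
    verdict = "OK" if ratio >= 0.8 else "potential disparate impact"
    print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {verdict}")
```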

The study also explores the facilitation vs. confusion tension in AI’s role in decision-making. While AI can simplify complex tasks, the opacity of AI models can make it difficult for users to understand, trust, and challenge AI-generated outcomes. The ‘black box’ problem in AI contributes to ethical dilemmas, as individuals affected by AI decisions may struggle to contest or appeal them. Ensuring AI transparency and interpretability is critical for fostering public trust and responsible AI adoption.
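
One model-agnostic way to probe a black box is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy degrades. The sketch below illustrates the idea with scikit-learn on a synthetic classifier; the model and data are stand-ins, not the systems examined in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an opaque decision model (illustrative data only).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy:
# large drops flag the inputs the black box actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Surfacing even this coarse ranking gives affected users and auditors something concrete to question, which is a step toward contestable AI decisions.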

Another long-term challenge is AI’s environmental footprint. The energy-intensive nature of AI training and deployment presents sustainability concerns, particularly in the context of corporate sustainability initiatives. Organisations must consider AI’s ecological impact alongside its social and economic implications, ensuring that AI development aligns with broader sustainability goals rather than contributing to unsustainable technological expansion.
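
The rough scale of that footprint can be estimated with a standard back-of-envelope formula: energy ≈ number of accelerators × average power × runtime × data-centre PUE, with emissions then following from the local grid's carbon intensity. The Python sketch below works through one such estimate; every input value is an illustrative assumption, not a measurement.

```python
# Back-of-envelope estimate of a training run's energy and carbon footprint.
# Every number below is an illustrative assumption, not a measured value.
num_accelerators = 64        # GPUs/TPUs used for the run
avg_power_kw     = 0.4       # average draw per accelerator, in kW
runtime_hours    = 24 * 14   # a two-week training run
pue              = 1.3       # data-centre power usage effectiveness overhead
grid_kg_per_kwh  = 0.4       # grid carbon intensity (varies widely by region)

energy_kwh = num_accelerators * avg_power_kw * runtime_hours * pue
co2_tonnes = energy_kwh * grid_kg_per_kwh / 1000
print(f"~{energy_kwh:,.0f} kWh, ~{co2_tonnes:.1f} tonnes CO2")
```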

Path to socially sustainable AI

Understanding and addressing these organisational tensions is a crucial step toward socially sustainable AI. Organisations must move beyond high-level ethical guidelines and implement concrete strategies that balance efficiency, fairness, and accountability. This involves:

  • Developing robust AI governance frameworks that integrate ethics, compliance, and risk assessment into AI decision-making (a minimal sketch of how such a framework might be encoded follows this list).
  • Fostering interdisciplinary collaboration between technologists, ethicists, policymakers, and industry leaders to ensure AI is designed with a holistic understanding of its societal impact.
  • Enhancing AI literacy and training within organisations, ensuring that employees at all levels understand AI’s implications and can participate in responsible AI adoption.
  • Prioritising transparency and accountability, requiring AI systems to provide clear, explainable decision-making processes.
  • Engaging stakeholders and impacted communities in AI discussions to address concerns, gather diverse perspectives, and build trust in AI technologies.
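
As a rough illustration of the first point above, a governance framework can be made operational rather than aspirational, for example as a pre-deployment checklist that blocks release until every review item passes. The Python sketch below is hypothetical; the class and check names are invented, not taken from the study.

```python
from dataclasses import dataclass, field

@dataclass
class AIDeploymentReview:
    """Hypothetical pre-deployment governance gate for an AI system."""
    system_name: str
    purpose: str
    checks: dict = field(default_factory=lambda: {
        "bias_audit_completed": False,
        "privacy_impact_assessed": False,
        "decisions_explainable_to_users": False,
        "human_override_defined": False,
        "affected_stakeholders_consulted": False,
    })

    def approved(self) -> bool:
        # Release is blocked until every governance check passes.
        return all(self.checks.values())

review = AIDeploymentReview("loan-scoring-v2", "consumer credit decisions")
review.checks["bias_audit_completed"] = True
print("approved for deployment:", review.approved())  # False: checks remain
```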

By identifying and managing these tensions, organisations can mitigate AI’s risks while maximising its benefits. Rather than viewing AI implementation as a purely technical or economic challenge, businesses and policymakers must adopt a human-centred, sustainability-driven approach. Only then can AI truly serve as a force for social good, fostering inclusivity, fairness, and long-term resilience in a rapidly evolving technological landscape.
