Corporate responsibility faces digital reckoning amid rapid technological change
With businesses undergoing rapid digital transformation, traditional notions of corporate responsibility (CR) are being challenged in unprecedented ways. A new study examines how digitalization reshapes the ethical obligations of companies in the evolving digital economy.
Published in the California Management Review, the study "Corporate Responsibility Meets the Digital Economy" details how digital technologies, from artificial intelligence to platform-driven ecosystems, are forcing businesses, policymakers, and scholars to rethink fundamental assumptions about responsibility, stakeholders, and ethics.
What are companies now responsible for in the digital era?
The study asks a fundamental question: Responsible for what? According to the authors, digitalization alters the landscape of corporate responsibility in several ways. It makes existing issues appear in new forms, intensifies old ethical concerns, and, in some cases, helps solve problems that have long troubled businesses.
For example, while concerns about privacy have always existed, big data and advanced analytics have amplified risks, enabling unprecedented levels of consumer tracking and manipulation. Digital marketing, powered by algorithms, can tailor content and pricing dynamically, benefiting some consumers while exploiting others’ vulnerabilities. Similarly, algorithmic management systems, increasingly used to monitor employees and suppliers, promise consistency but raise ethical issues around surveillance, bias, and respect for individual autonomy.
The study also reveals that digitalization intensifies environmental and social challenges. Ultra-fast fashion models, fueled by real-time data, exacerbate environmental harm. Massive data centers and AI model training significantly increase energy consumption. At the same time, new issues such as misinformation, mental health impacts of social media, and deepfake technologies emerge as pressing corporate responsibilities. These developments underscore that the scope of CR now extends far beyond traditional domains, requiring companies to address societal and psychological impacts alongside operational concerns.
Despite these challenges, digitalization offers opportunities to solve some CR issues. Enhanced transparency enabled by data and real-time communication allows companies to trace supply chains, detect unethical practices, and strengthen their sustainability efforts. Artificial intelligence, when used responsibly, can support progress toward global goals such as reducing emissions, improving health outcomes, and achieving social sustainability targets.
Who are the stakeholders in a digital economy?
The study raises another key question: Responsible toward whom? Traditionally, corporate responsibility has focused on human stakeholders: customers, employees, suppliers, and communities. However, digitalization shifts stakeholder dynamics by amplifying the power, legitimacy, and urgency of certain groups while diminishing others.
Social media, for example, allows marginalized groups to amplify their voices and exert pressure on companies. Activists can now influence corporate behavior through digital campaigns, increasing stakeholder salience in ways that were impossible a decade ago. At the same time, companies face new risks from misinformation and the rapid spread of false narratives online, which can unfairly damage reputations or push firms in undesirable directions.
Beyond human stakeholders, the study opens a provocative discussion on whether non-human entities, such as algorithms and robots, might eventually be considered stakeholders. While controversial, this question gains traction as artificial intelligence becomes increasingly autonomous. The possibility that machines could one day be regarded as moral patients with rights challenges long-held assumptions about who, or what, deserves ethical consideration in corporate decision-making.
Who bears responsibility in an era of blurred boundaries?
The third and most disruptive question raised by the authors is: Who is responsible? The authors argue that digitalization blurs traditional boundaries of responsibility, creating complex challenges for businesses and policymakers.
In digital ecosystems, where multiple firms collaborate through platforms, identifying who is accountable for harm becomes difficult. When a ride-sharing platform fails to protect its drivers, is responsibility borne by the platform, the contractors, or both? Similarly, peer-to-peer marketplaces raise questions about whether accountability lies with the firm enabling transactions or with individual participants.
The rise of autonomous technologies further complicates responsibility. Self-driving cars, AI decision-makers, and autonomous robots act with increasing independence, raising scenarios where no human actor can be clearly identified as accountable. This “responsibility gap” forces companies and regulators to reconsider how liability should be assigned when harm occurs. Should responsibility remain with the developer, the user, or, eventually, the machine itself?
Moreover, the digital economy disrupts the idea of clearly defined corporate entities. Companies now operate within interconnected networks where actions and outcomes arise from the interactions of multiple parties. This fragmentation of responsibility makes it harder to pinpoint accountability, a challenge that is likely to grow as technologies evolve.
Implications for business, policy, and society
According to the authors, these shifts have profound implications for multiple audiences. For businesses, managers must develop digital literacy to navigate the ethical impacts of technology, anticipate how CR issues evolve, and establish clear responsibility boundaries with partners and AI systems. Digital ethics cannot be siloed; it must be integrated into overall strategy and operations.
For policymakers, the research warns that regulations must adapt to cover new contexts. Initiatives like the EU’s AI Act and Platform Work Directive reflect early efforts to address algorithmic management, AI liability, and the treatment of gig workers. However, as technology outpaces legislation, governments will need to develop stricter frameworks to close responsibility gaps and protect stakeholders.
For CR advocates and scholars, the study calls for a reevaluation of fundamental concepts. The emergence of machine actors, new stakeholder groups, and blurred accountability lines requires fresh theoretical approaches. Scholars must examine whether traditional moral and legal frameworks can still guide corporate ethics in a world where machines play a central role.
- FIRST PUBLISHED IN:
- Devdiscourse

