India Charts Techno-Legal Path for Responsible AI Ahead of AI Impact Summit 2026
- Country: India
The Office of the Principal Scientific Adviser (OPSA) to the Government of India, in collaboration with the iSPIRT Foundation and the Centre for Responsible AI at IIT Madras, convened a high-level roundtable on “Techno-Legal Regulation for Responsible, Innovation-Aligned AI Governance” on 22 December 2025. The roundtable was organised as an official pre-summit engagement ahead of the India AI Impact Summit 2026 and was chaired by Prof. Ajay Kumar Sood, Principal Scientific Adviser (PSA) to the Government of India.
The closed-door consultation brought together senior government officials, academic leaders, industry practitioners, legal experts, and AI policy specialists to deliberate on India’s evolving approach to governing artificial intelligence through integrated technological and legal frameworks.
High-Level Participation from Government, Academia and Industry
The roundtable was attended by Dr. Preeti Banzal, Adviser and Scientist ‘G’, Office of the PSA; Ms. Kavita Bhatia, Scientist ‘G’ and Group Coordinator, Ministry of Electronics and Information Technology (MeitY); Mr. Hari Subramanian, Volunteer at iSPIRT Foundation and Co-founder & CEO of Niti AI; Prof. Balaraman Ravindran, Head, Centre for Responsible AI, IIT Madras; Prof. Mayank Vatsa, Professor, IIT Jodhpur; Ms. Jhalak Kakkar, Director, Centre for Communication Governance, National Law University Delhi; and Mr. Abilash Soundararajan, Founder & CEO, PrivaSapien, among other senior stakeholders and subject-matter experts.
The diversity of participants reflected the interdisciplinary nature of AI governance, spanning public policy, computer science, data protection, constitutional law, cybersecurity, and emerging digital markets.
India’s Techno-Legal Vision for AI Governance
Setting the context, Dr. Preeti Banzal outlined India’s broader approach to techno-legal regulation, emphasising that governance frameworks must be grounded in practical implementation mechanisms rather than purely normative guidelines. She highlighted the need for India to demonstrate exemplary pathways for AI governance that combine enabling policy structures, institutional capacity building, and international cooperation, positioning India as a credible voice in global AI discussions.
In his keynote address, Prof. Ajay Kumar Sood underscored India’s readiness to adopt a techno-legal approach to AI governance—one that embeds legal and regulatory principles directly into AI system design and deployment. He emphasised that accountability, transparency, data protection, and cybersecurity must be built by design, rather than enforced retrospectively through compliance checks.
Prof. Sood encouraged participants to explore all plausible pathways for constructing a techno-legal governance framework that balances innovation with safeguards, noting that such an approach is particularly important given the non-deterministic and adaptive nature of modern AI systems.
Key Challenges: Privacy, Performance and Equity
Co-moderators Mr. Hari Subramanian and Prof. Balaraman Ravindran guided discussions on core technical and regulatory challenges, including data protection, information leakage risks, differential privacy, model accuracy, and system throughput. They highlighted the inherent trade-offs between privacy-preserving techniques and system performance, stressing the need for context-sensitive governance metrics.
The discussions also emphasised broader considerations such as equitable access to AI, data sovereignty, and the economic and strategic implications of AI adoption for India. Participants noted that governance frameworks must account for India’s scale and diversity, ensuring that responsible AI practices do not inadvertently exclude smaller enterprises or marginalised communities.
Consent, DEPA and Compliance-by-Design
Experts stressed the importance of robust data privacy and consent mechanisms across the entire AI lifecycle—from data collection and model training to inference and deployment. Strong convergence with India’s Data Empowerment and Protection Architecture (DEPA) was identified as a key enabler for trustworthy data sharing and user-centric consent management.
The roundtable also highlighted the need for compliance-by-design architectures, which would allow AI systems developed in India to scale globally while adhering to diverse regulatory regimes. Such architectures were seen as critical for positioning Indian AI solutions as both innovative and trustworthy in international markets.
Addressing Non-Deterministic AI and AI-Generated Content
Participants deliberated on regulatory responses to non-deterministic AI systems, which pose unique challenges for accountability and explainability. Discussions also covered governance issues related to AI-generated content, including copyright, attribution, and liability, with participants acknowledging the difficulty of operationalising legal principles in rapidly evolving technical environments.
A recurring theme was the need to balance AI model robustness against technical, economic, and societal trade-offs, ensuring that governance solutions remain practical, accessible, and consumable for end users and developers alike.
Towards Standardised Evaluation and Policy Translation
The roundtable underscored the urgency of developing a standardised evaluation framework for responsible AI that spans the full lifecycle of AI systems. Participants emphasised that insights from technical evaluations must be translated into effective policy levers, enabling regulators to respond proactively to emerging risks without stifling innovation.
Embedding safety, accountability, and governance mechanisms directly into AI technology stacks was seen as essential for mitigating risks while promoting inclusive and equitable access to AI capabilities.
Way Forward: White Paper and Global Leadership
The roundtable concluded with a vote of thanks by Dr. Preeti Banzal, who noted that the deliberations would directly inform the Safe and Trusted AI Chakra of the India AI Impact Summit 2026. She announced that the Office of the PSA will release an explanatory white paper on Techno-Legal Regulation for AI Governance, incorporating the recommendations and insights generated during the discussions.
The outcomes of the roundtable are expected to strengthen India’s efforts to build a pro-innovation, trustworthy AI ecosystem, while reinforcing the country’s role as a thought leader in shaping global AI governance norms.