Posted on March 05, 2026 By Dr. Manisha Nene

Key Insights from the SEAP Event on Deciphering Responsible AI
Artificial Intelligence is rapidly moving from experimentation to enterprise deployment. As organizations integrate AI into critical decision-making systems, the conversation is shifting from “Can we build it?” to “How do we govern it responsibly?”
At a recent SEAP event on Deciphering Responsible AI, industry leaders, policymakers, and technology practitioners gathered to discuss how India is approaching responsible AI and what it means for enterprises navigating the next wave of digital transformation.
The session featured Dr. Manisha Nene, former Director of the School of Mathematical Sciences and Computer Engineering at the Defence Institute of Advanced Technology (DIAT), DRDO, and Mr. Avinash Agarwal, Indian Telecommunication Service (ITS) officer and contributor to India’s AI governance guidelines.
The event brought together CXOs, technology leaders, and policy experts to explore the intersection of AI innovation, cybersecurity, governance, and regulatory readiness.
Why AI Governance Is Becoming Critical
Digital systems today influence economic decisions, business outcomes, and even public trust. With AI increasingly embedded into enterprise workflows, governance can no longer be an afterthought.
As highlighted during the discussion, risks associated with AI systems include:
- Financial losses resulting from automated decisions
- Reputational risks arising from biased or incorrect AI outputs
- Cybersecurity vulnerabilities in AI-enabled systems
- Data privacy concerns and regulatory exposure
- Misuse of digital platforms through synthetic content or automated manipulation
Lessons from the evolution of cybersecurity governance show that institutions often build regulatory frameworks only after experiencing significant incidents. AI governance offers an opportunity to be proactive rather than reactive.
AI as Strategic Infrastructure
One of the most important themes from the session was the idea that Artificial Intelligence should be treated as strategic national infrastructure, rather than merely an IT capability.
India’s AI Governance Framework, released in November 2025 by the Office of the Principal Scientific Adviser, reflects this strategic positioning.
The framework aligns with India’s long-term vision of becoming a technology-driven economy by the centenary of independence in 2047, positioning AI as a force multiplier across sectors including finance, healthcare, manufacturing, and digital public services.
Rather than limiting innovation, the framework aims to ensure that technological progress and responsible governance evolve together.
India’s Principles-Based Governance Model
India’s AI governance philosophy is built on principles rather than rigid regulation, enabling innovation while encouraging accountability.
The framework outlines seven guiding principles, often described as the operational DNA for responsible AI systems.

1. Trust
AI systems must build and sustain trust among users, organizations, and institutions.
2. Human-Centric Design
Human oversight remains essential, particularly for high-risk decision-making systems.
3. Innovation
The framework encourages responsible innovation and capability development.
4. Fairness and Equity
AI systems must reduce bias and ensure inclusive outcomes.
5. Accountability
Developers and deployers of AI systems must remain responsible for outcomes.
6. Explainability
AI-driven decisions should be traceable and explainable.
7. Safety and Sustainability
AI systems must protect societal interests and long-term technological sustainability.
Together, these principles provide a balanced foundation for scaling AI responsibly.
India’s Position in the Global AI Governance Landscape
Global approaches to AI governance vary significantly.
The European Union has adopted a highly structured regulatory framework through the EU AI Act, introducing strict compliance requirements across sectors.
The United States, in contrast, operates through a fragmented regulatory environment, with multiple state-level and agency-level policies.
India’s model takes a pragmatic middle path, emphasizing:
- Sector-led regulation
- Risk-based governance
- Innovation-friendly policy frameworks
This approach enables flexibility while ensuring safeguards against misuse.
Enterprise Implications: Moving from “Wait and Watch” to “Compliance by Design”
One of the strongest messages for business leaders was the need to shift from reactive compliance to proactive governance.
Organizations implementing AI should begin integrating governance mechanisms early in their AI journey.
Key enterprise actions include:
AI Risk Mapping
Identify and classify AI systems based on their impact and risk levels.
Bias and Fairness Audits
Evaluate AI models to detect demographic bias or unintended outcomes.
Explainability Frameworks
Ensure that AI decisions can be understood and justified.
Governance Integration
Embed AI governance into enterprise risk management frameworks.
Board-Level Oversight
AI governance should be treated as a strategic leadership responsibility, not just a technical function.
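As a concrete illustration of the bias and fairness audit step above, a common starting point is a demographic parity check: comparing a model's approval rates across groups. The sketch below is a minimal, hypothetical example; the group names, decision data, and review threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a bias/fairness audit via demographic parity.
# Group names, toy decisions, and the threshold are illustrative only.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of binary decisions (1 = approved)."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def demographic_parity_gap(outcomes):
    """Largest difference in approval rate between any two groups."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy audit: decisions from a hypothetical credit-scoring model, split by group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved -> 0.750
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved -> 0.375
}

THRESHOLD = 0.2  # illustrative tolerance; real thresholds are policy decisions
gap = demographic_parity_gap(decisions)
print(f"parity gap = {gap:.3f}, flag for review: {gap > THRESHOLD}")
```

In practice such a check is one signal among many (equalized odds, calibration, and qualitative review all matter), but even this simple metric makes the audit step auditable rather than aspirational.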
AI Sovereignty and Indigenous Innovation
The discussion also highlighted the growing importance of AI sovereignty.
Many organizations currently depend on large global AI platforms for foundational models and APIs. While these tools provide powerful capabilities, they also raise important concerns related to:
- Data sovereignty
- Dependence on foreign platforms
- Intellectual property risks
- Cultural representation in AI models
This is driving renewed focus on indigenous AI development and domain-specific models, especially for sensitive sectors.
In many enterprise use cases, small language models (SLMs) and domain-specialized models may offer better control, efficiency, and governability than large general-purpose models.
Managing Emerging Risks: Deepfakes and Synthetic Content
Another emerging governance challenge is the rapid rise of AI-generated content and deepfakes.
Synthetic media can be misused for misinformation, fraud, or reputational damage. Policymakers and industry stakeholders are increasingly exploring mechanisms such as:
- Digital watermarking of AI-generated content
- Blockchain-based provenance tracking
- Disclosure requirements for synthetic media
- Detection technologies for deepfakes
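Watermarking and provenance mechanisms like those above generally work by binding a verifiable record to content at generation time. The sketch below illustrates the idea with a hashed, signed manifest; the field names and HMAC signing scheme are simplifying assumptions for illustration, not a depiction of any specific standard such as C2PA.

```python
# Minimal sketch of content provenance: hash the generated media and sign a
# manifest recording its origin. Field names are illustrative, not a standard.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real asymmetric signing key

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Bind a content hash and generator label into a signed manifest."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "synthetic": True,  # disclosure flag for AI-generated media
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, record: dict) -> bool:
    """Check that the hash matches the content and the signature is intact."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (record["sha256"] == hashlib.sha256(content).hexdigest()
            and hmac.compare_digest(expected, record["signature"]))

media = b"example AI-generated image bytes"
record = make_provenance_record(media, "hypothetical-model-v1")
print(verify(media, record))              # True: content and manifest agree
print(verify(b"tampered bytes", record))  # False: hash no longer matches
```

Real deployments would use public-key signatures so anyone can verify without the signing secret, but the core design is the same: tampering with either the content or the disclosure flag invalidates the record.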
These safeguards are expected to play an important role in maintaining trust in digital ecosystems.
Key Takeaways for CXOs and Technology Leaders
The discussion concluded with several strategic insights for organizations preparing for the AI era.
AI governance must move beyond IT teams and become a board-level priority.
Leaders should focus on:
- Building governance frameworks early
- Embedding accountability across AI development and deployment
- Investing in transparency and explainability
- Participating in policy discussions shaping the future of AI regulation
Organizations that move early will likely shape industry standards rather than simply follow them.
Looking Ahead
Artificial Intelligence is transforming industries, economies, and societies at an unprecedented pace.
India’s emerging governance framework reflects a balanced approach that encourages innovation while safeguarding societal interests.
For enterprises, the message is clear:
AI governance is no longer optional. It is becoming a strategic capability that will define long-term competitiveness and trust.
Speaker Profiles
Dr. Manisha Nene
Dr. Manisha Nene is a distinguished academic and technology leader with over two decades of experience across academia, research, and industry.
She has served as Director of the School of Mathematical Sciences and Computer Engineering and Head of the Department of Computer Science & Engineering at the Defence Institute of Advanced Technology (DIAT), DRDO, under the Ministry of Defence.
Dr. Nene has guided multiple doctoral researchers and more than seventy postgraduate theses in areas including Cybersecurity and Artificial Intelligence. Her research contributions include over 160 international publications and several Indian patents.
She has been actively involved in national technology initiatives, including participation in DRDO’s National Task Force for Quantum Computing, and has contributed to large-scale AI and cybersecurity training programs across India.
Dr. Nene also represents India in international technology forums and has contributed to discussions on Responsible AI, Digital Public Infrastructure, and emerging technology governance.
Mr. Avinash Agarwal
Mr. Avinash Agarwal is an Indian Telecommunication Service (ITS) officer from the 1992 batch with more than three decades of experience in telecommunications, information technology, broadcasting, and AI governance within the Government of India.
He recently served as Deputy Director General (International Relations) in the Department of Telecommunications, where he led India’s engagement with international organizations including the International Telecommunication Union (ITU) and other multilateral technology bodies.
Mr. Agarwal was also an invitee member of the drafting committee for the India AI Governance Guidelines released by MeitY in November 2025.
His work spans technology policy, AI governance, telecom standardization, and digital infrastructure, and he actively contributes to international standards development through roles in various ITU study groups.
