IAS/UPSC Coaching Institute  

Editorial 2: Model Conduct

Context

India must expand access to AI infrastructure and systematically upskill its workforce to remain competitive.


Introduction

India’s approach to Artificial Intelligence regulation remains piecemeal, relying on existing IT, financial, and data protection laws rather than a unified AI safety framework. While this limits intrusive surveillance, it leaves gaps in consumer protection, especially for psychological harms. As AI adoption deepens, India must balance innovation, capacity-building, and responsible governance.


India’s Current AI Regulatory Approach

  • India relies on due diligence under the IT Act and Rules, along with financial regulation and data protection norms.
  • This approach manages adjacent risks but does not establish a clear state duty of care for AI consumer safety, especially psychological harm.
  • Regulation remains fragmented and incomplete, banking largely on existing laws rather than a dedicated AI safety framework.


Comparative Global Developments

  • China’s draft AI rules target emotionally interactive services, mandating warnings against excessive use and intervention in extreme emotional states.
  • While justified in addressing psychological dependence, such rules risk intrusive monitoring by incentivising deeper user surveillance.
  • India’s stance is less intrusive, but also less comprehensive, as it avoids defining clear safety obligations.


Sectoral and Institutional Measures in India

  • MeitY has acted through the IT Rules to curb deepfakes and fraud, and to mandate labelling of synthetically generated content, largely in a reactive manner.
  • RBI has introduced expectations on model risk in credit and developed the FREE-AI framework.
  • SEBI has pushed for clear accountability in how regulated entities deploy AI tools.


Building Capacity While Regulating Use

  • India lags behind the U.S. and China in building frontier AI models, despite having a large adoption ecosystem.
  • A “regulate first, build later” approach risks deepening foreign dependency because of limited domestic capacity.
  • Priority areas include access to compute, workforce upskilling, public procurement, and research-to-industry translation.
  • Regulation should focus more assertively on downstream, high-risk uses—through incident reporting and product accountability—without stifling upstream innovation or mandating intrusive emotional surveillance.


Conclusion

India should avoid the trap of overregulation without capability. Strengthening compute access, workforce upskilling, and domestic frontier models must go hand in hand with downstream accountability for high-risk AI uses. By emphasising incident reporting, product safety, and a clear duty of care, India can protect users without choking innovation or deepening foreign dependence.