IAS/UPSC Coaching Institute  

Editorial 1: Digital child abuse, the danger of AI-based exploitation

Context

The Indian government should update its existing laws to address emerging threats such as AI-generated child sexual abuse material (CSAM).

 

Introduction

Recently, the UK’s Department for Science, Innovation and Technology, along with the AI Security Institute (formerly the AI Safety Institute), released the first International AI Safety Report 2025 (updated on February 18, 2025). The report warns about the growing risk of AI being used to create, store, and share child sexual abuse material (CSAM).

 

AI and the Rising Threat of CSAM: Global and Indian Perspectives

  • United Kingdom: The first country to introduce laws targeting AI tools that generate child sexual abuse material (CSAM).
  • CSAM: Includes audio, video, and images depicting sexually explicit content involving children.
  • World Economic Forum (2023): Warned that generative AI can create realistic images, especially of children.
  • Internet Watch Foundation (Oct 2024): Reported the increasing spread of CSAM on the open web.
  • Government of India: Should update laws to tackle emerging threats and ensure long-term protection.

 

U.K.’s New AI-Centric Approach to Tackling CSAM

Key Provisions of the Upcoming U.K. Legislation

  • Ban on AI tools: Illegal to possess, create, or distribute AI tools that generate child sexual abuse material (CSAM).
  • Prohibition of paedophile manuals: Criminalizes possession of manuals that guide individuals in using AI tools to create CSAM.
  • Shift in legal approach: Moves from an ‘accused-centric’ and ‘act-centric’ model to a ‘tool-centric’ approach.

 

Comparison of Existing and Proposed Laws

| Aspect | Existing Laws | Proposed U.K. Law |
| --- | --- | --- |
| Focus | Who committed the act and what was done | Tool/medium used for the crime |
| Relevant Laws | Protection of Children Act 1978: criminalizes taking, distributing, and possessing indecent images of children. Coroners and Justice Act 2009: criminalizes possession of prohibited child images, including non-photographic materials. | Outlaws possession and use of AI tools that generate CSAM |
| Prevention Stage | Punishes offenders after the crime is committed | Allows authorities to intervene at the preparation stage |
| Scope of CSAM | Restricted to actual child images | Covers AI-generated CSAM, closing legal loopholes |
| Impact on Mental Health | Indirect approach to curbing the spread of CSAM | Prevents the initial ripple effects on children’s mental health |

 

Significance of the Proposed Law

  • Stronger deterrence: Criminalizing AI tools prevents misuse at the source.
  • Proactive enforcement: Enables authorities to act before the crime occurs.
  • Mental health protection: Reduces the psychological impact of CSAM exposure.
  • Legislative clarity: Addresses AI-generated CSAM, previously unregulated.

 

Is India Future-Ready?

Rising Cybercrimes Against Children and Legal Gaps in AI-Generated CSAM

  • Increase in Cybercrimes: The NCRB Report 2022 shows a significant rise in cybercrimes against children compared to the previous year.
  • Incidents of Child Pornography: The National Cyber Crime Reporting Portal (NCRP) recorded 1.94 lakh cases of child pornography as of April 2024 under the Cyber Crime Prevention against Women and Children (CCPWC) scheme.
  • Collaboration with NCMEC, USA: Since 2019, the NCRB has partnered with the National Center for Missing & Exploited Children (NCMEC), USA, receiving 69.05 lakh cyber tip-line reports, which were shared with States and Union Territories as of March 2024.
  • Serious Violation of Child Rights: These figures highlight CSAM as a grave threat to a child’s right to life and dignity in India.

 

Existing Legal Framework for CSAM in India

| Law/Section | Key Provisions | Scope |
| --- | --- | --- |
| Section 67B, IT Act 2000 | Punishes those who publish or transmit material depicting children in sexually explicit acts online. | Covers electronic transmission but not AI-generated content. |
| Sections 13, 14, 15, POCSO Act 2012 | Prohibit using children for pornography, storing child pornography, and using children for sexual gratification. | Protect real children but do not explicitly address AI-generated CSAM. |
| Section 294, Bharatiya Nyaya Sanhita | Penalizes the sale, distribution, or public exhibition of obscene materials. | Focuses on general obscenity, not AI-generated images. |
| Section 295, Bharatiya Nyaya Sanhita | Makes it illegal to sell, distribute, or exhibit obscene objects to children. | Does not specifically cover AI-generated CSAM. |

 

Legal Gaps in Addressing AI-Generated CSAM

  • No Specific Laws for AI-Generated CSAM: Current laws only address real child abuse material but do not criminalize AI-created CSAM.
  • Limited Regulation of AI Tools: Unlike the proposed U.K. law, Indian laws do not ban AI tools used to generate CSAM.
  • Enforcement Challenges: Without a clear legal framework, authorities struggle to act against AI-based CSAM before it spreads.

 

Strengthening India’s Legal Framework to Combat CSAM

Key Legislative and Policy Reforms Needed

  1. Expand the Definition of CSAM
    • As per the NHRC Advisory (Oct 2023), replace the term ‘child pornography’ in the POCSO Act with ‘CSAM’ to cover a broader range of content.
  2. Clarify ‘Sexually Explicit’ Under IT Act
    • Define ‘sexually explicit’ under Section 67B of the IT Act to facilitate real-time identification and blocking of CSAM.
  3. Regulate Online Intermediaries
    • Amend the IT Act to explicitly include Virtual Private Networks (VPNs), Virtual Private Servers (VPS), and Cloud Services as ‘intermediaries’.
    • Impose statutory liability on these entities to comply with CSAM-related laws.
  4. Integrate Emerging Technological Risks
    • Introduce statutory amendments to address AI-generated CSAM and other threats from new technologies.
  5. Adopt International Standards
    • Support the UN Draft Convention on ‘Countering the Use of Information and Communications Technology for Criminal Purposes’ at the UN General Assembly.

 

Conclusion

The Ministry of Electronics and Information Technology has proposed the Digital India Act 2023 to replace the outdated IT Act 2000. Since this legislation is still being drafted, it should draw inspiration from the U.K.'s new law and include specific provisions to tackle AI-generated CSAM.