India’s Deepfake Crackdown Signals Tougher AI Rules on Fake Content

In what marks one of the first formal steps towards regulating the use of artificial intelligence (AI) in India, the Union government has released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, seeking to establish legal guardrails around synthetically generated information, including deepfakes.

The Ministry of Electronics and Information Technology (MeitY), which notified the draft, has invited feedback on the proposed changes by November 6.

The government said the amendments are aimed at ensuring an “open, safe, trusted and accountable internet”, amid the rapid rise of generative AI tools and the growing risk of misuse through synthetic content that can mislead, impersonate or manipulate.

The ministry noted that the proliferation of deepfake videos and AI-generated content has significantly increased the potential for harm, from spreading misinformation and manipulating elections to impersonating individuals or creating non-consensual intimate imagery.

Recognising these risks and following public consultations and parliamentary discussions, MeitY has proposed strengthening the due diligence obligations of intermediaries, especially social media intermediaries, significant social media intermediaries and platforms that facilitate the creation or modification of synthetic media.

Synthetic Information Defined

For the first time, the draft amendments introduce a definition of “synthetically generated information”, described as information “artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that reasonably appears to be authentic or true”.

The rules clarify that references to “information” used for unlawful acts under the IT Rules will now explicitly include such synthetically generated content.

To ensure transparency and accountability, the amendments propose mandatory labelling and metadata embedding for all synthetic content.

Intermediaries that enable the creation or modification of AI-generated media will be required to ensure that such content carries permanent, unique metadata or an identifier that cannot be removed or altered.

This label must be prominently displayed or made audible, covering at least 10% of the visual display area or, in the case of audio content, the initial 10% of its duration, so users can immediately identify that the content is synthetic.
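The draft leaves implementation to platforms, but the arithmetic of the 10% rule is mechanical. A minimal sketch of one possible reading (the function names and the full-width-banner interpretation are illustrative assumptions, not official guidance): compute the smallest full-width banner that covers at least 10% of an image's display area, and the initial segment of an audio clip that must carry the audible notice.

```python
def label_banner_height(width_px: int, height_px: int) -> int:
    """Minimum height (px) of a full-width banner so the label covers
    at least 10% of the visual display area, per the draft's threshold."""
    total_area = width_px * height_px
    required_area = -(-total_area // 10)        # ceil(total_area / 10)
    return -(-required_area // width_px)        # ceil(required_area / width)

def labelled_audio_seconds(duration_s: float) -> float:
    """Length of the initial audible-notice segment: first 10% of duration."""
    return duration_s * 0.10

# A 1920x1080 frame needs a banner of at least 108 px;
# a 60-second clip needs the notice within its first 6 seconds.
print(label_banner_height(1920, 1080))  # 108
print(labelled_audio_seconds(60.0))     # 6.0
```

How the threshold maps onto overlays, watermarks or subtitles for mixed media is exactly the kind of detail the consultation is likely to surface.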

Significant social media intermediaries will also face enhanced responsibilities. They must require users to declare whether any uploaded content is synthetically generated and deploy reasonable and proportionate technical measures to verify these declarations. If the information is confirmed to be synthetic, the platform must clearly display an appropriate label or notice indicating that it is such.

Failure to act against synthetically generated information in violation of these rules will be considered a lapse in due diligence.

The amendments further specify that intermediaries acting in good faith to remove or disable access to synthetic information based on grievances will continue to enjoy protection under Section 79(2) of the IT Act, which grants them exemption from liability for third-party content.

The Rationale

MeitY said the rationale behind the proposed legal framework stems from recent incidents of deepfake media being weaponised to damage reputations, spread falsehoods, influence elections or commit fraud.

Policymakers worldwide have raised concerns about such fabricated content, which is increasingly indistinguishable from authentic material and threatens public trust in digital information ecosystems.

These issues have also been debated in both Houses of Parliament in India, prompting the ministry to issue earlier advisories to intermediaries urging stronger controls on deepfakes.

The government said the proposed rules will establish a clear legal basis for labelling, traceability and accountability of AI-generated content. They are designed to balance user protection and innovation by mandating transparency without stifling technological advancement.

If adopted, the amendments would make India one of the first countries to codify rules specifically addressing synthetic and AI-generated information.

The framework aims to empower users to distinguish authentic content from manipulated or fabricated material while ensuring that intermediaries and platforms hosting such information remain accountable.

MeitY has invited stakeholders and the public to submit comments or suggestions on the draft rules by email to itrules.consultation@meity.gov.in in MS Word or PDF format by November 6.

Akif Khan, a VP analyst at Gartner, said the law would be a step in the right direction, particularly in requiring social media platforms to label user-posted content.

However, he noted that the draft's requirement of “reasonable and proportionate technical measures” for doing so is open to interpretation.

Furthermore, the traditional challenges of jurisdiction and applicability remain, he said, questioning whether the law would extend to social media posts made by users in other countries that Indian citizens could potentially see.

“Those challenges will need to be resolved for the law to have the intended positive impact in the Indian context,” he added.

Who Keeps a Check?

The draft also proposes stronger safeguards for senior-level accountability, precise identification of unlawful content and periodic review of government directions.

The amendments specify that only a senior officer not below the rank of joint secretary (or a director or equivalent where no joint secretary is appointed) can issue removal notices to intermediaries. For police authorities, only a deputy inspector general of police (DIG) who has been specially authorised can issue such intimations.

These notices must provide a clear legal basis, specify the unlawful act and identify the exact content to be removed, replacing earlier broad references with “reasoned intimation” under Section 79(3)(b) of the IT Act, according to an official statement.

All notices under Rule 3(1)(d) will be reviewed monthly by an officer not below the rank of secretary of the appropriate government to ensure necessity and proportionality.

The amendments aim to balance citizens’ rights with state regulatory powers, ensuring transparent and precise enforcement.

In a LinkedIn post, Rakesh Maheshwari, former senior director and group coordinator of cyber laws, cyber security and data governance at MeitY, said the amendment brings much-needed clarity on who can issue takedown notices.

He added that checks and balances have also been put in place to ensure proper accountability and enable follow-up.

“I believe that the earlier notifications designating multiple officers by each agency/state governments and published by various government departments (including state governments) will now be reviewed,” he said.

Refinement Needed

Cyber law advocate Prashant Mali welcomed the amendments for addressing deepfake risks.

“The devil, as always, hides in the algorithm,” he noted, highlighting the need for clear benchmarks and careful implementation to balance innovation with regulation.

According to him, requiring a 10% visible label on AI-generated media, while aimed at transparency, could hinder aesthetic or artistic uses of generative AI.

He suggested adaptive watermarking, aligned with ISO/IEC 23053 and W3C provenance standards, as a more flexible alternative.

The proposal to mandate user declarations for all synthetic content, Mali observed, may lead to compliance fatigue, and rules should distinguish between AI-assisted edits and fully AI-generated fabrications.

He also cautioned about cross-jurisdictional traceability, noting that deepfakes do not respect borders; India, he said, should align with Budapest Convention principles and pursue mutual legal assistance protocols for synthetic media offences.

If refined judiciously, he believes the rules could position India as a model jurisdiction for responsible AI governance.

This article originally appeared in Analytics India Magazine.
