Clipped from: https://www.financialexpress.com/business/news/just-two-hours-new-it-rules-trim-deepfake-takedown-time/4138238/
The government on Tuesday notified the amended Information Technology (IT) Rules, 2026, rolling out a significantly tighter compliance regime to curb the spread of deepfakes, non-consensual imagery and other sensitive content on social media platforms such as X, Facebook, Instagram and Telegram.
The new rules will come into effect from February 20. The most consequential change is a sharp reduction in takedown timelines tied to user safety. Platforms are now required to remove non-consensual intimate imagery and deepfake content within two hours of receiving a complaint, compared with a 24-hour window under the earlier framework.
Separately, intermediaries must take down other unlawful content within three hours of receiving an order from a government authority or a court, down from the previous 36-hour limit.
Ministry of Electronics and Information Technology’s notification
The rules, notified by the Ministry of Electronics and Information Technology, reflect the government’s growing concern over the speed at which AI-generated impersonation content can spread and cause harm before remedial action is taken.
Officials have repeatedly said that delayed enforcement renders post-facto takedowns ineffective, particularly in cases involving sexual exploitation, fraud and political manipulation.
What else is required of platforms?
Alongside faster removals, the government has introduced a formal labelling regime for AI-generated content. Under the new rules, intermediaries must ensure that synthetically generated or altered audio, visual or video content that appears authentic is clearly labelled to distinguish it from real material.
Platforms are also required to deploy technical measures to verify the accuracy of user declarations relating to such content.
However, the final rules soften one of the most contested proposals from the draft issued in October last year. The government has dropped the requirement for large, fixed-size watermarks on AI-generated content. The draft amendment had proposed that visual labels should cover at least 10% of the display area, while audio content would carry an audible marker during the first 10% of its duration.
That proposal had drawn sharp pushback from technology companies and industry bodies, including the Internet and Mobile Association of India, which had said that rigid labelling prescriptions would be technically difficult to implement across formats and devices and could significantly disrupt user experience, particularly for audio and video.
In place of fixed watermarks, the notified rules now mandate that, where technically feasible, intermediaries embed permanent metadata or unique identifiers into AI-generated content. These identifiers are intended to function as a digital fingerprint, enabling authorities to trace the computer resource used to create or modify the information.
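To illustrate the kind of mechanism that provision describes, the sketch below is a hypothetical Python example, not anything prescribed by the rules: it embeds a unique identifier into a PNG file's metadata using Pillow and reads it back. Field names such as ai_content_id are assumptions, and in practice metadata of this sort is easily stripped on re-encoding, so a production provenance system would more likely rely on robust watermarking or C2PA-style manifests.

```python
# Hypothetical illustration only: the IT Rules do not prescribe this format.
# Embeds a unique identifier into a PNG's text metadata with Pillow, then reads
# it back. Plain metadata can be stripped by re-encoding, so a real provenance
# system would use robust watermarking or C2PA-style manifests instead.
import uuid

from PIL import Image, PngImagePlugin


def tag_ai_image(src_path: str, dst_path: str, tool_name: str) -> str:
    """Attach a provenance identifier to an AI-generated image and return it."""
    content_id = str(uuid.uuid4())               # unique identifier ("digital fingerprint")
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_content_id", content_id)   # hypothetical field names
    info.add_text("ai_generator", tool_name)
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=info)
    return content_id


def read_tag(path: str) -> dict:
    """Return any provenance fields found in the image's text metadata."""
    with Image.open(path) as img:
        text = getattr(img, "text", {}) or {}    # PNG text chunks, if present
        return {k: v for k, v in text.items() if k.startswith("ai_")}


if __name__ == "__main__":
    cid = tag_ai_image("generated.png", "generated_tagged.png", "example-model-v1")
    print("embedded:", cid)
    print("read back:", read_tag("generated_tagged.png"))
```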
The amendments also introduce a clearer preventive obligation for platforms offering AI tools. Such intermediaries are required to deploy technical safeguards to prevent users from generating or sharing specified categories of harmful content, including child sexual abuse material, content related to explosives, and deepfakes designed to deceive users about a person’s identity.
User-facing accountability mechanisms have also been tightened. The time limit for grievance officers to resolve general user complaints has been reduced to seven days from the earlier 15-day period.
Platforms are additionally required to communicate their rules, privacy policies and the consequences of non-compliance – such as account suspension, termination or reporting to law enforcement – at least once every three months, up from an annual disclosure requirement.
Taken together, the amendments signal a shift towards faster enforcement and greater transparency, while moderating some of the more prescriptive elements that had raised industry concerns during the consultation phase.