Social media firms must remove ‘unlawful, harmful’ content within three hours, says Centre

New Delhi, Feb 11: Social media companies must now remove or disable access to certain unlawful or harmful content within three hours of receiving a valid government direction or a formal grievance, or of becoming aware of a clear violation, under amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, notified on Tuesday.

The new deadline, a sharp reduction from the previous 36-hour window under Rule 3(1)(d), was not part of the draft amendments released by the Ministry of Electronics and Information Technology (MeitY) in October 2025 and has been introduced only in the final notification. The amendments take effect on February 20, 2026.

The tighter timeline, requiring near-instant action from platforms such as Instagram, Facebook and YouTube, is expected to face pushback from the industry. Legal experts warned that the shift leaves little room for careful review, especially in cases involving subjective violations such as copyright disputes or fair-use claims, and could force platforms to remove reported content first and assess it later, increasing the risk of over-takedown.

Google and X did not respond to HT’s queries by the time of publication. Meta Platforms said it is reviewing the amendments internally.

A senior MeitY official, requesting anonymity, defended the compressed deadline. “Experience has shown us that intermediaries are capable of actually acting fairly fast. There have been cases when they have been able to act within minutes. So clearly they have the technical capacity to act fast,” the official said.

The government has also reworked how platforms identify AI-generated or “synthetically generated” content, dropping rigid technical requirements proposed in October’s draft.

The draft had mandated visible watermarks covering at least 10% of the screen area, or audio tags during the first 10% of a clip’s duration, fixed-size requirements that have now been removed.

Instead of prescribing exact dimensions, the final rules require platforms to use “reasonable and proportionate” technical measures to ensure AI content is “clearly and prominently displayed with an appropriate label or notice, indicating that the content is synthetically generated.”

“If I see a video, I should know that something is AI generated,” another MeitY official said.

The official warned that intermediaries might lose safe harbour protection under Section 79 of the IT Act, 2000, if they fail to follow due diligence obligations. Failures that could jeopardise this protection include ignoring lawful takedown orders, missing mandated deadlines such as the three-hour window, or failing to label or act against unlawful synthetic content.

The definition of synthetically generated information has been tightened to exclude routine editing activities. Colour correction, cropping, formatting, noise reduction and basic processing are now carved out and will not be treated as AI-generated media — a concern flagged by the Internet and Mobile Association of India (IAMAI), whose members include Google, Meta, Snapchat and WhatsApp, in its October submissions to MeitY.

Dhruv Garg, partner at the Indian Governance & Policy Project, a policy and business advisory firm, said: “The narrowing of the definition of ‘synthetically generated information’ to exclude routine or good-faith editing, enhancement, accessibility improvements, and document preparation significantly reduces the risk of overreach and unintended impact on ordinary digital activity, creative expression, and assistive technologies.”

Asked whether platforms currently have the technical ability to reliably detect AI-generated content, one of the officials quoted above said companies already deploy sophisticated systems to manage complex online harms and have the resources to build similar capabilities for synthetic media.

MeitY, separately, remains locked in a dispute with social media platform X, owned by Elon Musk, over concerns that its AI chatbot Grok generated sexually explicit and abusive images targeting women users. Governments in Europe and Asia have criticised the tool’s safeguards and opened inquiries, while X has partly restricted the feature to paying users amid widespread backlash. The government is weighing legal action over the continued circulation of objectionable content on the platform.
