The proposals go against limits laid out by the Supreme Court in the Shreya Singhal case and bypass safeguards against government censorship stipulated in Section 69A of the IT Act. If the government is serious about online safety, it may consider enacting a law addressing the harms caused by misinformation
The Ministry of Electronics & Information Technology (MeitY) has proposed a regulation that would require internet companies to remove content flagged as “fake” by the Indian government or face legal liability. The proposal would require companies like Facebook and YouTube to “make reasonable efforts” to remove content identified as “fake” or “false” by the government’s Press Information Bureau (PIB) or any Union government department. While the phrase “make reasonable efforts” may not require platforms to proactively hunt down all content declared “false”, under the current proposal they would have to remove content once the government notifies them of its “fake” nature. Failure to comply would result in the loss of crucial statutory immunity (or “safe harbour”), leaving the platform at risk of being sued for the content.
Imagine that a citizen’s social media post about the poor condition of a national highway goes viral. The PIB, or perhaps even the Ministry of Road Transport and Highways, declares the post “fake” and notifies internet platforms, resulting in the post’s removal. Granting government bodies the authority to remove “fake” content in this manner raises serious constitutional concerns: it restricts free speech beyond what the Constitution permits and bypasses the safeguards against government censorship set out in Section 69A of the IT Act.
Under Article 19(2) of the Constitution, the government can restrict speech for several reasons, including ensuring the security of India, maintaining public order, decency, or morality, and preventing defamation. In 2015, the Supreme Court in Shreya Singhal v Union of India expressly ruled that government orders directing the removal of content must be limited to the grounds outlined in Article 19(2). Crucially, the Constitution does not permit the government to restrict speech solely on the ground that it is “false”. There are certain situations where the government does punish falsehood (for example, fraud or perjury). But as the scholar Robert Post notes, within the realm of “public discourse”, “speakers and their audiences are regarded as presumptively autonomous, and the rule of caveat emptor reigns”. Simply put, when it comes to issues of public interest in a democracy, you always have the right to make up your own mind about what is “true” and “false” — for better or for worse. Indeed, leaders of the Opposition, members of civil society, journalists, and citizens are constitutionally and democratically obligated to question the government’s interpretation of “truth”. Thus, if the government wanted to pass a law restricting “false information that may directly lead to violence”, it could do so, as such a law would be justified on the ground of “public order”. But to remove content merely because the government decrees it “false” would restrict free expression without any constitutional justification, violating citizens’ right to receive all information on public issues, whether true or false.
MeitY’s proposal also circumvents existing safeguards on the government’s power to block online content. Under Section 69A of the IT Act, the government can only block content online for reasons consistent with Article 19(2) and must follow a specific procedure when doing so. Currently, two writ petitions are pending in the Karnataka and Delhi High Courts arguing that the government regularly flouts these procedures and that stronger protections are needed for users. For instance, both petitions contend that a user must be given a hearing before their content is removed by the government. However, if the current proposal is accepted, the limited substantive and procedural safeguards of Section 69A, and the outcomes of the writ petitions, would become irrelevant. The government would be able to remove content by unilaterally determining it to be “false”.
Some may argue that internet platforms only risk liability if the content they refuse to remove is unlawful, and platforms should not be hosting unlawful content in the first place. However, the current proposal incorrectly equates “falsehood” with unlawfulness. Even if the PIB identifies “false” content that is also unlawful (for example, content that threatens public order), the current proposal lacks any process to scrutinise the government’s determination. Such an approach is incompatible with the rule of law, which is founded on checking government power through meaningful safeguards.
If the government is serious about online safety, it may consider enacting a law specifically addressing the harms caused by misinformation in particular contexts (for example, health or election misinformation). However, it must demonstrate why content removal is a necessary and proportionate response to the alleged harms of misinformation (for example, why not focus on increasing media literacy, or hold those spreading misinformation accountable?). If it restricts free expression, it must do so on grounds specified in the Constitution and must establish safeguards to ensure that the use of government power can be scrutinised.
The writer is project officer, Centre for Communication Governance, National Law University Delhi