Lawyer and public policy researcher Smriti Parsheera discusses questions on accountability, free speech and what privacy means on social media.
Smriti Parsheera during the Explained.Live session of The Indian Express
At a time when WhatsApp and Twitter are in a tussle with the government over the new IT Rules, lawyer and public policy researcher Smriti Parsheera, speaking before a nationwide audience, discusses questions on accountability, free speech and what privacy means on social media. Edited excerpts:
On why government is trying to regulate the space online
The world over, including in India, there is this concept of intermediaries being exempt from liability, the safe harbour provided to them, as long as they are not actively engaged in the transmission that is taking place on their network. What the Indian government is now trying to do with these rules is change the balance of the established legal position on that. And while it is hard to fault them on why this is being done, I think there is much to fault in the manner in which it is being done. So that's really the background of why states want to intervene, where this is coming from, and on the other part of it, there is obviously a geopolitical angle to all of this.
Whether there is global consensus on this
India is not alone in asserting digital sovereignty. This is happening across Europe. Germany already has a law which does a lot of what India is seeking to do through these intermediary rules. Europe is in the process of debating a new Digital Services Act and digital markets… But we are far from a settled position on this. I think everywhere, this will be litigated because there are key issues of freedom of speech and expression, so it is a few years before we reach a kind of settled global position on any of this.
On putting liability on social media companies, which are not creating content
As per the safe harbour principle, which we have also adopted in the Information Technology Act (Section 79 deals with that), the point is that they are not. Subject to complying with certain conditions, they are not liable for third-party content or conduct on the services they provide, as long as they are not editing this content, they are not aiding or abetting an unlawful activity happening through their platform, and when they receive knowledge, from the government or through a court order, they do something to take down that content, or they restrict access to it.
So there are circumstances in which that safe harbour will be broken. As long as you meet the rules and requirements in this area, the primary law says they would be entitled to an exemption from liability. But it says that in addition to not doing all of these things, they need to exercise due diligence, and they need to abide by the guidelines framed by the government. And these new guidelines are those which the government expects them to abide by in order to continue to enjoy the benefits of the intermediary exemption.
So there are certain obligations, some of which already existed in the 2011 version and have been tweaked and strengthened. The broad starting obligation is that the intermediary must convey to everyone who is a user: these are the do's and don'ts on my platform, so you can't commit intellectual property breaches… Some of that is also problematic. To take an example, they say you can't put out content which is patently false and put out with the intent to harm someone. It is typical to find this in terms and conditions, but if the intermediary actually starts enforcing this and applying its mind in every scenario, then of course it becomes problematic: what do you want the intermediary to do, when will something be offensive, how is something determined to be patently false, and how will they know the intent of the person behind putting out that patently false message?
So for all intermediaries, there is a requirement of a grievance redressal mechanism to be created… and a way for users to access that.
On the necessity of a grievance redressal mechanism
The whole reason we had intermediary guidelines, and the idea of safe harbour came about in Indian law, is related to a case from 2004 where Baazee.com, a subsidiary of eBay, had its CEO prosecuted directly in his individual capacity for an offence which involved an explicit sexual MMS clip being sold on that platform. The grievance against him was that the company didn't exercise due diligence, didn't have the right filtering mechanism… and the prosecution felt that the CEO as well as the general manager of the company should be individually liable for it. This was, of course, subsequently turned down by later courts, but the fact that this happened, and that an employee of a company who is really not in a position to assess all content that goes up, verify it and take it down, was prosecuted… So there is a requirement for having systems in place, but problems come with individual liability for employees.
On the risk in a private company like Twitter filtering out information
This is the classic problem that in a lot of legal texts is called the “chilling effect”. What is the chilling effect on speech when you have certain regulations around what speech is permitted and not permitted? The tendency of people making that speech is to avoid that zone of ambiguity altogether, to self-censor and not say things which could potentially be taken down or potentially lead you into any kind of trouble.
Once these systems are in place and there is sanction to do it for xyz things, there is this problem of mission creep. The notion of mission creep is that when a technology is set up for a certain purpose, you often end up saying, why not use it for this other purpose also.
On where the user fits in all this
As a user, you have very little agency in a lot of these transactions taking place between these intermediaries. So there are two ways the user needs to, and could, have a voice. One is about consultation and debate around any of these rules before they come into place.
The second part is in our capacity as individuals who are users of these services vis-a-vis the companies. The bigger problem with privacy both vis-a-vis the company as well as the government is that a lot of the time the harms from privacy violations are invisible. Today I don’t realise what is the problem if this data is shared or if the data goes out somewhere. The only time someone thinks about it is when there are data breach incidents. But for the rest of it, a lot of the harms from privacy are invisible because I don’t realise the manner in which my thought process is being re-shaped, where the secondary uses of data are happening and what could happen in the future based on my data — because not all decisions about me are based on my data, they are based on data of people who are profiled in a similar manner as me…
People come with this mindset that I have nothing to hide, so why should I care about this? But the point is that everyone has a password on their emails, everyone has certain boundaries around information they regard as private. And if you think about the level of intrusion which is possible, if there is a system from which all traffic between all users in the country is flowing, which the government can access through a centralised monitoring system, you would worry a little more about it.
On how users can understand if their privacy is being protected
I wouldn’t say you have no redressal, but we don’t have effective redressal. The provision that does exist is Section 43A of the Information Technology Act, which basically says that if there is a corporate which takes your sensitive personal information, fails to take reasonable security measures to protect that information, and there is harm caused to you, it is liable to pay you compensation for that. But in practice, it is not something most users know about, nor is it implemented very often.
Of course, this is all supposed to be redressed through the Personal Data Protection Bill, which we have been debating for a while.
On the data protection legislation
One of the important things that came out of the right to privacy judgment was that the right to privacy has two aspects. It gives you protection against the state: the state is not supposed to breach your privacy unless it does so in a manner that is fair, just and reasonable. So it is not absolute protection, but there is a framework around that. But you need to go to a court, you need to claim a constitutional breach of the right, which is, again, in theory available to everyone but in practice not really accessible for everyone. There is also an obligation on the state to make sure that third parties are not able to violate my privacy, to put in place a legal framework which protects me. That's really where this Data Protection Bill is emanating from.
There is one set of provisions which apply to all data fiduciaries. The Justice Srikrishna Committee used this term ‘data fiduciary’, where they say we want the person deciding what to do with the data to be called a fiduciary. So they cast a whole set of obligations on the fiduciaries, which start with basic requirements like notice to the person on what the data is being used for, and taking only as much data as is necessary for your specific purpose. Then there is a whole set of rights being given to individuals: I have the ability to access my data and find out what each entity has about me; if there is a problem in it, I have a right to ask for its correction; I have a right to be forgotten, where there is a balancing to be done between my privacy and the public interest in that data. But there is a clause even within this law which talks about the significant data fiduciary, and it talks about social media intermediaries being included within this, with an enhanced set of obligations. So, before you start processing certain kinds of data, you will need to have frameworks that talk about what kind of processing you will do and how it will be done, and you will need a third-party independent audit of all of these processes. One doesn't know what those limits would be, but since these two policy processes are happening in parallel, it is possible that they might be the same as the intermediary rules.
On whether weakening of encryption, as demanded, can result in government using it to track activists and political opponents
This whole reciprocity of how things work on the Internet is really important for countries to realise. Any obligation that you want to impose on foreign entities will also be expected, at some point, to be replicated on your entities in other jurisdictions. India might be thinking that it has more foreign companies operating here than Indian companies operating abroad, but it is a very valid point that you should be willing to accept similar obligations being imposed on your companies. As for misuse and tracking, that is a legitimate concern and that can happen.
On what is wrong in government trying to control social media
Intermediaries should not be the judges of speech, they should not become the gatekeepers of speech, and even where they are, they need to be much more accountable. But I think the need to regulate that and to have some controls around it does not have to be conflated with the question of intermediary liability and the question of safe harbour. You can have legal requirements: for example, the data protection framework imposes obligations on significant intermediaries, but it is not linking a breach of that to now holding you liable for the fake news that took place, or for the underlying crime that was committed…