Government should not regulate social media

The burden of proof should lie on the government – this ensures that content removal is not the norm, but the exception.


The Indian government appears to have made up its mind about the need to regulate content on social media. In a recent Supreme Court filing, it has submitted that new social media regulations will be notified by early next year. Broadly, the goal of these regulations will be to make social media platforms more liable for the content they host. Another goal will be to enforce traceability of content, ostensibly to enable accountability. According to the filing, the government is worried about “unimaginable disruption to the democratic polity” from unregulated social media content. If the proposed regulations go through, India will join a burgeoning list of jurisdictions that regulate social media, including recent entrants such as Australia, Singapore, and the European Union, and old hands such as China.

Most agree that obviously illegal content should be regulated: child pornography, direct and specific threats, violent extremist videos, and the like. The need to regulate anything that is not obviously illegal, however, is unclear and must be scrutinized. The government argues that unregulated social media promotes misinformation, hate speech, defamation, threats to public order, terrorist incitement, bullying, and anti-national activities. Even if this is true, it is unclear why content regulation is the answer when the content is not obviously illegal. Moreover, social media is simply a platform for expression; making the platform liable for content, even obviously illegal content, makes only as much sense as making a transporter, restaurateur, or cellphone network provider liable for the content of any discussion that uses their infrastructure.

There are also practical challenges to regulating content that is not obviously illegal. Consider misinformation, or fake news. Who decides what is fake and what isn’t? Even if some content is fake, who decides whether it was satire or intended to cause harm? Even if it was intended to cause harm, who decides when it is permitted (freedom of expression) and when it is prohibited? Besides, how exactly does one implicate social media for fake news? Even before the advent of social media, fake news was regularly peddled and relished (the Ganesh idol drinking milk, the Bangalore Nale Ba woman, Amala and Kamala; the list is endless). Social media, at most, increases the scale and speed at which misinformation spreads. Analogous challenges arise in regulating hate speech. Where should an elected official draw the line between permitted and prohibited speech?


As our recent experience with movies such as Padmaavat, Lipstick Under My Burkha, and Article 15 shows, we do not know how to define or apply such rules consistently. Even more questions surround the regulation of extremist and anti-national content. Even if one could unambiguously determine what such content was, how much should one regulate it? If such content is banned on one platform, it will simply move to other, perhaps lesser-known platforms. Fracturing extremist communication in this way will make it harder to track and counter, and harder to gather intelligence from, perhaps leaving the public less secure overall. Whatever regulation of social media content must be done should not be done by the government.

Collective experience across many industries (banking, healthcare, insurance, oil) has shown that government regulation stifles innovation and creates monopolies. The high cost of complying with regulations inhibits competition, since it discourages upstarts from entering a market; it also disproportionately harms smaller incumbents. For a country that desperately needs more startups, innovation, entrepreneurship, and investment, social media regulations will trigger exits and discourage new investment. This will be especially true if the government insists on traceability. A vast number of businesses today are built upon end-to-end encryption, and most of them would rather exit the Indian market (or never enter it) than relax the end-to-end guarantees their products provide.

Besides, weakening encryption contradicts the principle of data minimization endorsed in the government’s own data protection bill. Another big reason the government should not attempt to regulate is that it does not know how. Unlike traditional publishing, where regulation can be ex ante, social media regulation must necessarily be ex post because of the sheer scale involved. The government does not have the technological wherewithal to detect and remove objectionable social media content at the requisite scale or speed. Yet another reason for the government not to regulate is that regulation will trigger overreaction. Large social media platforms will be happy to comply to avoid liability (especially since the state would then be partially responsible for defining objectionable content). But overenthusiastic compliance can severely limit freedom of expression and suppress dissent or disfavored speech, which, in turn, may cause more “unimaginable disruption to the democratic polity” than the worst-case scenario the government painted in its Supreme Court filing.

Regulation of social media content is best left to the tech companies themselves, for several reasons. First, they have an obligation. It can be argued that they are monetizing a public resource – the data of citizens. So, like licensees of the broadcast spectrum, who must abide by certain public-interest obligations in exchange for the opportunity to monetize a public resource, social media companies have an obligation to the public to limit the spread of misinformation, extremism, hate speech, and the like. Second, these companies have a strong incentive – survival.

Customers will naturally gravitate away from a platform where a large amount of content is objectionable or untrustworthy. The threat of regulation provides a further incentive. Third, only they have the resources and the background to address the problem. Ex post detection and removal of objectionable content will require the development of sophisticated tools and technologies. Fourth, they have been at it already. YouTube, which employs 10,000 people globally to monitor and remove objectionable content, took down 8 million videos during a three-month period in 2018; 81 per cent of them were removed automatically, and three-quarters of those clips never received a single view. Facebook, which employs over 30,000 people for detection and removal, removed over 15 million pieces of violent content during a three-month period in 2018, over 99 per cent of it automatically. Government also has a role to play.

It should encourage social media companies to define, and periodically update, content standards and enforcement guidelines. Ideally, this would be done through an independent body with participation from different stakeholders, including civil society and law enforcement. Finalized standards and guidelines should be made public for transparency. This body should also publish compliance data periodically, both for transparency and to encourage compliance. In addition, the government should make social media platforms liable for obviously illegal content if it is not removed within a certain period of being reported. There should also be a transparent and rapid redressal mechanism for disagreements. The burden of proof should lie on the government – this ensures that content removal is not the norm, but the exception.

Finally, the government should largely focus on addressing the systemic problems in society – communalism, casteism, sexism, extremism, poor law and order, and so on. Online discussions simply mirror what is already happening in society. Strong enforcement of the rule of law will allow greater freedom of expression online and reduce the need for regulation. Freedom of expression on social media is integral to a healthy, thriving democracy. We will be stronger for enabling and cultivating it, not curtailing it.

(The writer is a professor of engineering at the University of Illinois at Urbana-Champaign. He often writes about issues at the intersection of technology, policy, and society.)
