
Discipline or digital exclusion?

A degree of regulation is vital to ensure social media platforms are not used to spew hate or disinformation, says ASHIT KUMAR SRIVASTAVA.


The recent induction of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereafter IT Rules, 2021) into the cyber law ecosystem of India has caused quite a stir, especially since the liability of intermediaries has been considerably augmented in comparison with the Intermediary Guidelines, 2011.

Under the new law, it is the responsibility of the Social Media Intermediary (hereafter SMI) and even the Significant Social Media Intermediary (hereafter SSMI) to notify users not to use the platform for hosting, publishing, transmitting, updating or sharing any information enumerated under rule 3; the user is to be notified through the privacy policy, the rules and regulations, and the user agreement. Interestingly, if the SMI or SSMI fails to comply with the rules, liability for content posted on the platform falls on the SMI or SSMI itself, as if the platform had posted the content, under the relevant laws (refer to rule 7).

Now this fixing of liability on the platforms has been seen as a threat to free speech.


Several petitions filed in High Courts across India allege that this creates a chilling effect on free speech. The logic behind the argument, when unpacked, is that if the MNCs running social media platforms face a threat of liability, they will turn trigger-happy and expunge any speech even remotely close to the content prohibited under the rules, and this in turn would lead to digital exclusion. Rule 3(1)(c) quite clearly says that if a user continues to not comply with the rules and regulations, the intermediary has the right to terminate that user's access.

In the present digital age, with digital platforms becoming a means of expression, association and even of carrying on a profession, exclusion from them is a severe threat. In a way, digital exclusion has the capacity to lead to social exclusion.

Seen from this perspective, the core question that comes to the fore is what content is prohibited under rule 3, especially because several petitions have challenged the rule as nebulous: its grounds are vague enough that provisions made for regulation could turn into prohibition.

As already discussed, MNCs would be trigger-happy in expunging any content even remotely close to the content mentioned under rule 3.

In all probability, they would also modify their AI-driven moderation to expunge such content. This automated exclusion could be the death knell of free speech and has to be carefully reconciled with the law.

Many petitions have raised the concern that too much pressure on social media platforms to differentiate between good and bad content may lead to the expunging of innocent content that is only remotely controversial.

A Human Rights Council report, the ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’, highlights in paragraph 17 that states have legitimate concerns pertaining to privacy and national security in regulating online content.

But since demands for quick removal risk creating a new form of prior restraint, the report suggests, the complex question of freedom should be adjudicated by public institutions and not by private bodies.

Though these are helpful insights, they cannot be the guiding principle. The sheer size of social media platforms institutionally demands self-censorship by the platforms, for it is next to impossible to keep track of billions of accounts and monitor the content posted on them.

Additionally, digital exclusion cannot be utilized as a threat, or even as a bargaining tool, to defeat the purpose of any law. Presently, all platforms have community guidelines governing the content posted on them, but these are mostly a reflection of the ideology of their country of origin.

For example, if a social media platform is of American origin, its community guidelines reflect the free speech culture of America.

Any social media platform delivering services in a country has to respect the sovereignty and integrity of that nation. Interestingly, such a law already exists in Germany: the Network Enforcement Act, referred to as NetzDG, which was enacted with the objective of removing hate speech and content related to offences endangering the state.

Thus, regulatory laws need to be viewed objectively, in light of the objectives they serve.

It is a known fact that many social media platforms today are being used to spew fake news, misinformation and disinformation harmful to public order. It is therefore time that a measure of regulation is embedded within their algorithms.

The writer is Assistant Professor of Law, National Law University, Jabalpur.
