Content moderation


Photo: IANS

Technology companies, under government and public pressure, have in the past few years become more active than ever in trying to stop terrorists, supremacists, conspiracy theorists, and other hateful individuals, organisations, and movements from exploiting their platforms.

But the pressure to do more, given the proliferation of malicious, agenda-driven users of social media and online platforms, is growing. This is especially true of developing nations, which have borne the brunt of transnational and domestic (mis)information flows used as a potent tool of fifth-generation warfare (5GW): from promoting enmity between communities and encouraging violence to blatant defamation.

Even where these countries have robust democratic and rule-of-law traditions, their justice-delivery systems are decrepit and take an inordinately long time to deliver verdicts. Individuals and institutions that have been attacked online and are following due process to bring the perpetrators to book thus become victims twice over: the process of seeking justice, time-consuming and expensive as it is, becomes a punishment in itself.


Against this backdrop, a recent article by Daniel Byman, Foreign Policy Editor of Lawfare, asks: If tech companies decide to act more aggressively, what can they do?

Much of the debate on the issue centres on whether to remove offensive content or leave it up, ignoring the many options in between. Byman, however, presents a range of options for technology platforms, discussing how they work in practice, their advantages, and their limits and risks.

According to him, the actions companies can take fall into three categories. First, they can remove content: deleting dangerous or offensive posts and de-platforming users or even entire communities.

Second, they can try to reshape distribution: reducing the visibility of offensive posts, downranking (or at least not promoting) certain types of content, and using warning labels, thereby limiting engagement with certain material while allowing it to stay on their platforms. Third, tech companies can try to reshape the dialogue on their platforms, empowering moderators and users in ways that make offensive content less likely to spread.

Tensions and new problems will emerge from these efforts. As the author himself concedes, the question of censoring speech will remain even if certain content stays up but is not amplified or is otherwise limited. Companies also have incentives to remove too much content (and, in rarer cases, too little) to avoid criticism. It would therefore help if process transparency were expanded so that users, legislators, and policymakers could better judge the effectiveness of company efforts.

“Some toxic users will go elsewhere, spreading their hate on more obscure platforms. Despite these limits and trade-offs, companies can tailor their approaches to offer users a more vibrant and less toxic user experience,” adds Byman. But the core question remains: Who will moderate the moderators? National governments need to ensure that tech companies respect sovereign jurisdictions. But, as content moderation cannot be left to heavy-handed government censorship and its use of discretionary powers, serious efforts must be made to establish independent oversight.
