India’s Plan to Curb Hate Speech Could Mean More Censorship

The Indian government proposes rules requiring encrypted message services like WhatsApp to decrypt data, threatening the security of users globally.
Two WhatsApp ambassadors work on setting up a truck in India. Proposed government rules could force the messaging service to decrypt encrypted text. Photograph: Dhiraj Singh/Bloomberg/Getty Images

New rules proposed by the Indian government to rein in tech giants and combat fake news could have a profoundly chilling effect on free speech and privacy online. The proposed changes involve Section 79 of the IT Act, a safe harbor protection for internet “intermediaries” that’s akin to Section 230 of the Communications Decency Act in the US. Current law protects intermediaries such as internet service providers and social media platforms from liability for their users’ actions unless they have been made aware of a particular unlawful post; intermediaries are also required to remove content only when directed by a court.

The proposed amendments attempt to curb the spread of misinformation on platforms like Facebook and Twitter by effectively forcing internet companies to censor a broad swath of user content. They would also require secure messaging services like WhatsApp to decrypt encrypted data for government use, which could affect the security of users around the globe, and would require internet companies to notify users of their privacy policies monthly.

Even before the rules go into effect, internet companies have begun self-censoring content in response to the proposed change. On Thursday, Netflix and eight other streaming services voluntarily agreed to ban unlawful content from their platforms. According to BuzzFeed News, Netflix’s decision to self-regulate was “an attempt to avoid official government censorship.” Netflix did not respond to a request for comment.

Under the new rules, platforms would be required to deploy automated tools to ensure that information or content deemed “unlawful” by government standards never appears online. The Indian government has yet to define what it considers unlawful, but critics warn that the requirement could create incentives for internet companies to flag, and potentially remove, more content than necessary in order to avoid liability. The definition would likely encompass everything prohibited under Indian law, including hate speech against certain protected groups, defamation, child abuse, and depictions of rape, among many other categories. Efforts to automatically flag content that could fall under any of these categories will likely sweep up a great deal of legal, and unobjectionable, material.

In a statement, India’s Internet Freedom Foundation described the proposal as “a tremendous expansion in the power of the government over ordinary citizens eerily reminiscent of China’s blocking and breaking of user encryption to surveil its citizens.” Mozilla policy adviser Amba Kak said much the same in a January 2 post. The proposal “calls into play numerous fundamental rights and freedoms guaranteed by the Indian constitution,” Kak wrote. “Whittling down intermediary liability protections and undermining end-to-end encryption are blunt and disproportionate tools that fail to strike the right balance.”

Though computers can often identify what—or who—is in an image, automated content moderation is still a long, long way away. Tumblr’s and Facebook’s NSFW content-flagging systems still erroneously flag ancient art and other random objects as pornography. Identifying whether a particular image or tweet is guilty of a more nuanced sin, like defamation or hate speech, is even more hellishly complicated. Humans can barely manage it without setting off a scandal-ridden news cycle; machine learning doesn’t stand a chance.

India’s proposed rules include provisions aimed at containing these inevitable cracks in the wall of censorship. If unlawful content somehow makes it past a platform’s filter, the company has 24 hours to remove it, and must keep a record of the removal for 180 days. The proposed amendment also requires companies to turn over information about the creators or senders of content at the behest of government agencies, which would force end-to-end encrypted apps like WhatsApp or Signal to build a backdoor, compromising the security of users around the globe in the name of Indian government surveillance.

According to reports by India’s Economic Times, government officials say the push to weaken encryption services is in response to recent criticism of secure messaging app WhatsApp, which is owned by Facebook. Misinformation ran rampant across the massively popular platform last year, exacerbating tensions between castes and fanning violence. Fake news shared through WhatsApp has been cited as a primary motivator for numerous instances of mob violence, lynchings, and other crimes. “If there are heinous crimes being committed of people being lynched, the government’s investigating agencies would like to know who the people behind this are,” an anonymous senior government official told the Economic Times. “Heinous crimes cannot be allowed to happen in the garb of (social media) platforms saying that they are encrypted.”

Government officials elsewhere have used similar arguments to justify encryption-busting tactics. Most recently, Australia’s Parliament passed sweeping legislation giving authorities the ability to demand companies create backdoors in secure messaging services, much to the horror of privacy and human rights activists.

India’s Ministry of Electronics and Information Technology met with representatives from Facebook, Twitter, WhatsApp, and other tech companies in December to discuss the proposed changes. In a statement, a Twitter spokesperson said the company looks forward to “continuing engagement” with the Indian government. “Our hope is that after this robust public consultation process, any changes to the Intermediary Guidelines in India strike a careful balance that protects important values such as freedom of expression,” the statement continued. The ministry, Facebook, and WhatsApp did not immediately respond to requests for comment. The period for public and industry comment on the proposed amendments ends on January 31, leaving a decision from the ministry as the most likely next step.

Updated, 10:10 am: This article has been updated to include a statement from a Twitter spokesperson.
