FILE PHOTO: Durov said that platform moderators will be using AI to identify and remove “problematic content” from the platform’s search feature. | Photo Credit: Reuters

How has Telegram changed its stance on content moderation?

On September 24, Telegram CEO Pavel Durov announced definitive changes to Telegram’s privacy policy, saying the platform will now provide authorities with user data, including phone numbers and IP addresses, in response to valid legal requests.


The story so far:

In the first week of September, Telegram quietly removed language from its FAQ page which stated that private chats were protected and that it “do not process any requests related to them.” The answer to the section titled “There’s illegal content on Telegram. How do I take it down?” has since been altered to include directions on how to report illegal content and messages.

It also turned off a “People Nearby” feature that helped users find and message other users in their vicinity, replacing it with a “Businesses Nearby” feature that allows “legitimate, verified businesses” to display products and accept payments.

On September 24, Telegram CEO Pavel Durov announced definitive changes to Telegram’s privacy policy, saying the platform will now provide authorities with user data, including phone numbers and IP addresses, in response to valid legal requests. The move, Durov said, is intended to “deter criminals from abusing” the platform’s search function, which he added was being used to “sell illegal goods.”

In the past, Telegram’s policy covered only the sharing of information on terror suspects; the updated policy covers criminal activity in general. The company will disclose whether it has provided user information to authorities in its quarterly transparency reports.

Additionally, Durov said that platform moderators will use AI to identify and remove “problematic content” from the platform’s search feature.

But this is possibly just the beginning of a series of changes that the app will make. 

How do other end-to-end messaging apps moderate content?

In 2021, after the January 6 riots at the U.S. Capitol, employees of the Signal app internally raised concerns that the company wasn’t doing all it could to stave off abuse, as reports showed a surge of new users on both Telegram and Signal.

Signal is owned by a non-profit and doesn’t sell ads or user data, or even collect demographic or personal details about users, other than phone numbers. Given that all groups and direct messages on the platform are encrypted, the company has taken an approach to content moderation similar to Telegram’s, saying it does not want to find out how the app is being used, and that knowing would be the antithesis of its encrypted nature.

But Telegram offers features beyond Signal’s that make the app a go-to place for antisocial elements. For example, Telegram enables mass communication, allowing groups of up to 200,000 members, making the platform a hotbed for those exchanging child sexual abuse material, terror-related content, and misinformation.

By comparison, rival end-to-end encrypted messaging apps Signal and Meta-owned WhatsApp both allow up to 1,000 people in a group.

Signal also doesn’t advertise these groups within the app. Telegram, on the other hand, has a search feature through which users can simply look up a specific hashtag or term to find a publicly visible forum, which makes it child’s play to find groups posting hateful content.

The deluge of users to these apps and others like Parler put them under fire from activists, but Telegram’s features put it at greater risk of amplifying such groups. Despite being marketed as a messaging app, Telegram behaves more like a social media platform because of these features.

So it isn’t that Signal follows any specific content moderation policies; rather, Telegram’s features raise a greater number of red flags.

Meanwhile, WhatsApp’s claim of end-to-end encryption is widely treated with scepticism. The messaging app is known to hand over metadata to law enforcement agencies, and Meta has a long history of hunger for user data. Regardless of a user’s privacy settings, WhatsApp collects their metadata.

The app has at least 1,000 content moderators who are able to view some messages if a recipient reports them. WhatsApp discloses in its terms of service that once an account is reported, it “receives the most recent messages” of the reported group or user, as well as “information on your recent interactions with the reported user.”

While the clause doesn’t mention it, this could include the user’s IP address, phone number, profile photos, and linked Facebook and Instagram accounts.

What are the obligations imposed on intermediaries operating in India? 

Intermediaries such as social media or messaging platforms operating in India are expected to comply with national regulations and respond promptly to complaints regarding unlawful content.

However, there is a provision that may give tech or social media platform executives a safe harbour of sorts, in the face of legal action.

Section 79 in The Information Technology Act, 2000, states that “no person providing any service as a network service provider shall be liable under this Act, rules or regulations made thereunder for any third party information or data made available by him if he proves that the offence or contravention was committed without his knowledge or that he had exercised all due diligence to prevent the commission of such offence or contravention.”

In simpler language, a person who provides a social media or messaging platform shall not be liable under the Act if they prove that they were not aware of offending third-party content being made available on their platform, or that they exercised all due diligence to prevent such offences from taking place.

This means that an individual such as Telegram CEO Pavel Durov could potentially defend himself in India by saying that he is not responsible for unlawful content posted by others on the network that he provides as a service. However, Durov would be obliged to quickly remove such content once it comes to his notice, and Telegram must have preventive measures in place.

The IT Act also gives the government power to notify the intermediary that unlawful content is live. Intermediaries must quickly respond by disabling access to the content.

This can be useful when one is, for example, trying to get explicit deepfakes or highly personal leaked media removed from digital platforms, as tech companies are mandated to act quickly. 

At the same time, there are concerns about censorship and about governments unduly pressuring tech companies to remove content critical of them.

In compliance with India’s IT regulations, Telegram has a designated grievance officer to deal with “public content which is not in accordance with the applicable IT regulations,” per its website.

Other intermediaries such as Meta and Google also have grievance officers whom Indian users can contact.

Published - October 02, 2024 09:00 am IST