When regulation meets the algorithm

India Tightens Digital Control as Online Blocking Orders Surge to 24,300 Amid AI Deepfake Trend

TFIPOST.com

India’s digital regulation has intensified sharply. In 2025, government blocking orders reached 24,300, nearly double the 12,600 issued in 2024 and roughly four times the annual figure of around 6,000 in 2023. The data points to a rapid expansion of state intervention in online content.

Officials link this surge primarily to the rise of AI-generated material, with deepfakes emerging as a major concern. These tools now produce highly realistic fake videos, audio, and images, making online content monitoring more complex and far more urgent.

A large share of blocking orders targets major platforms: around 60 per cent go to X (formerly Twitter), Facebook and Instagram together account for 25 per cent, and YouTube receives about 5 per cent. The remaining actions apply to other platforms.

AI-driven misinformation accelerates regulatory response

Generative AI has significantly changed the speed and scale of misinformation: content now spreads faster and reaches wider audiences within minutes, forcing authorities to respond more quickly than before.

The government acts under Section 69A of the Information Technology Act, 2000, which allows blocking of online content on grounds such as national security, sovereignty, public order, and prevention of offences. A Blocking Committee reviews and approves these requests.

The operational structure has also shifted: the committee, which earlier met once a week, now convenes several times a week through virtual sessions. Officials say this change reflects the rising volume of urgent cases linked to AI content and sensitive posts.

Emergency powers increasingly relied upon

Authorities are also relying more frequently on the emergency provisions under Section 69A, which allow immediate blocking without prior committee approval. Such decisions must still be reviewed within 48 hours.

Officials argue that this approach enables faster action against viral misinformation, even as state agencies continue to send urgent requests requiring immediate intervention.

The highest spike in blocking orders occurred during Operation Sindoor in May 2025. Since then, the numbers have remained consistently high. Notably, more than half of all requests come from the Ministry of Home Affairs and the Ministry of External Affairs.

Deepfakes raise political and security concerns

The issue has also entered the political sphere. Congress MP Shashi Tharoor recently flagged deepfake videos using his identity, in which AI-generated voiceovers were placed over real footage to distort his statements.

He stated that the content appeared to originate from outside India. Subsequently, authorities took steps to block the material within the country. This incident highlights how quickly manipulated content can spread across platforms.

Transparency concerns and potential expansion of powers

Despite rising enforcement, transparency remains limited: in many cases, the government does not release full details of blocked URLs. This lack of disclosure extends even to Parliament and Right to Information (RTI) responses, raising questions over oversight.

At the same time, policy discussions are underway to expand blocking authority. Reports suggest that four additional ministries may be empowered under Section 69A: Home Affairs, External Affairs, Defence, and Information and Broadcasting. If implemented, this move would significantly widen state control over online content regulation.

Overall, India’s expanding digital crackdown reflects a broader global trend. Increasingly, governments are responding to AI-driven misinformation with faster and stricter regulation. As a result, tech platforms now face mounting pressure to adapt to a rapidly tightening digital governance framework.