Every month, WhatsApp bans lakhs of Indian accounts that are reported for scams or for violating the platform's policies. In its latest India Monthly Report, the Meta-owned instant messaging platform revealed that it banned around 71 lakh Indian accounts between April 1 and April 30, 2024, to curb misuse and maintain platform integrity. The company has said it will continue to ban accounts that violate its rules.
WhatsApp banned a total of 7,182,000 accounts between April 1 and April 30. Amongst these, 1,302,000 accounts were proactively banned before any reports from users. This proactive stance is part of WhatsApp's broader strategy to prevent abuse before it occurs. The company uses advanced machine learning and data analytics to identify suspicious behaviour patterns indicative of abuse.
Notably, in April 2024, WhatsApp received 10,554 user reports on various topics, including account support, ban appeals, product support, and safety concerns. However, only six accounts were actioned based on these reports, reflecting the stringent criteria for account action.
The bans on Indian accounts align with WhatsApp's efforts to comply with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which mandate the publication of compliance reports detailing the actions taken in response to user grievances and law violations. The latest report, released in June 2024, highlights WhatsApp's rigorous action against harmful behaviour, leveraging both user complaints and its sophisticated in-house detection mechanisms.
WhatsApp bans user accounts to maintain a safe and secure environment for its users. Some of the major reasons for banning these accounts include:
Violation of Terms of Service: This includes accounts that engage in spam, scams, misinformation, and harmful content.
Legal Violations: Any activity from accounts that breaches local laws results in an immediate ban.
User Reports: WhatsApp takes action based on reports from users who encounter abusive or inappropriate behaviour.
According to WhatsApp, the platform uses a multi-faceted approach to detect and prevent abuse, tackling potential issues at different stages of an account's lifecycle.
Firstly, WhatsApp has set up a mechanism to detect and block suspicious registrations during account creation. This helps WhatsApp prevent bad actors from entering the platform in the first place.
WhatsApp also uses its algorithms to constantly scan message activity for patterns that could indicate harmful behaviour, such as spam messages, threats, or the spread of misinformation.
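WhatsApp has not published the details of this scanning logic, so the following is only a minimal illustrative sketch of the general idea: scoring an account's message activity against a few behavioural signals. Every signal, threshold, and weight here is an assumption made up for illustration, not WhatsApp's actual rules.

```python
from dataclasses import dataclass

# Hypothetical illustration only: WhatsApp does not disclose its detection
# logic, so these signals, thresholds, and weights are assumptions.

@dataclass
class AccountActivity:
    messages_sent_last_hour: int    # volume of outgoing messages
    unique_new_recipients: int      # contacts never messaged before
    identical_message_ratio: float  # share of messages with identical text
    reports_received: int           # times other users reported the account

def spam_risk_score(activity: AccountActivity) -> float:
    """Combine a few behavioural signals into a 0-1 risk score."""
    score = 0.0
    if activity.messages_sent_last_hour > 200:
        score += 0.35               # unusually high sending volume
    if activity.unique_new_recipients > 100:
        score += 0.25               # mass-messaging strangers
    if activity.identical_message_ratio > 0.8:
        score += 0.25               # copy-paste bulk content
    score += min(activity.reports_received, 5) * 0.03  # reports add weight
    return min(score, 1.0)

# Example: a bulk sender pushing identical messages to hundreds of new contacts
suspicious = AccountActivity(500, 300, 0.95, 2)
print(round(spam_risk_score(suspicious), 2))  # 0.91 -> flag for review
```

An account scoring above some internal threshold could then be queued for further review or an outright ban; the point of the sketch is only that automated scanning works on behavioural patterns rather than message content.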
WhatsApp notes that it takes user feedback seriously, and that feedback plays a crucial role in flagging accounts. When users report or block contacts, those signals feed into WhatsApp's detection system, triggering further investigation and potentially leading to account bans.
A dedicated team of analysts at WhatsApp also continually examines complex or unusual cases to improve the system's effectiveness. By refining the algorithms and identifying new patterns of abuse, WhatsApp aims to stay ahead of evolving threats.
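WhatsApp likewise does not disclose how user reports and analyst review are wired together. Purely as a hypothetical sketch, a report-driven pipeline might count reports per account and route heavily reported accounts either to automated review or, in complex cases, to human analysts; the thresholds and queue names below are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical sketch of a report-driven review pipeline. Thresholds,
# queues, and the analyst-escalation step are illustrative assumptions,
# not WhatsApp's published implementation.

REPORT_THRESHOLD = 3    # reports before automated review (assumed)
ANALYST_THRESHOLD = 10  # reports before human escalation (assumed)

report_counts = defaultdict(int)
auto_review_queue = []
analyst_queue = []

def handle_user_report(account_id: str) -> None:
    """Record a report or block and route the account to the right queue."""
    report_counts[account_id] += 1
    count = report_counts[account_id]
    if count == ANALYST_THRESHOLD:
        analyst_queue.append(account_id)      # complex, high-volume case
    elif count == REPORT_THRESHOLD:
        auto_review_queue.append(account_id)  # automated pattern check

# Example: several users report the same account
for _ in range(3):
    handle_user_report("acct-42")
print(auto_review_queue)  # ['acct-42']
```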