Published Mar 09, 2026 | 8:49 AM | Updated Mar 09, 2026 | 8:49 AM
Social media. (iStock)
Synopsis: Recently, the Karnataka government announced plans to restrict social media usage for children under 16, and Andhra Pradesh announced restrictions for children under 13. While these governments and the Union government cite tackling digital addiction as the end goal, no robust study establishes a causal link between social media use and digital addiction.
On 6 March, Karnataka’s Chief Minister Siddaramaiah announced that the state would restrict social media platforms for children under the age of 16 to prevent “adverse effects of increasing mobile usage on children”. This happened merely two weeks after discussions were reported as being in “preliminary” stages.
On the same day, Andhra Pradesh Chief Minister N Chandrababu Naidu said that the state, which was also considering similar restrictions, would enact them for children under 13 within 90 days.
The Union government, too, is considering age-based restrictions that could be “graded”, marking a shift from its stated position a year ago, after the 2026 Economic Survey of India highlighted the need to tackle “digital addiction”.
India is not alone; according to Tech Policy Press, around 40 countries have similar proposals in various stages. Perhaps many are seeking to emulate Australia’s efforts, which only went into effect in December 2025; as such, their effects remain unknown.
Further details about Karnataka’s and Andhra Pradesh’s respective approaches aren’t known at this time. Open questions about legality and constitutionality aside, it is concerning that both seem to have been announced without a broad-based public consultation on the trade-offs involved and on whether such an intervention is even desirable.
The phrasing of the respective announcements suggests that even if consultations, whether broad or selective, were to take place, their outcome may be considered a foregone conclusion, with both state governments already explicitly stating their intention to impose restrictions.
What makes sweeping restrictions or bans particularly fraught is that evidence on whether social media usage is outright harmful or beneficial is mixed at best.
Even academics studying Australia’s ban suggest that a cause-and-effect relationship between social media use and adolescent mental and physical health has not been established, contrary to the tone of some parts of public discourse on the topic.
Multiple studies and advisories, such as those by the American Psychological Association (2023) and the US Surgeon General (2023), cite both benefits and harms. Others, from the Royal Society (2023) and researchers at the Oxford Internet Institute (2023), did not establish a clear causal link between social media usage or screen time and harm.
Recent studies attempting to investigate longitudinal effects across large sample sets in the UK (2025) and Australia (2025) showed a range of effects, including positive outcomes, based on whether usage was light, moderate or heavy, and were also unable to establish a causal link to mental health difficulties.
Evidence from localised interventions, such as device restrictions in schools and classrooms, suggests that these might improve learning outcomes. Even here, there is debate on the extent of the benefits and trade-offs. These interventions are, however, very different in scope from the blanket restrictions being considered.
This is, perhaps, why analogies comparing social media usage to tobacco consumption are unhelpful: they strip the conversation of nuance, and they show why a good-faith public conversation, rather than selective consultations, is necessary.
That the evidence is mixed does not absolve social media platforms of responsibility for design choices that relentlessly seek to maximise engagement, or for the inadequate functioning of their monitoring, review and grievance redressal mechanisms.
Even as the question of evidence remains unanswered, there are several practical implementation challenges with implications for a range of actors across society. Determining that a user is under a certain age limit requires identifying everyone on whichever platforms are classified as “social media” for restrictions, and could even extend to the rest of the internet through an expansion of scope or attempts to limit circumvention.
This determination requires “age assurance,” which can rely on some combination of identification (ID checks, self-declarations), estimation (based on photos/selfies), or inference (usage and/or language patterns). These methods are either relatively easy to circumvent or not reliable enough.
Ironically, these measures require greater collection of information and profiling of users. Because of such risks, in early March, around 400 computer scientists published an open letter calling for a moratorium on the implementation of age assurance measures until there is clear “scientific evidence” that the benefits of such restrictions outweigh the risks.
Even the classification of “social media” is non-trivial, as the term can be interpreted in different ways. India defines social media intermediaries so broadly that the term could include any service that allows communication between users. Restricted from a subset of platforms, users are likely to seek alternatives, which may be even less responsibly moderated than the popular social media platforms, exposing them to greater risk. Or they could use VPNs or Tor to bypass the restrictions altogether.
Will we attempt to restrict those, too, continually expanding the scope of restrictions, the kinds of services covered, and the reach of age-gating? This is a question the UK is currently grappling with as part of a broader consultation.
We are likely to see an escalatory cycle between evasion and enforcement that may not only result in porous restrictions, but will also burden all users, with grave implications for privacy and expression as restriction and identification requirements grow.
Identification schemes may also facilitate mass surveillance, a concern raised by civil society voices in many jurisdictions.
In India, we could see Aadhaar-based authentication sneaking into everyone’s internet usage. Such developments would be even more concerning in places with authoritarian-leaning governments and weak institutions, where dissenting voices tend to be surveilled and targeted.
In the Indian context, we have very few replicable longitudinal studies that disentangle correlation from causation in the harms attributed to social media, if any. These regulatory conversations are therefore taking place without adequate, robust data to support them, or an evaluation of the trade-offs involved.
This does not imply that there are no real issues: cyberbullying, trolling, exposure to inappropriate content, contact by malicious actors, and so on. If restrictions are implemented as announced, the effects could be mixed and unpredictable, with some children feeling relief and others feeling lost, losing important resources and connections.
This is why we need a comprehensive typology of harms and benefits, an assessment of which harms are emergent properties of real-time communication and which are scaled versions of existing social problems, and an evaluation of the nature and strength of institutional support mechanisms across state and private actors.
Different risks are likely to require context-specific responses: changes to platform design assumptions, access to safe spaces and counselling for minors and parents, support for caregivers, equipping educational institutions, improving basic state capacity and response, and so on.
Additionally, many operating assumptions regarding implementation may be upended by the prevalence of shared device usage in India. Given that social media platforms are one of many possible arenas of risk (albeit a sizeable one), the narrow focus on them to the exclusion of others (such as emerging use cases involving intense, personal conversations with generative-AI-powered chatbots) suggests the absence of a long-term, cohesive approach to improving outcomes for society.
Ultimately, an access/technological-layer restriction, such as a social media ban, as a response to more deep-seated issues, might reduce their visibility and shift the locus of the problems instead of meaningfully addressing them.
Instead of population-scale experiments, there is a need to invest resources into understanding the nature of problems more systematically and have a broad-based public conversation with a range of stakeholders.
This is also where we must hold governments more accountable. The expanding surface of risk because of technology diffusion across society is not a new problem, and will continue to remain a challenge.
Active conversations have been ongoing for nearly a decade. Instead of a graded response to a complex set of interconnected issues that advances our understanding of them, coupled with the longer-term, hard work of improving current systems and building resilient institutions of care, we are getting blunt responses embodying the politician’s fallacy: doing something because something must be done, irrespective of whether it is likely to be effective.
(Views are personal.)