Regulating the Revolution: Three paths to AI regulation India can learn from

AI has left policymakers racing to formulate policies around a rapidly evolving technology that very few truly understand.

By Sidharth Sreekumar

Published Mar 18, 2024 | 11:22 AM


Last week, India became the latest country to try to tame the AI revolution unleashed on the world. The Ministry of Electronics and Information Technology (MeitY) has declared that AI models or programs that are under-tested or unreliable may be made available on the Indian Internet only with government approval.

This has been met with criticism from AI experts, who say this step goes too far and will only slow down AI innovation in the country and leave it lagging behind its global peers.

While this might be true, it is also indisputable that this new technology has caught many governments unawares, leaving policymakers in the unenviable position of racing to formulate policies around a technology that is still rapidly evolving and that very few people in the world truly understand. So, with this in mind, let's focus today on the different ways AI is being regulated around the world.


EU and the risk-based approach

The European Union (EU) has been at the forefront of AI regulation, having begun laying the groundwork for the AI Act in 2018, before ChatGPT and its ilk had woken other governments up to the risks. The EU Artificial Intelligence Act, which the European Parliament approved just this week in a plenary vote, categorises AI systems into four risk levels: Low, Limited, High, and Unacceptable, with increasing regulatory requirements at each level.

Low risk: AI systems deemed low risk, such as spam filters or AI-enabled video games, can be freely built and used within the EU.

Limited risk: Applications that call for transparency about AI usage, such as chatbots and content creation, must be clearly labelled and identifiable as AI-generated.

High risk: Use of AI in any area that could critically impact personal or public life, such as transport, exam scoring, law enforcement, justice, or medicine, falls under this category. Such systems must undergo strict assessment and approval processes before any release to the public.

Unacceptable risk: Any AI system considered a clear threat to people's safety, livelihoods, or rights falls into this category and is banned outright. This includes social scoring models and systems that could encourage or aid dangerous behaviour.

This risk-based approach focuses on the technology's end impact and aims to ensure public well-being. It is less concerned with how models are built and trained, since that is assumed to keep evolving over time. By regulating at the application level instead, the hope is that the rules will be robust enough to keep pace with rapid technological change. For this reason, the EU AI Act is considered a touchstone for how other nations could approach AI regulation, with countries like Brazil looking to follow in the EU's footsteps.
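To make the tiered logic concrete, here is a minimal, purely illustrative Python sketch of the Act's application-level idea: a use case maps to one of the four tiers, and the tier, not the underlying model, determines the obligations. The tier names follow the Act, but the example mappings and obligation descriptions are simplified assumptions for illustration, not legal text.

```python
from enum import IntEnum


class RiskTier(IntEnum):
    """The four risk tiers named in the EU AI Act, ordered by severity."""
    LOW = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


# Hypothetical mapping of example use cases to tiers, paraphrased from
# the categories described above -- a simplification, not the Act's text.
EXAMPLE_CLASSIFICATIONS = {
    "spam filter": RiskTier.LOW,
    "ai-enabled video game": RiskTier.LOW,
    "customer-service chatbot": RiskTier.LIMITED,
    "exam scoring system": RiskTier.HIGH,
    "medical diagnosis aid": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

# Illustrative obligations per tier (assumptions, not the Act's wording).
OBLIGATIONS = {
    RiskTier.LOW: "No specific obligations; free to build and deploy.",
    RiskTier.LIMITED: "Transparency: output must be labelled as AI-generated.",
    RiskTier.HIGH: "Strict conformity assessment and approval before release.",
    RiskTier.UNACCEPTABLE: "Prohibited outright within the EU.",
}


def obligations_for(use_case: str) -> str:
    """Look up the (illustrative) obligations for a use case's tier."""
    tier = EXAMPLE_CLASSIFICATIONS.get(use_case.lower())
    if tier is None:
        return "Unclassified: would need case-by-case legal assessment."
    return f"{tier.name}: {OBLIGATIONS[tier]}"


if __name__ == "__main__":
    for case in ("spam filter", "exam scoring system", "social scoring system"):
        print(f"{case} -> {obligations_for(case)}")
```

The point of the sketch is the design choice it mirrors: the regulation attaches obligations to the use case's tier rather than to any property of the model itself, which is why the same underlying model could be freely deployed in one application and banned in another.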

The state-focused approach of China

China’s approach, meanwhile, delicately balances state control with public good. While it hasn’t adopted a sweeping regulatory framework like the EU, it has been making strides in addressing the impact of AI through focused policies for a while now.

In 2022, it rolled out the Internet Information Service Algorithmic Recommendation Management Provisions, which aim to bring transparency to how users are served 'personalised' recommendations by social media services such as Douyin, TikTok's Chinese sibling. While the rules foreground user rights, they also aim to control news dissemination.

In 2023, it followed with the Interim Measures for the Management of Generative Artificial Intelligence Services, which establish guardrails for any firm looking to develop generative AI models in the country. While the framework addresses familiar issues such as intellectual property rights, transparency, and discriminatory bias, it also stipulates that model creators ensure their AI adheres to core socialist values and supports the state.

Given that the country also has a long history of using AI for surveillance and social scoring (systems that would likely be classed as high or unacceptable risk in the EU), its regulatory approach will likely be one that prioritises the state's ambitions, political and economic, above all else.

The ‘laissez-faire’ approach of the US

The country at the epicentre of this new AI wave seems to be taking a more hands-off approach to AI regulation. There are no comprehensive national laws around AI, and they are unlikely to be formed, given the fractious nature of the country's politics. The US stock market is also riding high on the AI wave, and it is unlikely any policymaker would want to rock the boat right now with additional regulatory overhead.

The closest thing the country has to AI regulation is the executive order issued by President Biden in October 2023, titled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order directs government agencies to support responsible AI development in their respective industries and domains, without laying out specific action plans or timelines.

Overall, AI regulatory frameworks are still in their infancy around the world. Still, the approaches taken by the EU, China and the US hint at three distinctly different ways of thinking about who or what we safeguard from AI. The EU prioritises risks to the individual, China to the state, and the US wants to safeguard innovation and businesses. The merits and demerits of each approach are still unclear, but rest assured that many countries, including India, are closely watching to see which path to adopt.

[The writer (he/him) advocates ethical technological advancement, and is interested in exploring the confluence of technology and societal impact. He is currently a Senior Product Manager at the Economist Intelligence Unit.]