As India mulls artificial intelligence regulation, it must look for culturally aware AI systems

Regulation fosters accountability and transparency in AI-driven decisions, promoting public trust and fair competition.

By Nikhil Naren

Published Jun 22, 2024 | 3:00 PM Updated Jun 22, 2024 | 3:34 PM


As the world witnesses the strides in artificial intelligence (AI) governance and regulation, the pursuit of responsible and ethical AI practices has taken centre stage, urging nations to establish comprehensive regulatory structures. The narrative gradually shifts as leaders, tech visionaries, and policymakers recognise the urgency to steer AI development towards safety, equity, and accountability.

AI regulation is crucial for countries like India to ensure ethical standards, protect privacy, and enhance security. As AI integrates into various sectors, it is vital to prevent biases and discrimination, safeguard personal data, and protect systems from cyber threats. Regulation fosters accountability and transparency in AI-driven decisions, promoting public trust and fair competition. It also prevents misuse of AI technologies, such as in surveillance and deepfakes, ensuring public interest is upheld.

In the wake of the Bletchley Park Declaration last year, where global leaders convened to address the risks posed by cutting-edge AI technologies, India's move toward actively considering regulatory frameworks has gained momentum. Prime Minister Narendra Modi's call for a global ethical AI framework marks a pivotal shift, reflecting the nation's inclination toward a risk-based approach to AI governance. Initiatives like NITI Aayog's publications on Responsible AI for All underscore India's commitment to navigating the ethical complexities of AI. I hope that, with the current government in its third consecutive term, focused work on AI development and its ethical regulation will continue.


Beware of faultlines

The global consensus on the imperative for AI regulation resonates deeply in India's technological landscape. Left unregulated, AI could amplify the worst aspects of social media, enabling individualised targeting and the dissemination of entirely false information at scale. The evolving discourse on AI in India thus reflects a crucial juncture: the nation stands poised to shape its AI future through pragmatic and principled regulation. That regulation must span concerns ranging from privacy and ethics to transparency, accountability, and intellectual property rights.

The global community’s response to deepfake technology is noteworthy. China’s laws mandating the disclosure of deepfake technology aim to combat misinformation and protect individuals from manipulated content. Similarly, the European Union’s (EU) efforts, such as the Tackling Deepfakes in European Policy study and the recently adopted Artificial Intelligence Act, signify a concerted effort to address ethical concerns and combat the misuse of AI-generated content.

In critical industries like healthcare and finance, regulators must begin employing different strategies to ensure responsible AI use. The EU’s AI regulations categorise AI systems into risk levels with corresponding compliance standards, emphasising transparency and accountability. Laws requiring companies to notify employees of AI implementation prioritise transparency and ease the transition into AI-driven workflows. Initiatives like IBM’s AI Fairness 360 toolkit further demonstrate the commitment to reducing biases in AI systems and promoting equality.


Data consent and privacy

The Digital Personal Data Protection Act (DPDPA), with provisions addressing the role of AI in data processing, establishes legal accountability for AI systems that handle personal data. This classification acknowledges the capabilities of AI and enables transparent handling of data consent. Moreover, the DPDPA’s objective of safeguarding the rights of individuals whose data is processed is especially crucial in the context of generative AI. As the DPDPA places responsibility for compliance on Data Fiduciaries, contracts between companies and Data Processors will determine how compliance is ensured and how issues arising from specific AI usage are addressed. Further, the impending Rules under the DPDPA and industry measures will likely shape how AI capabilities align with prescribed norms.

However, challenges persist in regulating AI in India. AI’s evolving nature necessitates continuous adaptation of legal frameworks to address emerging issues such as bias mitigation, data security, and the ethical implications of AI-driven decisions. Moreover, effective implementation and enforcement of these regulations will be critical to ensuring their impact on safeguarding individuals’ rights and fostering responsible AI innovation in India.

Presently, the unregulated use of AI in India demands a critical evaluation of the multifaceted repercussions of this lack of oversight. As AI-driven automation reshapes work, urgent measures are needed to reskill and upskill the workforce for the evolving technological landscape. Meanwhile, the lack of clarity over ownership and attribution of AI-generated work weakens incentives for creators and innovators and can also mislead the public.

Ethical practices

Furthermore, the vulnerability of unregulated AI systems poses significant national security risks, heightening susceptibility to cyber-attacks and other malicious activities. India’s diverse cultural and societal landscape necessitates nuanced regulation. Unlike China and the EU, whose regulations follow a risk-based approach, India, as a developing country, must continue to prioritise comprehensive regulatory frameworks that safeguard individual rights while fostering research, development, and innovation. Further, an unregulated AI landscape might inadvertently disrespect cultural norms or sensitivities, underscoring the need for culturally aware AI systems and appropriate regulations to govern them.

Ethical and responsible AI practices are paramount to realising AI’s full potential while mitigating the associated risks. Through pragmatic and principled regulation, India can shape a future where AI supports and enhances human well-being.

(Nikhil Naren is a British Chevening Scholar and Assistant Professor at Jindal Global Law School and Of Counsel, Scriboard, New Delhi. Views are personal.)
