India’s booming medical AI sector risks patient safety without clinical validation, warns study

Startups in tech hubs like Bengaluru are churning out medical AI innovations, but a new study finds that many are not being clinically validated, a gap it warns is extremely risky.

Published Nov 13, 2024 | 7:00 AM | Updated Nov 13, 2024 | 7:00 AM


Imagine this: a cutting-edge Artificial Intelligence (AI) system promises to diagnose your condition in seconds, guiding doctors to treatment options tailored to your unique needs. But what if this system hasn’t been tested adequately to ensure it works safely in real-world scenarios?

This isn’t a dystopian what-if. It’s the reality in India’s rapidly growing medical AI sector. While AI startups mushroom in tech hubs like Bengaluru, their innovations often lack rigorous clinical validation, raising concerns about their safety and effectiveness.

“AI can revolutionise healthcare, but we cannot let excitement overshadow the need for caution,” said Dr Denny John, Professor at the Faculty of Life and Allied Health Sciences, Ramaiah University of Applied Sciences, whose recent paper, published in the International Journal of Technology Assessment in Health Care, calls for a framework to clinically validate digital health technologies in India.

The study synthesises insights from 17 experts spanning diverse domains, including clinical practice, health technology assessment (HTA), and digital health innovation.

“The goal was to identify systemic prerequisites and design an actionable framework that aligns with India’s unique healthcare landscape,” Professor John noted.

Researchers from across India, including Karnataka’s Ramaiah University of Applied Sciences and the Amrita Institute of Medical Sciences and Research in Kochi, along with colleagues from Kolkata and Kerala, have proposed a Digital Health Technology-Health Technology Assessment (DHT-HTA) framework.

The framework aims to streamline the evaluation of digital innovations in healthcare. The study presents a roadmap for implementing this framework in a country grappling with rapid advancements in health technology.

Promise vs. reality

Artificial Intelligence (AI) in healthcare is a double-edged sword. AI-powered tools can assist in diagnosing diseases, predicting patient outcomes, and even personalising treatments, but their reliability is not guaranteed.

Explaining why clinical validation is important, Dr John said AI technologies must satisfy stringent regulations for approval as medical devices, because the decision support they provide is continuously optimised and personalised in real time according to the patient’s phenotype.

“Most AI technologies in India are validated using internal datasets. They often perform well in controlled environments but fail to deliver consistent results when deployed in clinical settings,” he explained.

This failure stems from inadequate testing on diverse patient populations. AI tools rely heavily on the quality and diversity of their training data. If the datasets are not representative of real-world conditions, the algorithms are prone to errors, such as misdiagnoses or biased outcomes.
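To make that concern concrete, the sketch below shows the kind of subgroup audit such testing implies: it compares a model’s sensitivity and precision across patient subgroups on a held-out dataset. This is a minimal illustration assuming a binary classifier; the data, the “region” subgroup labels, and the audit_by_subgroup helper are hypothetical, not taken from the study.

```python
# Minimal sketch of a subgroup performance audit (illustrative; the data
# and subgroup labels are hypothetical, not from the study).
import numpy as np
from sklearn.metrics import recall_score, precision_score

def audit_by_subgroup(y_true, y_pred, subgroups):
    """Report per-subgroup sensitivity (recall) and precision."""
    results = {}
    for group in np.unique(subgroups):
        mask = subgroups == group
        results[group] = {
            "n": int(mask.sum()),
            "sensitivity": recall_score(y_true[mask], y_pred[mask]),
            "precision": precision_score(y_true[mask], y_pred[mask]),
        }
    return results

# Hypothetical external-validation set: true diagnoses, model predictions,
# and the setting each patient came from.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
region = np.array(["urban", "urban", "rural", "rural",
                   "urban", "rural", "urban", "rural"])

for group, stats in audit_by_subgroup(y_true, y_pred, region).items():
    print(group, stats)
```

In this toy example the model looks flawless on the urban subgroup but misses half the true cases in the rural one, exactly the kind of gap that validation on a narrow internal dataset would never surface.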

He said a recent literature review reported that most studies assessing AI did not include the recommended design features for the robust validation of AI. “If drugs have to go through phase 1, 2 and 3 clinical trials before they are accepted for treatment and now it is mandatory for medical devices also to be clinically validated, then why not medical AI in India,” Dr John asked.

How do you clinically validate AI devices?

The process of clinical validation, according to doctors, involves rigorously testing AI algorithms against large and diverse datasets, comparing their performance with established standard tools, and ensuring that the technology functions reliably across different patient populations.
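As a rough illustration of the comparison step, the sketch below evaluates a hypothetical AI tool and a standard tool against the same reference diagnoses on an external test set, reporting each one’s AUC with a bootstrap confidence interval. All data and scores here are simulated assumptions for demonstration, not results from any real device.

```python
# Minimal sketch of head-to-head external validation (simulated data;
# the "AI" and "standard" risk scores are hypothetical).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated external test set: reference diagnoses plus risk scores from
# a hypothetical AI model and an established standard tool.
y_ref = rng.integers(0, 2, size=200)
ai_scores = np.clip(y_ref * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)
std_scores = np.clip(y_ref * 0.5 + rng.normal(0.25, 0.3, size=200), 0, 1)

def bootstrap_auc(y, scores, n_boot=1000):
    """95% bootstrap confidence interval for the AUC."""
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))
        if len(np.unique(y[idx])) < 2:  # a resample needs both classes
            continue
        aucs.append(roc_auc_score(y[idx], scores[idx]))
    return np.percentile(aucs, [2.5, 97.5])

print("AI tool  AUC:", round(roc_auc_score(y_ref, ai_scores), 3),
      "95% CI:", bootstrap_auc(y_ref, ai_scores).round(3))
print("Standard AUC:", round(roc_auc_score(y_ref, std_scores), 3),
      "95% CI:", bootstrap_auc(y_ref, std_scores).round(3))
```

Overlapping confidence intervals here would mean the external data cannot yet show the AI tool outperforming the standard, which is precisely why evaluation on large, diverse cohorts matters.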

Dr John said that by conducting comprehensive research, one can address concerns related to algorithmic bias, data privacy, and potential unintended consequences, thereby mitigating the risks associated with AI adoption.


Bengaluru’s medical AI paradox

As India’s tech capital, Bengaluru is home to a bustling ecosystem of medical AI startups. These companies attract global attention for their innovative approaches. Yet, only a few focus on the critical step of clinical validation.

This paradox, the authors said, is troubling because the city also boasts world-class healthcare institutions that could spearhead collaborative efforts to validate these technologies. The absence of such initiatives raises questions about priorities in the innovation cycle.

The study highlighted notable companies leveraging AI to innovate within India’s healthcare landscape. One such company, mentioned in the study, is Niramai, based in Bengaluru, which developed “Thermalytix,” an AI-powered solution for early breast cancer detection using breast thermography and infrared imaging.

This technology claims to detect tumours up to five years earlier than traditional methods like mammography. Despite its promise, Niramai’s approach has not gained full acceptance within the medical community, with the Society of Breast Imaging India explicitly stating it does not support breast thermography as a primary or adjunctive diagnostic tool.

The lack of long-term clinical trials and validation underscores the need for rigorous evaluation frameworks like the proposed DHT-HTA to ensure safety and efficacy.

Lack of validation: a public health risk

Another example, mentioned in the study, is Matra Technology, a startup incubated at IIT-Bombay, which developed a mobile-based AI platform to reduce pregnancy risks. This innovation represents the growing use of AI in addressing critical health challenges in India, particularly in maternal and child health.

However, as the study pointed out, many such technologies enter the market with limited real-world validation, raising concerns about their reliability and scalability.

The inclusion of such examples in the study reinforces the urgent need for a comprehensive assessment framework to guide the deployment of digital health technologies and ensure their alignment with evidence-based healthcare practices.

The lack of validation isn’t just a technical flaw—it’s a public health risk. “The innovation ecosystem in Bengaluru is thriving. However, the lack of validation protocols is a glaring gap. Startups must collaborate with healthcare providers and regulators to ensure their tools are safe for patients,” Prof John insisted.


Why does validation matter to the common man?

If you’ve ever wondered whether a medical test is accurate, consider this: an AI-based test is only as good as the testing behind it. Unlike traditional medical devices that undergo years of scrutiny, AI technologies often enter the market with limited validation.

The implications are significant for the following groups:

  • Patients: Incorrect diagnoses can lead to emotional distress, unnecessary treatments, or worse, missed critical conditions.
  • Doctors: Unreliable AI tools erode trust, making clinicians hesitant to adopt innovations.
  • Startups: Without validation, Indian companies face barriers to scaling globally and risk losing credibility.

The authors of the paper emphasise that clinical validation is not just a technical necessity but a moral imperative. “Patient safety must always come first. Without validation, we risk turning innovation into a liability,” the authors explained to South First.

Building trust in medical AI

The authors’ proposed Digital Health Technology-Health Technology Assessment (DHT-HTA) framework aims to ensure AI systems meet global benchmarks for safety and effectiveness. This includes rigorous external validation on diverse patient groups, addressing algorithmic biases, and adhering to ethical guidelines.

Experts believe that bridging the validation gap will require collaboration between startups, healthcare institutions, and policymakers.

They propose a roadmap for bridging the validation gap:

  • Upgrade infrastructure: Improve IT and healthcare systems, especially in rural areas, to support validation efforts.
  • Train healthcare professionals: Equip doctors and healthcare workers to assess and use AI tools effectively.
  • Enforce data privacy: Strengthen laws to protect patient information.
  • Increase public funding: Allocate more resources for evaluating digital health tools.
  • Promote stakeholder collaboration: Engage government, private sectors and academia in building a validation framework.
  • Customise frameworks: Adapt global best practices to suit India’s unique healthcare needs.

What can you do?

As patients, doctors, and stakeholders in healthcare, ask critical questions:

  • Has this AI tool been clinically validated?
  • Were the tests conducted on diverse patient populations?
  • What safeguards are in place for errors?


(Edited by Sumavarsha Kandula)
