Published Feb 16, 2026 | 8:00 AM · Updated Feb 16, 2026 | 8:00 AM
Synopsis: At the Strategic Design Workshop on AI Engineering for the Disability Sector, held in Bengaluru ahead of the IndiaAI Impact Summit 2026, experts argued that artificial intelligence could transform disability inclusion if it were designed differently — built to handle extreme use cases, with disability inclusion as the primary design lens.
When Dipesh Sutariya, co-founder and CEO of EnAble India, talks about “native intelligence”, he is not referring to algorithms or computing power. He is talking about everyday ingenuity — the quiet, practical problem-solving that persons with disabilities and their families develop simply to navigate daily life.
Take something as simple as identifying a toothbrush in a shared household. A person who is blind may tie a rubber band around the handle or make a small scratch at the base so it can be recognised by touch.
Or consider a child who struggles with fine motor movement and cannot hold a pencil — until someone places a potato in the child’s hand and inserts the pencil into it, allowing them to write.
“These are not high-tech solutions,” Sutariya told South First. “But they are deeply intelligent ones. The exclusion happens because such knowledge remains local and invisible.”
According to him, that invisibility is where many accessibility gaps begin — and where artificial intelligence could play a transformative role if designed differently.
Across communities, families often face similar accessibility challenges but solve them in isolation. One parent may discover a simple workaround for their child, while another, somewhere else, struggles with the same issue without knowing that a solution already exists.
Sutariya believes AI can help bridge this gap by functioning as a discovery layer that captures and shares grassroots knowledge rather than replacing human innovation.
“A parent should be able to ask, in their own language and through voice, ‘My child cannot hold a spoon, what can I do?’ and receive practical, lived solutions that others have already discovered,” he said.
When AI enables access to this kind of native intelligence, he added, it moves beyond abstract technology and becomes a tool that directly improves everyday life.
These ideas were discussed at a Strategic Design Workshop on AI Engineering for the Disability Sector held in Bengaluru ahead of the IndiaAI Impact Summit 2026.
Organised by EnAble India in collaboration with the International Institute of Information Technology (IIIT) Bangalore, the workshop brought together policymakers, industry leaders, researchers, and disability advocates.
The focus was to explore how AI systems can be designed to address extreme use cases, using disability inclusion as the primary design lens. Speakers highlighted that inclusive AI is not only about accessibility or social responsibility but also about building systems that are reliable and scalable.
Dr Amit Sheth, Founder of the International AI Research Organisation (IAIRO), emphasised this perspective, stating, “If we build AI that can navigate the ‘sparse data’ and ‘high stakes’ of a disabled life in India, we haven’t just built an accessibility tool. We have built the most robust, compact, and trustworthy sovereign AI in the world. The disability context is not our edge case; it is our training ground for excellence.”
Despite increasing investments in skilling programmes and inclusive employment initiatives, many people with disabilities remain excluded from workforce participation.
Experts at the workshop identified a “labour market gap” caused by mismatches between skills, workplace expectations, and accessible environments.
Sutariya pointed out that existing inclusion and skilling programmes often operate in isolation, forcing individuals to navigate multiple disconnected systems.
“Today, the responsibility of navigating training, support systems, certifications, and employment pathways is placed entirely on the individual,” he said. “For persons with disabilities, this fragmentation becomes a major barrier.”
If AI becomes what Sutariya describes as an orchestration layer, it could help connect these fragmented systems.
Instead of individuals repeatedly adapting to rigid processes, AI could analyse a person’s abilities, support needs, and context, and guide them through appropriate training and employment pathways.
The results, he said, would be measurable and outcome-driven. These could include higher completion rates in training programmes, smoother transitions from skilling to employment, fewer drop-offs caused by accessibility barriers, and increased confidence among employers hiring persons with disabilities.
“In essence, AI can help move inclusion from a series of disconnected efforts to a coordinated system,” he said.
Experts at the workshop also argued that disability should not be treated as a secondary consideration in technology design. Instead, it should serve as the baseline.
They noted that disability often shows us where systems break first. When technology works for people with visual, mobility, or cognitive challenges, it usually becomes simpler and easier for everyone to use. In fact, many features we now see as everyday tools — such as voice assistants and predictive text — were first created to support accessibility needs.
For India, this approach holds particular relevance due to the country’s linguistic, cultural, and socio-economic diversity. AI designed for an “average user” risks excluding large sections of the population.
“If AI is designed with disability as the baseline, it becomes robust enough to handle real-world complexity,” Sutariya said. “That is when it becomes truly ready for India’s future.”
Dr Satendra Singh, Director Professor of Physiology at the University College of Medical Sciences, Delhi, warned about what he terms “automated ableism,” referring to disability bias that can get built into AI systems.
He explained to South First that this happens “when automated technologies treat a person with a disability less fairly than a non-disabled person in comparable situations.”
Dr Singh pointed out that many AI models are trained on data shaped largely by non-disabled experiences, which can leave out the realities of persons with disabilities.
As a result, he noted, AI can end up reflecting and strengthening existing social inequalities instead of helping reduce them.
“Recognising and measuring explicit disability bias is essential, not only to expose the social harm it can cause, but also to help developers and users understand how automation can silently reinforce exclusion and inequality rather than challenge it,” he added.
His views highlight the need for more inclusive data and thoughtful design in AI development, so that emerging technologies become more accessible and truly serve diverse communities.
The workshop included keynote sessions, panel discussions, and multi-stakeholder working groups involving government representatives, researchers, industry partners, and persons with disabilities as co-designers.
The discussions resulted in a requirement-level blueprint intended to guide AI system design at the upcoming IndiaAI Impact Summit 2026.
Prof Amit Prakash of IIIT Bangalore highlighted the importance of collaborative design, noting that multi-stakeholder engagement is essential to developing technologies that create lasting social impact.
The workshop ultimately reflected a broader shift in India’s approach to AI — one that links policy, infrastructure, technology, and lived experience.
By treating disability inclusion as a starting point rather than an afterthought, experts believe India can build AI systems that are not only inclusive but also stronger and more resilient for the entire population.
(Edited by Muhammed Fazil.)