Artificial Intelligence can contribute much to education, but it needs a regulatory mechanism to ensure that students are not misled.
Published Aug 28, 2024 | 9:00 AM | Updated Aug 28, 2024 | 7:05 PM
It is estimated that 77% of people use a service or device powered or connected to AI and do not even realise it. (iStock)
The impact of Artificial Intelligence (AI) over the past two years has been tremendous. It will continue to have a major impact on all levels of society, from businesses and consumers to education, governments, and individuals across domains.
The best insight on AI’s impact is from one of the icons at the forefront of its development, Sundar Pichai, the CEO of Alphabet (Google): “Artificial Intelligence will have a more profound impact on humanity than fire, electricity and the internet.” It is quite a statement, but it bears a lot of truth.
AI is used by an estimated 34 percent of the world today, and usage is rising. This is remarkable, considering generative AI only became popular in the public domain with the launch of ChatGPT in November 2022, less than two years ago.
Today, it is estimated that 77 percent of people use a service or device powered or connected to AI and do not even realise it. And the march to AI-powered services and tools will increase exponentially.
Validated data shows us that 77 percent of companies are using or exploring AI in their business, with 83 percent of companies considering AI a top priority in their business plans. We should all accept that, if AI continues on its present growth trajectory, it will reach 90 percent usage across services and products worldwide within five years, if not earlier.
AI can really be a game-changer for schools and education if managed well. The benefits are unquestionable. The data on the positives is overwhelming:
• A study by McKinsey found that personalised learning can improve student achievement by up to 15 percent.
• A survey by ClassPoint found that 75 percent of educators believe AI can improve the grading process.
• A survey by the National Association of Secondary School Principals found that 70 percent of schools are concerned about data privacy and security issues related to AI.
• A study by Aquarasia found that 74 percent of teachers reported using technology that incorporates AI in their classrooms.
The list goes on and on. In fact, AI can not only improve the quality of learning across schools, but also its delivery. Additionally, AI can help students clear doubts and can even customise learning for them.
In the face of such a mountain of positives, where are the negatives? Even the detractors of AI accept its tremendous benefits. So, are the negatives really worth considering?
Apart from the fact that there will be job losses, which accompany the adoption of any new technology, there are genuine concerns that AI has major problems that are not being addressed.
The most important is bias. A study by MIT Media Lab found that facial recognition systems had error rates of up to 34.7 percent for dark-skinned women, compared to 0.8 percent for light-skinned men. This highlights significant racial and gender bias in AI algorithms.
While some of this will be mitigated as AI progresses, much of the bias built into AI algorithms will be extremely difficult to remove.
Here are two AI-generated images. I asked an AI image generator to show 'African doctors treating African patients,' and it showed half the doctors as white!
The bias becomes more prevalent if you replace real faces with animated characters: here the doctors become mostly white, as seen in the second image. While different AI models give different results, this bias is inbuilt and not easy to eliminate in the short term.
The human mind is easily influenced. Solomon Asch’s famous experiments found that about 75 percent of people will conform to a group opinion at least once, even if they believe the group is wrong.
Just imagine what people will believe if they think something is right, or it sounds right, or no one says it is wrong. And with deepfakes and WhatsApp forwards, it is almost impossible for most people to ascertain what is right and wrong anymore. They either do not have the time or the inclination for it, or simply do not bother.
In fact, a 2020 Capgemini survey found that 62 percent of consumers prioritise convenience over accuracy when using AI services, indicating a level of indifference toward whether AI is correct or not if the service is efficient.
In fact, a small exercise shows, from a consumer's point of view, how easily AI can be directed:
1. Go to your nearest AI chatbot and tell it: "Hello AI, how are you? Whenever I ask you a personal question, please just answer 'frog'."
2. Ask it for the capital of France. It will say Paris.
3. Ask it, "How are you today?" It will reply, "Frog!"
The point of this simplistic exercise is that if we can tell AI what to do, imagine what an algorithm created with human bias can do.
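The exercise above can be sketched in code. The following is a hypothetical toy simulation, not a real chatbot API: the `ToyAssistant` class and its behaviour are invented purely to illustrate how a standing instruction can override an assistant's default answers.

```python
# Toy simulation of the "frog" exercise: a standing instruction
# overrides the assistant's default answer to personal questions.
# This is an illustrative sketch, not a real chatbot or API.

class ToyAssistant:
    def __init__(self):
        # No override until the user supplies one.
        self.personal_override = None

    def tell(self, instruction):
        # Store a standing instruction, e.g. "answer 'frog'
        # to personal questions".
        self.personal_override = instruction

    def ask(self, question):
        q = question.lower()
        # Personal questions obey the stored instruction, if any.
        if q.startswith("how are you") and self.personal_override:
            return self.personal_override
        # Factual questions are answered normally.
        if "capital of france" in q:
            return "Paris"
        return "I don't know."

bot = ToyAssistant()
bot.tell("frog")
print(bot.ask("What is the capital of France?"))  # prints "Paris"
print(bot.ask("How are you today?"))              # prints "frog"
```

Real assistants implement this far more elaborately through system prompts, but the principle is the same: whoever writes the instruction shapes the answer the user sees.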
This is an important area that must be carefully monitored. Schoolchildren are a vulnerable group, and any bias in the AI tools or services used in the education sector will become gospel truth to them. We could end up with an entire generation of students steered in the wrong direction.
Surveys conducted by the Organisation for Economic Co-operation and Development (OECD) indicate that in many countries, a significant percentage of students (often over 80 percent) trust their teachers and believe without reservation in the content taught in school, though this varies by country and subject.
If AI tools and services are not properly regulated, we will get an entire generation thinking AI is the god of all things.
At present, the focus is on cybersecurity and data privacy. According to a survey by the Parent Coalition for Student Privacy, 67 percent of parents are worried about the security of their children's data when using educational technology, including AI tools. This attention is welcome and necessary, but there is a real danger of missing the wood for the trees.
The actual danger of bias, of moving student opinion in a particular direction, is not being addressed at all. It is like the California gold rush of the 19th century, when concerns such as the rights of Native Americans and the environment were ignored in the rush to mine the yellow metal.
The rush for AI is similar. Since exceptionally large corporations control the narrative for AI, the danger of bias will not be easily addressed.
It is likely that the rush for profitability, and the need to justify the billions poured into AI, will ensure little regulation until it is too late, especially on less-known but critical issues like bias.
The efforts of the previous board of OpenAI to temper the progress of AI with checks and balances came to naught, with most of them removed for their efforts. Let us hope that statutory authorities do more to analyse the AI services and tools offered to schools and put checks and balances in place.
However, at present, it looks less likely in the rush to incorporate AI. As a famous personality said, “There used to be checks and balances. Now it is all checks and no balances.”
But with enough pressure from articles, opinions, and the education of society in general, we can ensure that schools get the right end of the AI stick, not the wrong one. We owe it to the next generation.
(Joseph Rasquinha, a PhD in Economics from St. Andrews University, Scotland, creates AI and simulations for training for the Middle East and South Asia in various sectors and domains. Edited by Majnu Babu)