We need to lobby for the right and responsible implementation of a powerful technology that carries the risk of serious misuse.
Last week, news broke that facial recognition technology will be deployed at the Delhi and Bengaluru airports to automatically process passenger entry.
The rollout will happen through the new Digi Yatra app, which allows passengers to self-register so that they can avail of the seamless check-in experience that facial recognition will enable. At the time of writing, this is an optional service and is not yet mandatory, but with all such technology, one can assume mandatory use is only a matter of time.
By itself, this news would be significant, as it holds the potential to vastly change the game for both civilians and security forces. However, it also comes against the backdrop of an increasing push for digitisation by the central government, which has at the same time shown a lack of clarity and transparency on the ambit of citizens' data privacy and protection. It is therefore urgent to understand the scope and implications of this new service.
At present, many of the risks are limited, since the service is opt-in and passengers must submit their data themselves. But looking ahead, there are five key questions to be asked.
A key question about any personally identifying data should be how it is stored and secured. While Bangalore International Airport Limited (BIAL) has clarified that the data will be kept secure and deleted within 24 hours of travel, no such information is currently available for the Delhi airport.
Facial recognition works like any other identifying technology: it tries to find unique patterns that distinguish each individual.
Similar to how a fingerprint reader will identify individuals based on their unique ridge patterns, facial recognition technology looks to map biometric features on a person’s face to determine a unique individual. These include factors such as distance between the eyes, the distance between forehead and chin, and dozens of such feature mappings.
Because of this, facial recognition technology is highly dependent on the training images used to teach the algorithm how to detect faces. This is mostly done by feeding the algorithm millions of different images and teaching it to identify the differences in each, thereby creating a large learning set based on which it can read and identify newer images presented to it.
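The matching step described above can be sketched in a few lines. This is a toy illustration, not how Digi Yatra or any real system works: the feature vectors, the `identify` function, and the distance threshold are all invented for the example, and a production system would derive its measurements from a trained neural network rather than a handful of numbers.

```python
import math

# Hypothetical enrolled passengers. Each vector stands in for measurements
# like inter-eye distance and forehead-to-chin distance, normalised to
# comparable scales; real systems use hundreds of learned features.
enrolled = {
    "passenger_a": [0.62, 1.10, 0.48, 0.91],
    "passenger_b": [0.55, 1.32, 0.51, 0.77],
}

def euclidean(u, v):
    """Distance between two feature vectors: smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(probe, gallery, threshold=0.15):
    """Return the enrolled identity closest to the probe, if close enough."""
    best_id, best_dist = None, float("inf")
    for identity, features in gallery.items():
        d = euclidean(probe, features)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

# A capture of passenger_a with slight measurement noise still matches,
# while a face unlike anyone enrolled is rejected.
print(identify([0.60, 1.12, 0.47, 0.93], enrolled))  # → passenger_a
print(identify([0.10, 0.40, 0.95, 0.20], enrolled))  # → None
```

The point of the sketch is that everything hinges on the features the model has learned and on where the threshold sits, which is why the training data and the retention of passenger images matter so much.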
It is therefore quite likely that passenger images will be retained to further improve the algorithms. This makes it imperative that data storage and retention practices be strongly codified and transparently communicated to the public.
Following the same logic, it is important to also understand who has access to the airport facial recognition data.
Given that the Digi Yatra app is being developed by a Portuguese company, Vision Box, it is crucial to clarify what level of access to the data the company and the people involved in the project will have. Would they be allowed to monetise this information, perhaps selling it to private players and becoming a facial data provider for others looking to implement similar technologies at malls, offices, and so on?
Even within the airport authorities and government agencies, clear rules and guidelines should be instituted to determine access to this data.
The first two questions lead to the most troubling query: how will this information be connected to, and used within, the government's already burgeoning digitisation drive? As such information is collected and collated, we run the risk of turning into a surveillance state.
RTIs filed by activists have revealed that the Delhi Police used facial recognition technology to try to identify people involved in the Northeast Delhi riots and the Kisan Rally, and that they consider an 80 percent match accurate enough.
Unlike a fingerprint, which is read by pressing against a scanner and thus yields a clear image, facial recognition is often done on images taken from a distance, subject to varying lighting, camera angles, skin pigmentation, motion blur, and other real-world factors that are not easily accounted for. This increases error rates and introduces the risk of faulty detection.
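The 80 percent figure mentioned earlier is a threshold on a match score, and where that bar is set directly trades off false rejections against false matches. The sketch below is a hypothetical illustration, assuming a made-up `similarity` scoring function and invented feature vectors; real systems compute scores from trained models, but the trade-off works the same way.

```python
import math

def similarity(u, v):
    """Turn a Euclidean distance into a 0-100 'match percentage'.
    Purely illustrative scoring; real systems use model-specific scores."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return max(0.0, 100.0 * (1.0 - d))

reference = [0.62, 1.10, 0.48, 0.91]    # enrolled passenger
same_person = [0.60, 1.08, 0.49, 0.90]  # clean capture of the same face
stranger = [0.52, 1.04, 0.53, 0.82]     # a different, similar-looking person

# A strict 95 percent bar accepts the genuine capture and rejects the
# stranger; relaxing it to 80 percent lets the stranger through as a
# false match.
for threshold in (95, 80):
    print(threshold,
          similarity(reference, same_person) >= threshold,
          similarity(reference, stranger) >= threshold)
```

Lowering the bar makes the system feel seamless because fewer genuine passengers are rejected, but it is exactly what lets a lookalike be waved through, or an innocent person be flagged in a police search.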
However, in practice, this has been found to be even more problematic as real-world biases often creep into the training data fed to such algorithms.
Studies in the US have found that accuracy was poorest for subjects who were female, Black, and 18-30 years old. In fact, error rates for darker-skinned women were found to be as high as 34 percent, compared with under 1 percent for lighter-skinned men, clearly reflecting the societal biases embedded in the data used to train these algorithms.
In India, the risk is equally, if not more, dire given the country's vast diversity of features and skin tones. Poorly trained algorithms could wreak havoc on minority and under-represented communities.
Given these risks, it is imperative there be a clear demarcation between the use and interconnectivity of facial recognition data.
Given the risk of false identification or misidentification, we also need clear guidelines on how individuals can prove their identity in alternative ways.
Will people be expected to carry additional documentation to verify themselves, just in case? What happens if the system mistakes them for someone else and denies them boarding? Whom does a person turn to with a complaint, and will that process be as seamless as the facial recognition technology purports to be?
And finally, who is accountable for any mistakes or misuse of the airport facial recognition system? Given the number of stakeholders involved in this project — the airport authorities, the airlines, the third-party software developer, and both the Central and state governments — there is a high chance that all accountability will be avoided and any issues would turn into a legal nightmare.
Furthermore, this uncertain quagmire makes the system ripe for misuse by parties with little to no risk of consequence. Hence, before such systems are further rolled out, we need strong laws and clear accountability defined.
Technology like this is not without its benefits, though, and can certainly help ease congestion and improve security at our airports.
It would also be a losing battle to try and prevent such technology from being implemented as it is being adopted the world over.
However, the focus for us should be lobbying for the right and responsible implementation of such a powerful technology that carries the risk of serious misuse.
(Sidharth Sreekumar (he/him) is an advocate of ethical technological advancement, and is interested in exploring the confluence of technology and societal impact. He is currently a Senior Product Manager at the Economist Intelligence Unit. These are the personal views of the author)