
Stalking-horse report on facial recognition in Indian airports draws arrows

A report out of India about responsible AI has been quickly criticized for being incomplete.

The 74-page report in question, Responsible AI for All by the Indian government think tank NITI Aayog, is framed as a starting point for discussion among everyone with a stake in AI's development. On that score, the paper, published last month, has succeeded in provoking debate.

It leans hard on Digi Yatra, India's growing airport digitization project. Digi Yatra, which began deployments in three airports this month, is meant to deliver paperless travel and contactless check-in and boarding in a way that is both routine and responsible.

NITI Aayog, literally the National Institution for Transforming India policy commission, does well to not put too heavy a thumb on the scale. The authors clearly see much to like in facial recognition in and out of transportation.

They briefly note design-based risks and the rights challenges that government and industry face in using face biometrics systems.

But for some critics, the authors do not go far enough in examining the idea.

Executives at the Internet Freedom Foundation, a homegrown digital-liberties advocacy group, posted what they see as soft spots in the report's policy and implementation considerations. (They also raised objections to Digi Yatra itself that are not discussed here.)

The foundation found that NITI Aayog failed to assess the harms police could cause by using facial recognition software. Law enforcement is "the most harmful" role for face biometrics, the group argues, and the technology will be integrated into the 30 or so facial recognition projects that police are already running.

Closely related, the discussion report offers no suggestions for preventing function creep, regardless of who is wielding the algorithms.

The report also misses explainability, which is increasingly seen as a bedrock attribute of trust in AI. The foundation notes that everyone from national government leaders to police officers on the street needs to understand how deep learning works in order to properly interpret face match results.

Emotion recognition also was shortchanged, foundation executives felt. There is little consensus on how it works, or how well. And it is hard to argue that this capability will not be folded into government facial recognition systems soon.

Another rights advocate in India, the Software Freedom Law Center, also responded quickly to NITI Aayog's invitation to comment. Executives in that group came down hard on what they felt was the report's neglect of explicit informed consent.
