A noted data-policy research institute has identified what it generously describes as a regulatory misalignment in efforts to regulate biometrics and other AI-based software.
The Ada Lovelace Institute says that framers of the European Union’s proposed AI Act are depending on technical standards to “provide the detailed guidance necessary for compliance” with the legislation’s goal to protect human rights.
Technical standards and standards for bolstering rights like privacy likely will inform each other in the final language. But the organizations crafting technical rules “seem to lack the expertise and legitimacy to make decisions about interpreting human rights law,” according to the institute.
Researchers outline some policy strategies that could improve implementation of the AI Act. (The legislation is expected to be voted on by a critical European Parliament committee this month.)
It is possible, they say, that key rights issues might just get a gloss or be overlooked altogether. The report notes that biometric ID systems – broadly considered high-risk operations – are fuzzily addressed as requiring an “appropriate level of accuracy.”
In another portion of the proposed AI Act, the level of risk following a mitigation process “must be ‘acceptable.’”
Meanwhile, even aligning biometric processes with the technical requirements of the EU’s GDPR, passed in 2016, remains a challenge.
Other passages give too little direction, leaving much to private contractors, who could end up deciding what level of risk to rights is acceptable in the development and deployment of biometric surveillance software.
The institute advises lawmakers to get civil society organizations involved now and deeply. They also should be “exploring institutional innovations to fill the regulatory gap.”