University of São Paulo: Facial recognition is a controversial topic in a bill to regulate AI


The proposed regulation of artificial intelligence in Brazil is a substitute text for bills 5,051/2019, 21/2020 and 872/2021, which concern the same topic. The new text was analyzed by a commission of jurists, among them Professor Juliano Maranhão from the Department of Philosophy and General Theory of Law at the Faculty of Law of USP.

“We analyzed proposals from other countries' legislative experiences, mainly the European one, which is the most robust. We then put forward a bill that takes a risk-based approach,” he explains. The bill is being discussed in the Senate and was voted on in the Chamber of Deputies under an urgency procedure.

Risk assessment
“Intervention and regulatory restrictions become greater depending on the degree of risk involved in the application of artificial intelligence,” comments the professor. The aim of this approach is to require best practices that mitigate the risk of AI, which varies with its use, while preserving the benefits of the technology.

One of the most discussed topics was facial recognition: “It is a very controversial topic, one of the topics we discussed the most, because in some legislative proposals it falls into the category of intolerable risk and, therefore, should be banned,” says Maranhão.

In the substitute text, facial recognition is classified as intolerable only when used in public spaces in real time: “Such technology would bring a kind of ‘mass surveillance’, that is, even if there is no suspicion or anything against an individual moving freely on the street, he would be recognized and his movements recorded. This can impede free movement in public spaces.” However, the classification of intolerable risk in this circumstance does not prevent the application of facial recognition in cases where it qualifies only as high risk, for example, its use by security forces to identify fugitives and suspects.

Initiative
To regulate these risks, care must be taken with governance: “Governance means a series of technical or organizational procedures on the part of the people who operate this system, to mitigate the risks of this technology,” says the professor. He also points out that artificial intelligence can reproduce structural discrimination present in society because of the data it is trained on: “AI processing is already known to have lower accuracy when identifying black people compared to white people, and women compared to men. This had the following impact: in the first use of the technology in different states, among the fugitives and wanted people detected, 80% of the misidentifications involved black people.”

The dynamic nature of the technology is a problem when it comes to creating more precise regulation, because there is always something new that can make old positions obsolete: “We cannot, for example, write into law the practices that are considered best practices today. The technology will evolve, the details may change, so we do not specify what the best practices are because they may become irrelevant,” says Maranhão. For the expert, what is on the agenda is the obligation that organizations, whether those that develop AI or those that apply it, commit to best practices for dealing with each risk. It is necessary to rely on self-regulation so that the different sectors that use artificial intelligence intensively can have their own specific codes of conduct.

There are organizations that discuss best practices for the use of AI, risk analysis and mitigation proposals. One example is Lawgorithm — created by professors from the Polytechnic School, the Institute of Mathematics and Statistics, the Faculty of Philosophy, Letters and Human Sciences and the Faculty of Law at USP.