European states should strengthen their legislation to protect fundamental rights in the face of artificial intelligence, whose use is now rarely questioned even though it can cause errors and discrimination, according to a report published on Monday.
“Much of the interest is focused on its potential to support economic growth. How it can affect fundamental rights has received less attention,” writes the European Union Agency for Fundamental Rights (FRA), based in Vienna, Austria, in the 100-page document.
Artificial intelligence (AI), a somewhat catch-all expression, refers to technologies that allow machines to imitate some form of real intelligence, to “learn” by analyzing their environment instead of simply executing instructions dictated by a human developer.
This software, which encompasses a wide range of applications (voice assistants, voice and facial recognition systems, advanced robots, autonomous cars, etc.), is now used by public authorities as well as by the medical community, the private sector and education.
On average, 42% of European companies use AI. The Czech Republic (61%), Bulgaria (54%) and Lithuania (54%) are the countries where it is most prevalent.
Artificial intelligence is particularly popular among advertisers to target online consumers thanks to algorithms and “the coronavirus outbreak has accelerated its adoption,” the report says.
FRA investigators conducted approximately 90 interviews with public and private bodies in Estonia, Finland, France, Spain and the Netherlands.
“One of the risks is that people blindly adopt new technologies without assessing their impact before using them,” David Reichel, one of the authors of the text, told AFP.
Artificial intelligence can thus violate privacy, by revealing a person's homosexuality in a database, for example.
It can also lead to discrimination in employment, if certain criteria exclude categories of the population on the basis of a surname or address.
When they receive an incorrect medical diagnosis or are denied a social benefit, European citizens do not always know that the decision was made automatically by a computer.
They are therefore unable to challenge it or to lodge a complaint, even though errors can occur: artificial intelligence, created by humans, is not infallible.
In a recent example, the Court of Appeal in Britain found that the facial recognition program used by police in Cardiff could exhibit racial or gender bias.
“Technology is changing faster than the law. We must now ensure that the future EU regulatory framework for artificial intelligence is unequivocally based on respect for human and fundamental rights,” says FRA Director Michael O’Flaherty.
By CCEiT (AFP)