“For the European Commission, the regulatory temptation proved strongest”

Three years ago, I was part of a group of experts on artificial intelligence (AI) that produced a report on behalf of the European Commission, the “Ethics Guidelines for Trustworthy AI”. While we were finalizing it, the New York Times ran an article claiming that the Commission was preparing a second General Data Protection Regulation (GDPR), this time for AI algorithms. The publication disrupted the work of our group of experts: companies were already finding it difficult to implement the GDPR’s requirements. The content of the article was disputed by some members of our group as well as by representatives of the Commission. But three years later, the Commission has published a draft regulation on AI: the regulatory temptation proved strongest.

The need to regulate AI is not up for debate. But the operational means of doing so are not yet mature, let alone ready for widespread deployment.

The approach adopted in this draft regulation is, rightly, based on risk assessment, with measures that are more or less restrictive depending on the severity of the risk. The first difficulty of the exercise is defining the level of risk. How do we clearly differentiate between “unacceptable risk”, “high risk”, “AI with specific transparency needs” and “minimal risk”? It is quite understandable that social scoring (attributing a social score to a person) falls under the category of “unacceptable risks”, which Europe wants to ban.

Discord around killer robots

Other major risks, such as killer robots, the first uses of which have just been documented by the United Nations, have long been controversial: they sit between the need to develop legitimate defense weapons, the ethical question of delegating the act of killing a human to a machine, and the complexity of assigning responsibility when something goes wrong. This enduring subject of discord has mobilized communities of researchers, through petitions, as well as other actors in society. European states and manufacturers argue that if they refrain from developing this type of device, others will, and conflicts will then be fought with unequal arms.

For high-risk situations, on the other hand, the compliance of AI systems with applicable laws must be assessed, along with their robustness. So much for the theory. The problem lies in implementing these evaluation systems, which includes the auditability of AI systems. Today, there are no standards in this area. Some suggest operating through a manual compliance checklist, an unreliable method that leaves room for subjectivity. Others propose software to evaluate the behavior of these systems, which is far more rigorous but difficult to set up without considering the context in which the system performs; standardization efforts have only just begun.
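To make the contrast concrete, here is a minimal sketch of what a software-based behavioral audit can look like, as opposed to a manual checklist. Everything below is hypothetical: the toy `predict` model and the `robustness_audit` routine are illustrative stand-ins, not part of any proposed standard. The idea is simply that an automated audit wraps the system behind a fixed interface and measures a property (here, stability of decisions under small input perturbations) rather than relying on a human ticking boxes.

```python
import random

def predict(income: float, debt: float) -> int:
    """Toy credit-approval model standing in for a real AI system:
    returns 1 (approve) or 0 (reject)."""
    return 1 if income - 2 * debt > 10_000 else 0

def robustness_audit(model, cases, noise=0.01, trials=100):
    """Measure how often small input perturbations flip the decision.

    For each case, perturb both inputs by up to +/- `noise` (relative)
    `trials` times and count decisions that differ from the unperturbed one.
    Returns the overall flip rate in [0, 1]; lower means more robust.
    """
    flips, total = 0, 0
    for income, debt in cases:
        base = model(income, debt)
        for _ in range(trials):
            perturbed = model(income * (1 + random.uniform(-noise, noise)),
                              debt * (1 + random.uniform(-noise, noise)))
            flips += (perturbed != base)
            total += 1
    return flips / total

# A few audit cases, including one deliberately near the decision boundary.
cases = [(50_000, 15_000), (12_000, 1_000), (30_000, 9_990)]
rate = robustness_audit(predict, cases)
print(f"decision flip rate under +/-1% input noise: {rate:.2%}")
```

A real audit would replace the toy model with the system under review and add further checks (fairness across groups, drift over time); the point is that the criterion is measured, reproducibly, rather than self-declared.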
