AI applications are in use across many industries, but they also carry risks. The EU's planned rules are meant to ensure responsible use, yet they could slow the technology down.

The EU regulation on the use of artificial intelligence (AI) currently under discussion presents many companies with major hurdles. According to an analysis by the “appliedAI” platform, more than half of the AI applications examined would fall into the so-called high-risk category under the new rules, making their continued use impossible without considerable additional financial and staffing effort. The study is to be published by the Bavarian Digital Ministry on Tuesday and was made available to the German Press Agency in advance.

Specifically, the database analysis concluded that 18 percent of the 106 AI systems examined could be assigned to the high-risk class and 42 percent to the low-risk class. For about 40 percent of the applications examined, a clear classification is currently not possible, the study said. This means that almost 60 percent of all applications could face strict requirements and certification obligations.
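The "almost 60 percent" figure follows from combining the clearly high-risk share with the unresolved cases. A minimal sketch of that arithmetic, using the rounded percentages reported in the study (the raw per-system counts are not given in the article):

```python
# Shares of the 106 examined AI systems, as rounded percentages
# reported by the appliedAI study (assumed, not raw counts).
total_systems = 106

share_high_risk = 0.18   # clearly high-risk
share_low_risk = 0.42    # clearly low-risk
share_unclear = 0.40     # classification currently uncertain

# Systems that could end up facing strict requirements and
# certification obligations: the clearly high-risk ones plus
# all cases whose classification is still unresolved.
share_potentially_high = share_high_risk + share_unclear
print(f"{share_potentially_high:.0%}")  # → 58%, i.e. "almost 60 percent"
```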

At the beginning of December, the EU countries laid down comprehensive rules for the use of artificial intelligence for the first time. The Council of the EU states announced at the time that the decision was intended to ensure that AI systems are safe and respect fundamental rights. Before the new rules can actually apply, the EU states still have to agree on a common position with the European Parliament.

Too much focus on risk?

The law is intended to set global standards: the higher the potential dangers of an application, the stricter the requirements. Violations carry heavy penalties.

“The study shows that the draft of the EU’s AI regulation is too risk-focused and still unclear in too many places. Such a set of rules does not work in practice and creates unnecessary hurdles for the economy,” said Bavaria’s Digital Minister Judith Gerlach (CSU). For the study, AI applications that companies in Germany had registered in a database were evaluated against the criteria of the draft EU regulation.

appliedAI Managing Director Andreas Liebl called for a revision of the risk-classification rules: “While we definitely need good regulation for the use of risky AI systems, we must not forget the benefits of these systems and focus one-sidedly on the risks.” In addition, he said, any kind of uncertainty combined with high penalties would lead companies to make overcautious decisions and possibly rule out far too many applications.

Artificial intelligence usually refers to applications based on machine learning, in which software combs through large amounts of data for patterns and draws conclusions from them. Such programs are already in use in many areas: they can, for example, evaluate images from computed tomography scanners faster and more accurately than humans. Self-driving cars use the same approach to predict the behavior of other road users, and chatbots and automatic playlists from streaming services also rely on AI.
