A new legal EU framework regulating Artificial Intelligence

On April 21, 2021, the European Commission published its draft proposal for a regulation on Artificial Intelligence (AI). As the world's first comprehensive legal framework for AI, it is intended to set a global standard for the use of these technologies. If it is not implemented properly, however, the EU could be put at an international disadvantage, innovation by SMEs and start-ups could be hampered, or the use of AI could threaten fundamental rights.


I.  Important statements of the EU Commission's draft regulation

The draft takes the form of a regulation, meaning that upon entry into force it will be directly applicable in every Member State without any further act of transposition.

  • Who is affected and what exactly is covered by the regulation?

In terms of subject matter, the regulation is intended to apply to AI systems, which it defines very broadly as: "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with". The personal and territorial scope covers providers and users, wherever the AI application is placed on the market or used.

  • What exactly is the legal framework about?

The basic structure of the Commission's draft is "risk-based": the greater the hazard potential of a particular AI application, the more requirements and regulations must be fulfilled. Four risk categories are envisaged, each subject to different rules: unacceptable risk, high risk, limited risk, and minimal risk.

"Real-time" remote biometric identification for law enforcement purposes, as well as any application that manipulates human behaviour to circumvent user free will (e.g. voice assistant toys that entice children to engage in risky behaviour), fall into the category of unacceptable risks and will be prohibited.

The draft focuses primarily on rules for high-risk applications. This category includes systems that assess individuals, such as recruitment or credit-scoring tools, as well as systems used to assess examinations. Here, the Commission considers that there is a great risk to the health and safety of people or to their fundamental rights. Such systems are therefore subject to strict requirements concerning the quality of the data sets used, technical documentation and record-keeping, transparency and provision of information to users, human oversight, and robustness, accuracy, and cybersecurity.

For limited-risk AI systems, transparency obligations are imposed on providers; minimal-risk systems may be developed in accordance with existing law, without additional obligations.

II. First Conclusion

In creating laws for new technologies, the EU operates in a field of tension: innovation must be encouraged, but citizens' rights must also be respected. It can be considered positive that the EU Commission has recognized the technical developments and their social relevance and is addressing them at this point in time.

As the legislative process still requires the involvement of the European Parliament and the Council of the European Union, it remains to be seen whether, and in what respects, the draft regulation on AI will change. LexDellmeier will keep you updated on this topic.
