A new draft of the Act on Artificial Intelligence presented by the European Parliament. Definitions and the classification of AI systems have been amended

traple.pl 2 years ago

On 11 May 2023, the joint committees of the European Parliament (IMCO and LIBE) concluded many months of negotiations and adopted a draft of Parliament's position on the proposed Act on Artificial Intelligence (AI Act)[1]. Parliament's position makes numerous changes relative to the existing versions of the proposal, both the one presented by the Commission[2] in April 2021 and the one adopted by the EU Council[3] in December 2022. The most significant changes are outlined below. They include the introduction of new definitions and a proposal to change the classification of AI systems, which will directly affect the obligations of providers, users (deployers), importers and distributors of AI systems.

A new definition of the AI system

In its position, Parliament adopted a completely different approach to the definition of the AI system than in the previous drafts. The position states that an AI system is to be "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments". This is a much broader definition than before. It is also very similar to the definition adopted by the OECD in 2019[4] and one partly functioning in US law[5].

The definition of the general purpose AI system introduced by the Council has also changed. According to Parliament's position, this concept is intended to cover "an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed".

Amendments to prohibited AI systems

In its proposal, Parliament has significantly expanded the list of AI systems posing unacceptable risks, which should be banned from being placed on the market, put into service or used.

The list of prohibited AI systems now includes:

  • biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics, or based on the inference of those attributes or characteristics;
  • creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • inferring the emotions of natural persons in the areas of law enforcement, border management, the workplace and educational institutions;
  • analysing recorded footage of publicly accessible spaces by means of "post" remote biometric identification systems, unless subject to prior judicial authorisation and strictly necessary in connection with a specific serious crime. Notably, the list of offences justifying such action has been significantly narrowed compared with the previous drafts of the AI Act.

In addition, Parliament has extended and clarified the scope of the systems already on the list, i.e. systems using subliminal techniques, systems that exploit the vulnerabilities of specific groups of persons, and social scoring systems. The rules governing the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes have also been substantially modified.

Furthermore, predictive systems based on profiling, location or past criminal behaviour have been moved from the list of high-risk AI systems to the list of prohibited AI systems. This provision has also received some modifications as to its specific scope.

Changes in high-risk AI systems

Major changes have also been made to high-risk AI systems, and some new categories have been classified as such. Compared with the previous drafts, it is no longer sufficient for an AI system to fall within a category listed in Annex III to the AI Act; to be classified as high-risk, it must additionally pose a significant risk of harm to health, safety, fundamental rights or the environment (the Commission will be required to issue appropriate guidance to clarify these criteria). Conversely, if the provider of an AI system covered by Annex III considers that its system does not pose a significant risk, it will have to submit an appropriate justification to the national supervisory authority.

The systems and categories listed in Annex III have also been modified and extended. The scope of the existing classification has been broadened and further clarified, adding, among others, AI systems intended to:

  • infer personal characteristics of natural persons on the basis of biometric or biometrics-based data (including emotion recognition systems);
  • be used as safety components in the management and operation of the supply of water, gas, heating and electricity, and of critical digital infrastructure;
  • be used to influence the outcome of elections or referendums, or the voting behaviour of natural persons;
  • be used by social media platforms (designated as very large online platforms within the meaning of the Digital Services Act) in their recommender systems that recommend user-generated content to users of the platform.

The specific obligations imposed in relation to high-risk AI systems concerning risk management systems, data and data governance, technical documentation and record-keeping have also changed. Notably, the deadline for reporting serious incidents in the operation of such systems to the national supervisory authorities has been reduced from 15 days to 3 days. In addition, Parliament introduced an obligation to carry out a fundamental rights impact assessment before putting a high-risk AI system into use.

Foundation models and generative AI

Parliament decided to incorporate into the draft AI Act the concept of so-called foundation models, which the original Commission draft did not address at all. They take over much of the burden of the obligations that the Council proposal attached to general purpose AI systems. In Parliament's position, a foundation model is "an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks".

According to Parliament's position, before placing a foundation model on the market or putting it into service, whether it is made available as a stand-alone model, embedded in an AI system or another product, provided under an open-source licence, offered as a service, or distributed through other channels, the provider should, among other things:

  • identify, reduce and mitigate the reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law that such a model may pose;
  • process and incorporate into the model only datasets subject to appropriate data governance measures;
  • design and develop the model to achieve appropriate levels of performance, predictability, interpretability, consistency, safety and cybersecurity;
  • design and develop the model, making use of applicable standards, to reduce energy use and resource consumption and to increase energy efficiency and the overall efficiency of the system;
  • draw up comprehensive technical documentation and intelligible instructions for use, enabling downstream providers to fulfil all their obligations under the AI Act.

In addition, providers of foundation models used as generative AI systems should also:

  • fulfil transparency obligations;
  • train (and, where applicable, design and develop) their models in such a way as to ensure adequate safeguards against generating content that infringes EU law;
  • document and make publicly available a sufficiently detailed summary of the use of copyrighted training data.

Other amendments

In addition to the proposed amendments to the definition of AI and the new rules for classifying these systems, Parliament proposed changes in other areas, including:

  • penalties for failure to comply with obligations under the AI Act, which Parliament considers should be increased;
  • the date of application of the AI Act (i.e. two years after entry into force; the previous drafts assumed three years);
  • the requirement to build and deploy all AI systems and foundation models on the basis of trustworthy AI (Trustworthy AI);
  • ensuring the right to request a clear and meaningful explanation of the role of an AI system in the decision-making process.

Further work

The position adopted by the joint IMCO and LIBE committees is not yet Parliament's official position on the AI Act. It is likely to be adopted at the next plenary session of Parliament, scheduled for 12-15 June, and should not deviate from the joint committees' text. Parliament's official adoption of its position will in turn open the door to the trilogues, which will bring us closer to the final text of the future EU regulation.

[1]https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf

[2]https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01a75ed71a1.00.02/DOC_1&format=PDF

[3]https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf

[4]https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449#mainText

[5]https://www.congress.gov/bill/116th-congress/house-bill/6216/text#toc-H41B3DA72782B491EA6B81C74BB00E5C0
