Publication of the AI Act in the Official Journal of the EU

On 12 July 2024 the European Union published Regulation (EU) 2024/1689, establishing a harmonised regulatory framework for artificial intelligence (AI): the AI Act. The publication came more than three years after the European Commission presented its original proposal in April 2021. The AI Act enters into force on 1 August 2024 and aims to promote trustworthy AI while ensuring a high level of protection of health, safety and fundamental rights in the EU.


The need for harmonized artificial intelligence regulations

Artificial intelligence (AI) is a rapidly growing field of technology that brings numerous economic, environmental and social benefits to many industrial sectors and areas of social activity. At the same time, AI can also generate risks to the public interest and to fundamental rights protected by EU law (such as the right to human dignity and the freedom of assembly, among others). In order to ensure a high level of safety and improve the functioning of the internal market, common rules have been established for the development, placing on the market, putting into service and use of AI systems in the EU, and these rules were published as the AI Act.

Scope of application

The EU AI Act, taking a risk-based approach, defines requirements for AI systems and imposes obligations on:

  • providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU (regardless of whether those providers are established or located within the EU or in a third country);
  • deployers of AI systems that have their place of establishment or are located within the EU;
  • providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the EU;
  • importers and distributors of AI systems;
  • product manufacturers who place on the market or put into service AI systems together with their product and under their own name or trademark;
  • authorised representatives of non-EU established providers;
  • affected persons that are located in the EU.

The AI Act also provides for certain exemptions; it does not apply, among other things, to AI systems used exclusively for military, defence and national security purposes.

What is an AI System?

Some technologies often associated with the concept of artificial intelligence may not fall within the defined term “AI system” and therefore fall outside the scope of the regulation. The definition of an AI system has seen many changes and has evolved considerably since the original text was published. In accordance with Article 3(1) of the AI Act, an ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Risk based approach

Depending on the risks associated with the use of an AI system, the following levels of risk can be distinguished:

  • unacceptable risk – AI systems that pose unacceptable risks to individuals and their fundamental rights, e.g. an AI system that uses manipulative techniques to impair elderly people’s ability to make informed decisions;
  • high risk – high-risk AI systems such as remote biometric identification systems or AI systems intended to be used for recruitment;
  • limited risk – AI systems subject to specific transparency obligations, lighter than those imposed on high-risk systems, e.g. chatbots, deepfakes;
  • minimal risk – other AI systems not belonging to the above categories, with minimal impact on individuals, such as email spam filters.
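The four tiers above can be pictured as a simple classification scheme. The following Python sketch is purely illustrative: the tier names come from the list above, but the example use cases and the keyword-style lookup are our own simplification; real classification requires legal analysis of the AI Act (Article 5, Article 6 and Annex III), not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited AI practices
    HIGH = "high"                  # strict requirements and conformity assessment
    LIMITED = "limited"            # specific transparency obligations
    MINIMAL = "minimal"            # no additional obligations under the AI Act

# Illustrative mapping only, based on the examples given in the list above.
EXAMPLE_TIERS = {
    "manipulation of vulnerable persons": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "recruitment screening": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Return the example tier for a known use case (illustration only)."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

print(tier_for("chatbot").value)  # limited
```

The point of the sketch is the ordering of the tiers, not the lookup itself: obligations increase with the tier, and the unacceptable tier means the practice is prohibited outright.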

Deepfake

Deployers of AI systems that generate or manipulate image, audio or video content constituting a deepfake* have an obligation to disclose that such content has been artificially generated or manipulated. This obligation does not apply where the use is authorised by law for the detection, prevention, investigation or prosecution of criminal offences. If such content forms part of an artistic, creative, satirical, fictional or similar work or programme, the obligation is limited to adequately informing the public of the existence of such generated or manipulated content, in a manner that does not hamper the display or enjoyment of the work.

*‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.

Critical dates

12 July 2024 – AI Act published in the Official Journal of the European Union,

1 August 2024 – entry into force of the AI Act,

2 August 2024 – Member States shall make publicly available information on how competent authorities and single points of contact can be contacted, through electronic communication means,

2 February 2025 – application of general provisions (chapter I) and provisions on prohibited AI practices (chapter II),

2 August 2025 – application of provisions on notifying authorities and notified bodies, general-purpose AI models, governance, penalties (except fines for providers of general-purpose AI models) and confidentiality,

2 August 2026 – general application of the AI Act,

2 August 2027 – application of classification rules for high-risk AI systems and the corresponding obligations.
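The staggered timeline above can be captured as a simple chronological lookup. The sketch below uses only Python's standard library; the milestone labels are shorthand for the entries listed above, not official citations of the AI Act.

```python
from datetime import date

# Shorthand labels for the milestones listed above (not official citations).
MILESTONES = {
    date(2024, 7, 12): "publication in the Official Journal",
    date(2024, 8, 1):  "entry into force",
    date(2024, 8, 2):  "Member States publish contact details of competent authorities",
    date(2025, 2, 2):  "general provisions and prohibited AI practices apply",
    date(2025, 8, 2):  "GPAI model, governance and penalty provisions apply",
    date(2026, 8, 2):  "general application of the AI Act",
    date(2027, 8, 2):  "high-risk classification rules and corresponding obligations apply",
}

def milestones_reached(as_of: date) -> list[str]:
    """List every milestone that has already taken effect on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= as_of]

print(milestones_reached(date(2025, 3, 1)))
```

For example, on 1 March 2025 the prohibitions on unacceptable-risk practices already apply, while the bulk of the Act does not apply until 2 August 2026.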

Conformity assessment of medical devices containing AI system

According to the general rule of the New Legislative Framework, more than one legal act of Union harmonisation legislation can apply to a single product; such a product can only be made available if it complies with all applicable Union harmonisation legislation. Medical devices and in vitro diagnostic medical devices incorporating an AI system may present risks that are not addressed by the EU medical device legislation (regulations (EU) 2017/745 and (EU) 2017/746, respectively), as these regulations do not address risks specific to AI systems. This means that manufacturers of such devices are required to comply with the requirements of both regulation (EU) 2017/745 or (EU) 2017/746 and the AI Act.

In order to ensure consistency and reduce costs, providers of medical devices containing one or more high-risk AI systems should have flexibility in how they ensure compliance of such devices with all applicable requirements of Union harmonisation legislation. This flexibility could, for example, mean that the provider of an AI-based device decides to incorporate some of the testing and reporting processes, information and documentation required by the AI Act into the documentation and procedures already required by regulation (EU) 2017/745 or (EU) 2017/746. This should not in any way undermine compliance with all applicable requirements. Manufacturers should also remember to ensure, in their contracts with data providers, that datasets can be made available to the relevant authorities.

Notified bodies will be assessed by the competent authorities for their expertise in the field of artificial intelligence and, following a positive assessment, will be able to issue CE certificates in accordance with the applicable conformity assessment procedures. The audits carried out as part of the conformity assessment procedure will take into account both the requirements of regulation (EU) 2017/745 or (EU) 2017/746 and the requirements of the AI Act, e.g. data governance procedures, logging methods and human oversight.

Penalties for non-compliance

The following penalties are provided for non-compliance with AI Act obligations:

  • up to 35 million euros or up to 7% of the total worldwide annual turnover for the preceding financial year (whichever is higher) for non-compliance with the prohibition of AI practices;
  • up to 15 million euros or up to 3% of the total worldwide annual turnover for the preceding financial year (whichever is higher) for non-compliance with provisions related to the obligations of operators* or notified bodies;
  • up to 7.5 million euros or up to 1% of the total worldwide annual turnover for the preceding financial year (whichever is higher) for supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities.

*‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor.
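The “whichever is higher” rule in the list above is simple arithmetic: the upper bound of the fine is the maximum of the fixed cap and the turnover-based cap. The Python sketch below illustrates this; the tier keys and the turnover figures are our own illustrative shorthand, not terms from the regulation.

```python
# (fixed cap in EUR, share of total worldwide annual turnover) per the list above.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "operator_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the turnover share."""
    fixed_cap, pct = PENALTY_TIERS[tier]
    return max(fixed_cap, pct * annual_turnover_eur)

# Example: an undertaking with EUR 1 billion worldwide annual turnover.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```

For large undertakings the turnover-based cap dominates (7% of EUR 1 billion is EUR 70 million, well above the EUR 35 million fixed cap), while for smaller ones the fixed cap sets the ceiling.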

In addition, fines are also set for providers of general-purpose AI models. If such providers have infringed the provisions of the AI Act, failed to provide relevant information or documents upon the Commission’s request or provided them in an incorrect or misleading manner, failed to comply with the Commission’s requests, or failed to make access to the general-purpose AI model available to the Commission, they may be subject to fines of up to 3% of their total annual worldwide turnover in the preceding financial year or 15 million euros (whichever is higher).

In deciding whether to impose an administrative fine, and in determining the amount of the fine in each individual case, all relevant circumstances of the situation shall be taken into account, such as, among others, the nature, gravity and duration of the infringement and its consequences, fines imposed by other authorities for the same infringement, or the manner in which the national competent authorities became aware of the infringement.

The penalties provided for must be effective, proportionate and dissuasive; at the same time, they must take into account the interests of small and medium-sized enterprises (including start-ups) and their economic situation.

Conclusion

The AI Act is a comprehensive legal framework aimed at regulating the development and use of AI systems in the EU, following an approach based on the risks associated with their application. During the work on the document, many changes were introduced to enable the use of the conformity assessment procedures specified in the regulations on medical devices (regulation (EU) 2017/745 or (EU) 2017/746) to demonstrate compliance with the AI Act. However, many challenges regarding the implementation of certain requirements remain.

According to the information available on the European Commission website, the Medical Devices Coordination Group (MDCG) plans to issue guidelines entitled “FAQ on Interplay between MDR/IVDR and AIA” to facilitate the understanding of the interaction between the MDR/IVDR and the AI Act (planned MDCG endorsement time: Q4 2024). However, manufacturers of medical devices incorporating AI systems should already review and/or update the technical documentation of their devices and their quality management systems to meet the applicable requirements of the AI Act.

Find out more: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
