
Artificial Intelligence Act: What can we learn from it?

Artificial Intelligence can bring a wide array of economic and societal benefits but also generate new risks for individuals. The Artificial Intelligence Act presents a risk-based regulatory approach to AI across the EU without unduly constraining or hindering technological development.

01

What is the Artificial Intelligence Act?

Main objectives

  • Ensure that AI systems available on the European market are safe and respect the fundamental rights of citizens and the values of the EU.
  • Ensure legal certainty to facilitate investment and innovation in AI.

  • Improve governance and effective enforcement of existing legislation on fundamental rights and safety requirements for AI systems.

  • Facilitate the development of a single market for safe, legal, and trustworthy AI applications, and prevent market fragmentation.

Which Companies are Concerned?

  • Providers who place AI systems on the market or put them into service in the European Union, whether they are established in the Union or in a third country (extraterritorial reach);  

  • Deployers of AI systems who are located in the EU;  

  • Providers and deployers of AI systems located in another country if the results generated by the system are intended for use in the EU (extraterritorial reach);  

  • Importers and distributors of AI systems.  

 

Fines 

Up to 7% of total worldwide annual turnover or €35M, whichever is higher, depending on the violation found. Member States are responsible for designing their own sanctions regimes. 
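
As a worked illustration of how the cap applies: the maximum fine is the higher of the fixed amount and the turnover percentage, with lower tiers for lesser violations. The sketch below is a hedged simplification; the tier figures reflect the Act's commonly cited penalty maxima and should be verified against the final text for any specific case.

```python
# Illustrative sketch of the AI Act's penalty-cap logic: the maximum
# fine is the HIGHER of a fixed amount and a share of total worldwide
# annual turnover. Tier figures are the commonly cited maxima; verify
# against the final text before relying on them.

PENALTY_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # Article 5 violations
    "other_obligations":     (15_000_000, 0.03),  # e.g. high-risk system duties
    "incorrect_information": (7_500_000, 0.01),   # misleading info to authorities
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the theoretical maximum fine for a given violation tier."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a company with EUR 2bn turnover engaging in a prohibited practice
# faces a cap of max(35M, 7% x 2bn) = EUR 140M.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```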


Supervisory Authorities

  • European AI Office, established within the European Commission  

  • National competent authorities, to be created or designated by each Member State   

 

Associated Regulations

The AI Regulation is part of the European data regulation package and is therefore linked to the DSA, DGA, DMA, etc. but also to the GDPR and the recent proposal for an AI Liability Directive. 

Timeline

Key dates

  • 13 March 2024: Parliamentary vote on the text  

  • 21 May 2024: Council of the EU approval  

  • 12 July 2024: Publication in the Official Journal of the EU  

  • 1 August 2024: Entry into force 

 

Progressive deadlines for compliance

  • 2 February 2025: Prohibitions on AI systems posing unacceptable risk take effect (6 months after entry into force)  

  • 2 August 2025: Obligations take effect for providers of general-purpose AI models. Appointment of Member State competent authorities. Annual Commission review of the list of prohibited AI, with potential amendments (12 months after entry into force) 

  • 2 February 2026: Commission implementing act on post-market monitoring (18 months after entry into force) 

  • 2 August 2026: Obligations take effect for high-risk AI systems listed in Annex III. Member States must have implemented rules on penalties, including administrative fines, and established at least one operational AI regulatory sandbox. Commission review, and possible amendment, of the list of high-risk AI systems (24 months after entry into force) 

  • 2 August 2027: Obligations go into effect for high-risk AI systems that are intended to be used as a safety component of a product. Obligations go into effect for high-risk AI systems in which the AI itself is a product and the product is required to undergo a third-party conformity assessment under existing specific EU laws (36 months after entry into force)  

  • By the end of 2030: Obligations go into effect for certain AI systems that are components of the large-scale information technology systems established by EU law in the areas of freedom, security and justice, such as the Schengen Information System.  
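
Because every deadline above is defined as a month offset from the entry-into-force date, the schedule can be reproduced mechanically. A minimal sketch (dates are taken from the schedule above; each obligation applies from the day after the N-month anniversary of entry into force, hence the "2 February" and "2 August" dates):

```python
# Minimal sketch: derive the AI Act's staggered compliance dates from
# the entry-into-force date (1 August 2024) plus the month offsets
# named in the regulation. Obligations apply from the day AFTER the
# N-month anniversary, so we anchor on 2 August 2024.
from datetime import date

APPLICATION_BASE = date(2024, 8, 2)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month preserved)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

MILESTONES = {
    6:  "Prohibitions on unacceptable-risk AI take effect",
    12: "GPAI provider obligations; Member State authorities appointed",
    18: "Commission implementing act on post-market monitoring",
    24: "Annex III high-risk obligations; penalties; sandboxes",
    36: "High-risk obligations for safety components / regulated products",
}

for offset, label in MILESTONES.items():
    print(add_months(APPLICATION_BASE, offset).isoformat(), "-", label)
# 2025-02-02 - Prohibitions on unacceptable-risk AI take effect
# 2025-08-02 - GPAI provider obligations; Member State authorities appointed
# ...
```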

02

Focus & Impacts

Key Focus

Risk-based classification of systems 

  • Prohibition of AI systems presenting unacceptable risks (Article 5).

  • Increased obligations for high-risk AI systems.

  • Less extensive obligations for general-purpose AI systems that do not pose systemic risks and for systems interacting with humans.

  • Fundamental rights impact assessments to be carried out for certain high-risk systems.
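
The tiered logic above lends itself to a simple decision structure. The following is a hypothetical sketch only: the enum names and triage questions are illustrative simplifications, not the regulation's own classification procedure.

```python
# Hypothetical sketch of the AI Act's risk tiers as a data structure.
# Tier names and triage questions are illustrative, not the Act's own
# decision procedure.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk - banned outright (Article 5)"
    HIGH = "high risk - full compliance regime (Annex III etc.)"
    LIMITED = "specific transparency duties (e.g. chatbots, deep fakes)"
    MINIMAL = "no new obligations; voluntary codes of conduct"

def classify(is_prohibited_practice: bool,
             is_annex_iii_or_safety_component: bool,
             interacts_with_humans_or_generates_content: bool) -> RiskTier:
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_annex_iii_or_safety_component:
        return RiskTier.HIGH
    if interacts_with_humans_or_generates_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening tool (an Annex III employment use case) is high risk.
print(classify(False, True, True))  # RiskTier.HIGH
```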

Responsibility of providers and deployers

  • Deployers are now responsible for how they use AI systems: they must implement human oversight to ensure the system is used responsibly and to address any issues, and the data fed into the system must be relevant and up to date (a minimal oversight gate is sketched after this list).  

  • Providers must ensure their AI systems comply with the AI Act requirements, such as maintaining detailed technical documentation and offering clear information on the system's capabilities, limitations, and performance. 
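
One common way deployers operationalize the human-oversight duty is a review gate that keeps a person in the loop for low-confidence or consequential outputs. The sketch below is a hypothetical illustration; the threshold and function names are assumptions, not anything prescribed by the Act.

```python
# Hypothetical human-oversight gate for a deployed AI system: automated
# outputs below a confidence threshold are routed to a human reviewer
# instead of being applied automatically.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float  # model-reported confidence in [0, 1]

CONFIDENCE_THRESHOLD = 0.90  # illustrative value, set per risk assessment

def apply_with_oversight(decision: Decision, human_review) -> str:
    """Apply an AI decision only if confident; otherwise defer to a human."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return decision.outcome
    return human_review(decision)  # human makes the final call

# Example: a low-confidence decision is escalated to a reviewer.
escalated = apply_with_oversight(
    Decision("applicant-42", "reject", 0.61),
    human_review=lambda d: "manual review required",
)
print(escalated)  # manual review required
```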

Harmonization of internal market

  • Creation of a European-wide AI Office and national control authorities to ensure legal certainty by verifying the effective implementation of the regulation, and by sanctioning bad practices.  

  • Registration of high-risk AI systems in a European database.

  • Obtaining a CE marking will be necessary before placing a high-risk AI system on the market.   

Regulatory Sandboxes

  • National authorities may establish regulatory sandboxes that offer a controlled environment for testing innovative technologies for a limited time. These sandboxes are based on a test plan agreed with the relevant authorities to ensure the compliance of the innovative AI system and to accelerate market access. SMEs and start-ups can have priority access to them.

Impacts

Prohibited Artificial Intelligence Practices

AI systems that contravene the values of the European Union by violating fundamental rights are prohibited, such as:

  • Subliminal manipulation of behavior;
  • Exploiting the vulnerabilities of certain groups to distort their behavior;
  • AI-based social rating for general purposes by public authorities;
  • The use of "real-time" remote biometric identification systems in publicly accessible spaces for law enforcement (with exceptions);
  • Manipulative "dark pattern" AI. 

High-risk Systems (defined and listed by the EU Commission)

Companies are subject to several obligations related to documentation, risk management systems, governance, transparency, or safety, depending on their status (provider, deployer, distributor, importer, and other third parties). These systems must also be registered with the EU and bear a CE mark. 

Specific Risk Systems

These are systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). For these systems, there is an obligation to disclose whether the content is generated through automated means or not.
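
For the disclosure duty on generated content, one straightforward pattern is to attach a machine-readable label to every artifact the system produces. The sketch below is hypothetical: the Act mandates the disclosure, not this particular schema, and all field names are illustrative.

```python
# Hypothetical disclosure wrapper: every piece of generated content
# carries a machine-readable label stating that it was produced by
# automated means, plus minimal provenance. Field names are
# illustrative; the AI Act requires disclosure, not this schema.
import json
from datetime import datetime, timezone

def label_generated_content(content: str, model_name: str) -> str:
    record = {
        "content": content,
        "ai_generated": True,                       # the disclosure itself
        "generator": model_name,                    # provenance
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(label_generated_content("A photorealistic portrait...", "image-gen-v2"))
```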

Non-High-Risk Systems

Voluntary creation and enforcement of a code of conduct that may include commitments to environmental sustainability, accessibility for people with disabilities, stakeholder participation in AI system design and development, and development team diversity.

Focus: General Purpose AI Systems

General purpose AI systems are AI systems that have a wide range of possible uses, both for direct use and for integration into other AI systems. They can be applied to many different tasks in various fields, often without substantial modification and fine-tuning. Unlike narrow AI, which is specialized for specific tasks, general-purpose AI can learn, adapt, and apply knowledge to new situations, demonstrating versatility, autonomy, and the ability to generalize from past experiences.  

 

Impact of the AI Act on General Purpose AI systems:  

  • Codes of practice will be established at the European Union level to guide providers in applying the rules regarding general-purpose artificial intelligence (GPAI) models.   

  • Systemic risk: A GPAI model represents a systemic risk if it is found to have high-impact capabilities based on appropriate technical tools and methodologies, including indicators and benchmarks (see the compute-threshold sketch after this list).   

  • Specific obligations regarding systemic risks:  

    • Notify the European Commission in the event of systemic risk and mitigate such risks wherever possible.  
    • Perform model evaluation in accordance with standardized protocols and tools.  
    • Report serious incidents to national authorities and the AI Office. 
    • Cybersecurity protection. 
  • Providers' obligations:  

    • Develop and maintain model technical documentation and share it with downstream providers who wish to integrate the GPAI model into their own systems.  

    • Establish a policy to comply with EU copyright law.  

    • Publish a detailed summary on the content used for training the GPAI model (based on the model provided by the AI Office).
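
For the "high-impact capabilities" test, the Act presumes systemic risk once cumulative training compute exceeds 10^25 floating-point operations, a threshold the Commission may adjust over time. A minimal sketch of that presumption:

```python
# Minimal sketch of the AI Act's systemic-risk presumption for GPAI:
# a model is presumed to have high-impact capabilities when cumulative
# training compute exceeds 10^25 floating-point operations (a threshold
# the Commission may adjust over time).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a frontier-scale training run of ~5e25 FLOPs trips the presumption.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(1e24))  # False
```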

03

How are High-Risk Systems Impacted?

Types of High-risk Systems

This includes the safety component of a product, or a product requiring a third-party conformity assessment under existing regulations (Directive 2009/48/EC on the safety of toys, Regulation (EU) 2016/424 on cableway installations, etc.).

It also includes products listed in Annex III:

  • Biometric identification and categorization of humans
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management, and access to self-employment
  • Access to, and enjoyment of, essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Administration of justice and democratic processes

System Requirements

  • Risk Management System

Continuous iterative process run throughout the entire lifecycle of a high-risk AI system (identification, evaluation of risks, and adoption and testing of risk management measures)

  • Accuracy, Robustness, and Cybersecurity

Implementation of measures and information in the instructions

  • Human Oversight

Ensure human oversight throughout the period in which the AI system is in use

  • Transparency and Provision of Information to Users

Transparent design & instructions for users

  • Record-keeping

Design and development with capabilities enabling events to be recorded automatically (a logging sketch follows this list)

  • Technical Documentation

Demonstration of high-risk AI system compliance with requirements

  • Data and Data Governance

Training, validation, and testing data sets must meet quality criteria
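
In practice, the record-keeping requirement above means instrumenting the system so that each inference is logged automatically for later audit and post-market monitoring. The sketch below is hypothetical; the log fields are illustrative, not a prescribed schema.

```python
# Hypothetical automatic event logging for a high-risk AI system:
# every prediction is recorded with a timestamp, input fingerprint,
# output, and model version. The field set is illustrative, not a
# prescribed schema.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO)

def log_inference(model_version: str, input_payload: str, output: str) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than raw input, to keep personal data out of logs.
        "input_sha256": hashlib.sha256(input_payload.encode()).hexdigest(),
        "output": output,
    }
    logging.info(json.dumps(event))

log_inference("credit-scorer-1.4.2", "applicant features ...", "score=0.73")
```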

Focus on High-Risk Systems

Obligations for Providers

  • General requirements: ensure that the system is compliant; take the necessary corrective actions if the high-risk AI system is not compliant.

  • Processes: maintain a quality management system (strategy, procedures, resources, etc.); write the technical documentation; carry out the conformity assessment, EU declaration of conformity, and CE marking; design and develop systems with automatic event-logging capabilities; maintain the logs automatically generated by the system; establish and document a post-market surveillance system.

  • Transparency & instructions: design transparent systems; draft the instructions for use.

  • Information & registration: inform the competent national authorities in case of risks to health, safety, or the protection of fundamental rights, or in case of serious incidents and malfunctions; register the system in the EU database.

Obligations for Distributors

  • General requirements: do not distribute a non-compliant high-risk system; if a non-compliant system is already on the market, take the necessary corrective actions; ensure that storage and transportation conditions do not compromise the system's compliance with requirements; verify that the system bears the required CE mark of conformity.

  • Processes: third-party verification that the provider and importer of the system have complied with the obligations set out in the regulation and that corrective action has been or is being taken.

  • Transparency & instructions: ensure that the AI system is accompanied by its operating instructions and the required documentation.

  • Information & registration: inform the provider or importer of a non-compliant high-risk system, as well as the competent national authorities.

Obligations for Users (Deployers)

  • General requirements: ensure the relevance of the data entered into the system; stop using the system if it presents risks to health, safety, or the protection of fundamental rights, or in the event of a serious incident or malfunction.

  • Processes: keep the logs automatically generated by the system, where they are under the user's control.

  • Transparency & instructions: use and monitor the system in accordance with the instructions for use that accompany it.

  • Information & registration: inform the provider or distributor, or the market surveillance authority if the provider cannot be reached, when the system presents risks to the health, safety, or fundamental rights of the persons concerned.
04

How can we help?

We rely on teams of highly complementary experts to offer you robust, reliable and effective compliance, in line with your strategic objectives, your use of AI and your internal processes and governance. Our consultants are committed to helping you manage your risks and create synergies as part of your AI projects. 

  • Compliance specialists 

  • Data Scientists  

  • Cybersecurity experts  

Over the years, we have built key enablers that accelerate projects: a modular AI governance framework, a benchmark of market best practices, system-mapping templates, code assessment automations, mature training modules, custom solutions tailored to proofs of concept, standard policies, charters, and procedures, etc. 

Thanks to our extensive experience supporting clients in AI governance and AI risk assessment, our team can quickly get to grips with your specific needs and interact effectively with your data scientists. 

Contact us to find out how we can help
