EU Artificial Intelligence Act (EU AI Act)

The EU AI Act promotes the responsible use of AI to ensure the highest level of protection of the rights, freedoms, and safety of consumers against the harmful effects of AI within the EU.

Who is subject to the EU AI Act?

The EU AI Act applies to:

  • Providers of AI systems and/or general-purpose AI models within the EU, irrespective of whether those providers are established or located within the EU or in a third country;
  • Deployers of AI systems who are established or are located within the EU;
  • Providers and deployers of AI systems who are established or are located in a third country, and where the output produced by the AI system is used in the EU;
  • Importers and distributors of AI systems;
  • Product manufacturers placing on the market or putting into service an AI system, together with their product and under their own name or trademark;
  • Authorized representatives of providers not established in the EU; and
  • AI systems released under free and open-source licenses, but only where they are placed on the market or put into service as high-risk AI systems (such systems are otherwise exempt).

Key obligations of the EU AI Act

Responsibilities for high-risk AI systems

Providers and/or deployers of high-risk AI systems have heightened responsibilities. They must establish and maintain a risk management system to identify and analyze risks associated with using the system. Human overseers must be assigned to supervise the system's use and activities. Providers must retain documentation regarding the system, including the logs it automatically generates, and draw up an EU declaration of conformity.
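
As an illustration of the record-keeping obligation, the sketch below shows one way a provider might structure automatically generated logs. The Act does not prescribe a log schema or API; the class and field names here (AuditLogger, record_inference, human_overseer) are hypothetical assumptions for this example.

```python
import json
import datetime


class AuditLogger:
    """Appends one JSON line per inference so logs can be retained and reviewed."""

    def __init__(self, path: str):
        self.path = path

    def record_inference(self, system_id: str, input_summary: str,
                         output_summary: str, overseer: str) -> None:
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "system_id": system_id,
            "input_summary": input_summary,    # summarize; avoid logging raw personal data
            "output_summary": output_summary,
            "human_overseer": overseer,        # who is assigned oversight of this system
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")


logger = AuditLogger("high_risk_system.log")
logger.record_inference("credit-scoring-v2", "applicant features (hashed)",
                        "score=0.42, declined", overseer="j.doe")
```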

Post-market monitoring

AI providers shall establish and document a post-market monitoring system based on a post-market monitoring plan. This system systematically gathers relevant data on the performance of the high-risk AI system throughout its lifetime, enabling the provider to evaluate the system's continued compliance with the EU AI Act.
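
A minimal sketch of what one such monitoring check might look like, assuming accuracy as the tracked metric and a 0.05 drift tolerance; both are illustrative choices, since the Act requires a documented plan but does not prescribe specific metrics or thresholds.

```python
from statistics import mean


def drift_alert(baseline_accuracy: float, recent_accuracies: list[float],
                tolerance: float = 0.05) -> bool:
    """Return True when live performance falls materially below the baseline."""
    return mean(recent_accuracies) < baseline_accuracy - tolerance


# Weekly accuracy measured in production vs. the value documented at conformity
# assessment time; a True result would trigger review under the monitoring plan.
print(drift_alert(0.91, [0.88, 0.84, 0.82]))  # True -> investigate
```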

Prohibited AI practices

AI systems, including general-purpose and high-risk AI systems, shall not be used for specific prohibited practices. These include using biometric categorization systems to infer sensitive attributes (such as race, political opinions, or sexual orientation) from individuals' biometric data; creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage; assessing individuals to predict the likelihood of their committing a criminal offense; and inferring the emotions of individuals in workplaces and educational institutions.

Transparency obligations to AI deployers and consumers

Providers of any form of AI model must clearly disclose to consumers when they are interacting with, or viewing content generated by, an AI system, and must issue clear instructions to AI deployers that enable them to interpret the model's output and use it appropriately.
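
One way such a disclosure might be implemented, assuming a chat-style interface; the notice text and mechanism below are design choices for illustration, not wording prescribed by the regulation.

```python
AI_DISCLOSURE = "You are interacting with an AI system; responses are machine-generated."


def respond(model_reply: str, first_turn: bool) -> str:
    """Prefix the first reply in a session with a clear AI disclosure."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply


print(respond("Your application is under review.", first_turn=True))
```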

Reporting of serious incidents

Providers of high-risk AI systems shall report any serious incident to the relevant market surveillance authorities of the Member States where the incident occurred. The report shall be made immediately after the provider has established a causal link between the AI system and the serious incident, and no later than 15 days after the provider and/or deployer becomes aware of the incident.
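
The 15-day outer limit can be made concrete with a simple date calculation; the dates below are illustrative.

```python
from datetime import date, timedelta

aware_on = date(2025, 3, 3)             # provider becomes aware of the incident
deadline = aware_on + timedelta(days=15)
print(f"Latest report date: {deadline}")  # 2025-03-18
```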

Enforcement

Non-compliance with the prohibition of AI practices is subject to administrative fines of up to EUR 35,000,000 or up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliance with transparency requirements and responsibilities along the AI value chain is subject to administrative fines of up to EUR 15,000,000 or up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher.
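
A worked example of the "whichever is higher" rule for prohibited-practice fines: the applicable cap is the greater of the fixed amount and the turnover percentage.

```python
def max_fine(turnover_eur: float, fixed_cap: float = 35_000_000,
             pct: float = 0.07) -> float:
    """Cap is the greater of the fixed amount and pct of worldwide turnover."""
    return max(fixed_cap, pct * turnover_eur)


print(max_fine(2_000_000_000))  # 7% of 2bn = 140,000,000 -> percentage prevails
print(max_fine(100_000_000))    # 7% = 7,000,000 -> fixed cap of 35,000,000 prevails
```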

Conditions to test high-risk AI models outside of regulatory sandboxes

Providers may conduct tests in real-world conditions only where certain conditions have been met, including, but not limited to, the following (a minimal consent-handling sketch follows the list):

  • the provider has drawn up a real-world testing plan, submitted it to the market surveillance authority in the Member State where the testing will take place, and the plan has been approved;
  • the testing does not last longer than necessary to achieve its objectives and, in any case, no longer than six months;
  • the subjects participating in the testing have given informed consent, may revoke that consent at any time without providing justification, and all of their personal data must be deleted promptly after revocation; and
  • the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded.
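
As referenced above, a minimal sketch of how consent revocation and prompt data deletion might be handled; the registry class and its methods are hypothetical assumptions, not an API defined by the Act.

```python
class TestSubjectRegistry:
    """Tracks consenting test subjects and deletes their data on revocation."""

    def __init__(self):
        self._consented: dict[str, dict] = {}  # subject_id -> personal data

    def give_consent(self, subject_id: str, personal_data: dict) -> None:
        self._consented[subject_id] = personal_data

    def revoke(self, subject_id: str) -> None:
        """Revocation requires no justification; delete the subject's data promptly."""
        self._consented.pop(subject_id, None)

    def has_consent(self, subject_id: str) -> bool:
        return subject_id in self._consented


registry = TestSubjectRegistry()
registry.give_consent("s-001", {"name": "example"})
registry.revoke("s-001")                # data removed immediately on revocation
print(registry.has_consent("s-001"))    # False
```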

FAQs

  • How are high-risk systems classified?

    AI systems are classified as high-risk if either of the following applies (a minimal sketch of this test follows the FAQs):

    • the AI system is intended to be used as a safety component of a product, or is itself a product, that is required to undergo a third-party conformity assessment under EU harmonization legislation; or
    • the AI system is used in one of the following areas:
      • biometrics;
      • critical infrastructure;
      • education and vocational training;
      • employment, workers management and access to self-employment;
      • access to and enjoyment of essential private services, and essential public services and benefits;
      • law enforcement;
      • migration, asylum and border control management; and
      • administration of justice and democratic processes.
  • What happens if I place an AI system on the market before the date of enforcement of the EU AI Act?

    AI systems placed on the market or put into service in the 36 months prior to the Act's date of entry into force must be brought into compliance with the EU AI Act by December 31, 2030.

  • Does the EU AI Act provide data subjects with data rights?

    Yes. Any affected consumer subject to a decision taken by an AI deployer on the basis of output from a high-risk AI system has the right to obtain from the AI provider and/or deployer clear and meaningful explanations of:

    • the role of the AI system in the decision-making procedure; and
    • the main elements of the decision taken.
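
As referenced in the first FAQ, a minimal sketch of the two-pronged high-risk test; the area labels and boolean inputs are simplified assumptions for illustration, not the full legal test.

```python
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration and border control", "administration of justice",
}


def is_high_risk(is_safety_component: bool,
                 needs_third_party_assessment: bool,
                 area: str | None) -> bool:
    """High-risk if (safety component AND third-party assessment) OR an Annex III area."""
    prong_one = is_safety_component and needs_third_party_assessment
    prong_two = area in ANNEX_III_AREAS if area else False
    return prong_one or prong_two


print(is_high_risk(False, False, "employment"))  # True via the Annex III prong
```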

The information provided does not, and is not intended to, constitute legal advice. Instead, all information, content, and materials presented are for general informational purposes only.
