
EU AI Act and Conformity Assessment Overview

May 10, 2025

Conformity Assessment under the EU AI Act General Approach

Overview

  • EU AI Act: Harmonized rules on artificial intelligence, proposed by the European Commission on April 21, 2021.
  • General Approach: The Council published its General Approach on November 25, 2022, after consultative processes and amendments.
  • Approval: The European Parliament approved its initial draft position in May 2023.
  • Trilogue Meetings: Held in June, July, September, and October 2023, with final adoption targeted for early 2024.
  • Objective: Establish a legally binding AI regulatory framework akin to GDPR, with global implications.

Key Concepts

  • Extraterritorial Effect: Similar to GDPR, the EU AI Act is designed to have a global impact, potentially setting a global standard for AI regulation.
  • AI Trustworthiness: The Act aims to foster global consensus on AI trustworthiness.

Conformity Assessment

  • High-Risk AI Systems: AI providers are required to conduct conformity assessments for high-risk AI systems before entering the EU market.
  • Guidelines: The Act provides limited practical guidance on how to conduct conformity assessments and ex-post monitoring.
  • Need for Consensus: The paper stresses the need to build consensus on how to conduct conformity assessments effectively.

Governance Structure

  • Proposed by the EU AI Act: The proposed governance structure was approved by the Council as part of its General Approach in November 2022.
  • Tools and Methods: The paper proposes tools and methods for conducting conformity assessments of AI systems.

References and Resources

  • Referenced articles and resources provide additional context and analysis related to the EU AI Act, including works by Katerina Demetzou and Floridi et al.

Authors

  • Eva Thelisson: Affiliated with AI Transparency Institute, Lausanne, Switzerland.
  • Himanshu Verma: Part of the Knowledge and Intelligence Design Group at TU Delft, Netherlands.


Additional Insights

  • The paper indicates that the EU AI Act could influence global AI standards and emphasizes the importance of structured conformity assessments to ensure the trustworthiness of AI systems.

Further Reading and Resources

  • OECD AI Principles: Identify key principles for AI trustworthiness, such as fairness, transparency, contestability, and accountability.
  • Related articles and preprints, such as Zicari et al. on assessing trustworthy AI in practice and related ethical auditing approaches.

These notes provide a concise overview of the EU AI Act's approach to conformity assessment, its potential global impact, governance structure, and the key elements required for ensuring AI trustworthiness.