
EU Commission Opens Consultation on High-Risk AI Systems: Here’s What You Need to Know


The European Commission has launched a targeted public consultation on how to classify and regulate high-risk AI systems under the AI Act. Open until 18 July 2025, the consultation is a key step toward shaping the Commission’s upcoming guidelines, which will clarify how core obligations under the AI Act apply in practice.

For any organisation developing, deploying, or integrating AI in high-impact domains—particularly those covered by EU product safety legislation or operating in fundamental-rights use cases—this is the moment to speak up.

What Is This Consultation About?

The AI Act categorises certain AI systems as “high-risk” when they either:

  • Are used as safety components of products already regulated under EU product safety law (e.g. medical devices, vehicles, machinery); or

  • Pose significant risks to health, safety, or fundamental rights, based on specific use cases listed in Annex III (e.g. recruitment tools, biometric identification, critical infrastructure).

These high-risk systems are subject to strict obligations related to:

  • Risk management and data governance

  • Documentation, transparency, and human oversight

  • Accuracy, robustness, and cybersecurity

  • Conformity assessments and post-market monitoring

Starting 2 August 2026, these requirements become enforceable.

Why These Guidelines Matter

While the AI Act sets the legal framework, it leaves many implementation questions open, especially around how to interpret key terms such as:

  • “Safety component”

  • “Intended purpose”

  • The scope of value chain obligations (e.g. who becomes a “provider” after a substantial modification?)

  • Whether a system listed in Annex III actually presents a significant risk (triggering the Article 6(3) exemption)

The Commission is legally required to publish guidelines on high-risk classification by 2 February 2026, along with guidance on the requirements and value chain obligations flowing from Articles 6 and 25. This consultation is the only formal opportunity to influence how those guidelines are shaped.

Structure of the Consultation

The questionnaire is organised into five key sections:

  1. Classification under Annex I - Interpretation of “safety component” and links to existing conformity assessments under product safety laws.

  2. Classification under Annex III - Sector-specific questions on each of the eight use case categories (e.g. employment, law enforcement), exemptions under Art. 6(3), and distinction from prohibited practices under Art. 5.

  3. General classification questions - Clarifications on “intended purpose,” overlaps between Annex I and III, and how general-purpose AI fits into the high-risk structure.

  4. Obligations and the AI value chain - Covers core obligations for providers and deployers (Arts. 8–21, 26), the concept of substantial modification, and how roles shift within the AI supply chain.

  5. Potential amendments - Input on whether high-risk use cases or prohibited practices should be added to, or removed from, Annex III or Article 5.

Stakeholders can respond selectively, answering only the sections relevant to their activities, and are strongly encouraged to include practical examples.

Why You Should Participate

Whether you’re a startup working on AI diagnostics, a financial institution using AI in credit scoring, or a public authority deploying smart surveillance systems, this consultation can shape:

  • How your AI systems are classified;

  • Which obligations apply to your role in the AI value chain;

  • What counts as a substantial modification;

  • How you demonstrate compliance through harmonised standards.

The practical impact of the AI Act will depend on how these guidelines are written. This is the opportunity to address ambiguities without waiting for court cases or enforcement proceedings.

How to Respond

The consultation is available on the European Commission’s website and remains open until 18 July 2025. Responses may be made public, so contributors are advised not to include confidential information.

If you’re considering submitting a response and need help identifying the most relevant issues, preparing practical examples, or framing your position strategically, feel free to get in touch. I can support you in shaping an effective contribution to this important consultation.



