18 Oct 2023

AI Act trilogues: A vision for future-proofing, governance and innovation in Europe

Executive summary

DIGITALEUROPE has been a strong supporter of the overall objectives of the proposed AI Act, and its focus on high-risk uses of artificial intelligence (AI). We welcome the Council’s and the European Parliament’s efforts to strike a balance between protecting the health, safety and fundamental rights of European citizens and ensuring that Europe’s growing AI industry remains competitive and continues to innovate.

When regulating something as dynamic and with such high potential as AI, it is paramount to avoid the pitfall of regulating out of fear. This requires clear goals, agile policymaking processes and multi-stakeholder engagement. DIGITALEUROPE’s sandboxing report showed that commitment to regulation alone is not enough: to be effective, it must be complemented with proper dialogue and regulatory prototyping across the AI industry ecosystem.

At the moment, work is still needed to strike the delicate balance behind the dual ambition of protecting citizens and driving the AI-fuelled business solutions of tomorrow, in particular through deeper consultation with industry experts who can assess how complex AI rules may impact AI-powered businesses.

This paper compares the Parliament’s and the Council’s mandates for trilogue negotiations, contributing the following recommendations to improve the AI Act and make it truly future proof:

  • AI definition and scope: The definition of ‘AI’ must be focused and should align with international frameworks such as the OECD’s, supporting international harmonisation and market access in third countries. Research and development (R&D) and open-source exemptions are essential for innovation.
  • Risk categorisation: The risk-based approach is at the core of the AI Act. It is essential that the risk categorisation framework remain technology-neutral and focus on truly high-risk use cases.
    • Prohibited practices need precise definition and clarity to avoid unintended restrictions. Prohibitions of social scoring, biometric identification and emotion recognition should be targeted, so that controlled high-risk applications remain possible.
    • High-risk systems: The Parliament’s ‘significant risk’ criterion should be upheld, combined with the Council’s enhanced condition on human oversight. The proposed notification process for providers, however, will generate uncertainty and delays, and should be replaced with a documentation-based approach.
  • Alignment with existing legislation: The AI Act must align with Europe’s existing comprehensive legislation, avoiding disruptions to well-established sectoral frameworks such as product legislation, from healthcare to machinery, and finance. The final text should explicitly provide that existing governance and enforcement frameworks, including automatic recognition of notified bodies and market surveillance authorities, can be used when assessing and applying the AI Act’s requirements.
  • Requirements for high-risk AI: Requirements must be technically feasible, avoid double regulation and align with existing legislation. The Parliament’s expansion beyond health, safety and fundamental rights, covering rule of law and environment, muddles the AI Act’s scope and will only make compliance more problematic.
  • Allocation of responsibilities: Flexibility in allocating responsibilities to the actors that can most appropriately ensure compliance is crucial. The Parliament’s proposed fundamental rights impact assessment for deployers, whilst well-intentioned, is merely duplicative and should be rejected.
  • General-purpose AI (GPAI): Regulating GPAI requires a light-touch approach, to avoid treating all systems without an intended purpose as high-risk. Any requirements on GPAI or foundation models should focus on information sharing, cooperation and compliance support across the value chain.
  • Implementation: The availability of harmonised standards to prove compliance, aligned with international efforts, will be central to the AI Act’s success. The AI Act should balance risk prevention with innovation support. Regulatory sandboxes should be mandatory across Europe, encouraging participation and real-world testing. To boost innovation, a robust investment plan, especially for start-ups and SMEs, should accompany the AI Act, ensuring growth and competitiveness.
  • Governance: The AI Board, or AI Office in the Parliament’s mandate, should ensure a centralised approach, with continuous engagement with industry and civil society. The coordination and advisory roles of the AI Board and the Commission are essential to ensure consistent application across Member States.
  • Enforcement: EU-wide safeguards against disproportionate decisions are necessary. A 48-month transitional period is needed for overall ecosystem readiness, including the timely availability of harmonised standards.
Download the full position paper
For further information, please contact
Alberto Di Felice
Policy and Legal Counsel
Julien Chasserieau
Associate Director for AI & Data Policy
Bianca Manelli
Officer for AI & Data Policy