17 Jul 2020

High-Level Expert Group unveils new tool for responsible AI, in collaboration with DIGITALEUROPE

Today, the Commission’s Artificial Intelligence High-Level Expert Group published its Assessment List for Trustworthy Artificial Intelligence, a self-evaluation tool for organisations to identify, assess and mitigate risks related to AI systems.

As a key member of the Group, DIGITALEUROPE was instrumental in developing this groundbreaking tool, which will further contribute to a culture of responsible competitiveness on AI in Europe.

Director-General of DIGITALEUROPE Cecilia Bonefeld-Dahl said:

“This groundbreaking Assessment List is a first-of-its-kind tool, aiming to practically support organisations both large and small to make sure their AI is safe and trustworthy. I am proud of the instrumental role DIGITALEUROPE played in its design and development.

We warmly encourage our members, partners, and governments to use this online tool. It is vital we all think carefully about how to apply AI in practice and consider the right measures and safeguards.

With its practical questions and examples, the Assessment List provides a model for AI applications which are safe and fit for society throughout the design process. These self-assessment instruments are exactly what businesses need to show they are responsible actors and to tailor technology applications to their unique case.

I want to thank Executive Vice-President Vestager for launching this process and putting her faith in this group of diverse experts from civil society and industry. I look forward to working with her, with Commissioner Breton and Commissioner Reynders on achieving Trustworthy AI in Europe.”


Background

The Assessment List for Trustworthy AI is a practical checklist that will help businesses and organisations better understand what Trustworthy AI is, what risks an AI system might generate, and how to minimise those risks while maximising the benefits of AI.

The List is devised as a self-assessment questionnaire aiming to help organisations design and deploy AI systems that are lawful, ethical and robust. Its goal is to improve governance and make sure that all aspects – good and bad – of AI have been considered.

DIGITALEUROPE has been a member of the AI HLEG since its creation in June 2018. The last deliverable of the AI HLEG will be the Sectoral Considerations for healthcare, manufacturing and the public sector, which will be published next week.

For more information, please contact:
Chris Ruff
Director for Political Outreach & Communications