Artificial intelligence: Orgalim Position Paper on Ethics Guidelines for Trustworthy AI
Published: 9 April 2019
Policies & Issues: Digital Transformation
As a member of the High-Level Expert Group on Artificial Intelligence (HLEG on AI), Orgalim welcomes the publication of the “Ethics Guidelines for Trustworthy AI” developed by the group.
These Guidelines provide a general framework for stakeholders in the European AI ecosystem to apply a set of agreed ethical principles, such as respect for human autonomy, prevention of harm, fairness, and explicability, while addressing potential tensions between these principles. They aim to support the development of AI systems that, beyond fully complying with all applicable laws, are also ethical as well as technically and socially robust (“Trustworthy AI”).
This approach is an important element in promoting a human-centric approach to AI in Europe and in supporting the continued trust of citizens and businesses in the widespread integration of AI into our society and economy. To ensure the Guidelines deliver on these objectives, we believe a stronger sectoral approach needs to be introduced in the forthcoming piloting process.
To read our recommendations in full, please download the position paper above.