Digital transformation: Orgalim input into the European Commission consultation on “Artificial Intelligence – ethical and legal requirements”
Published: 10 September 2020
Policies & Issues: Digital Transformation
Orgalim strongly endorses the overall policy objective of ensuring the development and uptake of trustworthy AI across the Single Market. Among the various options outlined in this inception impact assessment, Orgalim would like to affirm its support for Option 1 of the alternatives to the baseline scenario – i.e. an EU ‘soft law’, non-legislative approach that facilitates and encourages industry-led intervention (with no EU legislative instrument).
In general, Orgalim believes that, before choosing any option, existing regulation needs to be carefully analysed, potential gaps precisely formulated, and the right tools adequately proposed, based on a realistic definition of AI. For the manufacturing sector, the most important aspect to keep in mind is that AI is not a product but a technology embedded in products (applications), which puts all concerns related to AI into another perspective. Very diverse applications might be deemed AI-based or AI-operated systems, ranging from a driverless car to a smart toothbrush, a robot companion, or a non-embedded expert system for medical diagnosis. New regulation should be introduced only where it is necessary and delivers clear benefits (e.g. supporting the uptake of new technologies by creating a level playing field, ensuring safety, etc.), and with reference to industry standards that reflect the state of the art.
It is important for policymakers to differentiate between the varying degrees of risk linked to the use of AI technologies in their different applications. Clear criteria should be established for identifying critical areas in a way that is legally certain. In Orgalim’s view, the quality of any future regulation will depend on the ability to identify a common, transparent and easily applicable understanding of ‘high-risk’. High-risk situations should be defined in cooperation with industry, based on risk-benefit considerations, and adjusted when necessary. A clear definition of the criteria for perceived high-risk applications and the degree of autonomy is crucial in order to avoid over-regulation of completely harmless automation. Where an application has been identified as high-risk (which we believe will be a minority of industrial AI applications), a targeted approach to risk management could be the right one. On this basis it can be concluded, for instance, that most industrial AI use cases have entirely different ethical implications compared with consumer-oriented AI solutions. It is crucial that the framework for identifying high-risk use cases is predictable and proportionate in order to create a stable environment for investments.
From a policy-making perspective, clearly identifying the object to be regulated is essential. In the absence of a precise definition, as is currently the case for AI, the scope of any intended regulation would be uncertain, potentially being either over- or under-inclusive and triggering litigation. Orgalim would like to highlight the definition of AI outlined in our previous position papers, which is similar to the definition given by the Commission’s High-Level Expert Group on AI. More detailed analysis and suggestions, especially on safety and liability, can be found in the position paper which can be downloaded above.