A new set of European Commission rules, called the AI Liability Directive, could make it easier for people injured in AI accidents to sue.
The European Commission says that the new proposals, which could become the first-ever legal framework on AI, are needed because:
– AI developers, deployers and users need clear requirements and obligations regarding specific uses of AI.
– The administrative and financial burdens for businesses, in particular small and medium-sized enterprises (SMEs), need to be minimised.
– Some AI systems create risks. For example, it may not be possible to find out why an AI system has made a particular decision or prediction and taken a particular action, making it difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or an application for a public benefit scheme.
– Although existing legislation provides some protection, it is not currently sufficient to address the specific challenges AI systems may bring.
What Will The Proposed New Rules Cover?
The proposed rules will:
– Address risks specifically created by AI applications.
– Propose a list of high-risk applications.
– Set clear requirements for AI systems for high-risk applications.
– Define specific obligations for AI users and providers of high-risk applications.
– Propose a conformity assessment before the AI system is put into service or placed on the market.
– Propose enforcement after such an AI system is placed on the market.
– Propose a governance structure at the European and national levels.
The European Commission’s rules will take a risk-based approach in deciding the strength of rules that will be applied to different AI systems. For example:
– AI systems which pose an ‘unacceptable risk’, i.e. those considered a clear threat to the safety, livelihoods and rights of people, will be banned. Examples range from social scoring by governments to toys using voice assistance that encourage dangerous behaviour.
– AI systems identified as high-risk, e.g. AI technology used in critical infrastructures, safety components of products, essential private and public services (such as credit scoring), and law enforcement uses that may interfere with people’s fundamental rights, will be subject to strict obligations before they can be put on the market. These obligations could include risk assessment and mitigation systems, logging of activity to ensure traceability of results, clear and adequate information and documentation for the user, and appropriate human oversight measures to minimise risk. Biometric identification is an example of a ‘high-risk’ AI system.
In addition to the unacceptable risk and high-risk categories, the European Commission’s proposed regulatory framework for AI systems will also include two more categories:
– Limited risk – AI systems with specific transparency obligations, e.g. chatbots, where users must be made aware that they are interacting with a machine.
– Minimal or no risk – AI systems that can be used freely, e.g. AI-enabled video games or spam filters.