Draft EU regulation on AI and its impact on healthcare

We share an overview of the key concepts introduced by the draft Regulation on a European Approach for Artificial Intelligence and its impact on AI-based medical devices.

The long-awaited draft regulation from the European Commission has been leaked, as reported by the media outlet Politico Europe. This is a key document that lays the foundations of the future regulatory regime applicable to AI in Europe. Here is an initial overview of the main measures and concepts introduced by the draft Regulation, with a specific focus on medical devices and healthcare.

Warning: this is a draft document which may still evolve until its official publication by the European Commission. It will thereafter have to be adopted by the Member States and the European Parliament.

Main measures

  • Some AI systems will be banned;

  • High-risk AI systems will be subject to specific measures and fines. It can be understood from the draft Regulation that some categories of AI-based medical devices can be categorized as high-risk;

  • Some non-high-risk AI systems may be subject to national ‘sandboxing’ initiatives.

Which AI systems will be banned?

  • AI systems designed or used in a manner that manipulates human behaviour, opinions or decisions through choice architectures or other elements of user interfaces, causing a person to behave, form an opinion or take a decision to their detriment.

  • AI systems designed or used in a manner that exploits information or prediction about a person or group of persons in order to target their vulnerabilities or special circumstances, causing a person to behave, form an opinion or take a decision to their detriment.

  • AI systems used for indiscriminate surveillance applied in a generalised manner to all natural persons without differentiation. The methods of surveillance may include large-scale use of AI systems for monitoring or tracking of natural persons through direct interception or gaining access to communication, location, metadata or other personal data collected in digital and/or physical environments, or through automated aggregation and analysis of such data from various sources.

  • AI systems used for general purpose social scoring of natural persons, including online.

Which AI systems are considered as high-risk?

  • AI systems intended to be used for the remote biometric identification of persons in publicly accessible spaces (with some restrictions);

  • AI systems intended to be used as safety components in the management and operation of essential public infrastructure networks;

  • AI systems intended to be used to dispatch or establish priority in the dispatching of emergency first response services, including by firefighters and medical aid;

  • AI systems intended to be used for the purpose of determining access or assigning persons to educational and vocational training institutions, as well as for assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions;

  • AI systems intended to be used for recruitment as well as for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating work performance and behaviour;

  • AI systems intended to be used to evaluate the creditworthiness of persons; AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility for public assistance benefits and services, as well as to grant, revoke, or reclaim such benefits and services;

  • AI systems intended to be used for making individual risk assessments, or other predictions intended to be used as evidence, or determining the trustworthiness of information provided by a person with a view to prevent, investigate, detect or prosecute a criminal offence or adopt measures impacting on the personal freedom of an individual;

  • AI systems intended to be used for predicting the occurrence of crimes or events of social unrest with a view to allocate resources devoted to the patrolling and surveillance of the territory;

  • AI systems intended to be used for the processing and examination of asylum and visa applications and associated complaints and for determining the eligibility of individuals to enter into the territory of the EU;

  • AI systems intended to be used to assist judges at court, except for ancillary tasks.

Some AI-based medical devices may be considered high-risk

The draft document states that AI systems intended to be used as a safety component of products, or which are devices in themselves, within the meaning of the Medical Device Regulation and the In Vitro Diagnostic Regulation should be considered high-risk if the product or device in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant EU legislation.

The draft regulation also states that the classification of an AI system as high-risk for the purpose of this Regulation does not necessarily mean that the system as such, or the product as a whole, would be considered ‘high-risk’ under the criteria of the sectoral legislation. This is notably the case for the Medical Device Regulation, where a third-party conformity assessment is foreseen for medium-risk as well as high-risk products: a class IIa device, which is medium-risk under the MDR, already involves a notified body and could therefore fall within the high-risk category of the draft AI Regulation.

Strict framework and high fines: measures applicable to high-risk AI systems

High-risk AI systems:

  • shall be developed on the basis of training and testing data sets that are of high quality.

  • shall be designed and developed so as to ensure that their operation is sufficiently transparent to enable users to understand and control how the high-risk AI system produces its output.

  • shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be overseen by natural persons.

  • shall perform consistently throughout their lifecycle in respect of their accuracy, robustness and security.

  • shall be designed and developed so as to ensure that their outputs can be verified and traced back throughout the high-risk AI system’s lifecycle, notably through the setting up of features allowing the automatic generation of logs (a sketch of such logging follows this list).
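
To make this last requirement more concrete, here is a minimal sketch of what automatic log generation could look like for an AI-based medical device. It is an illustration under our own assumptions: the record fields, the hypothetical model identifier and the choice to hash inputs rather than store them raw are not prescribed by the draft Regulation.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Append one JSON record per prediction to an audit log file.
logging.basicConfig(filename="inference_audit.log",
                    level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_audit")

MODEL_VERSION = "triage-model-1.4.2"  # hypothetical model identifier

def log_inference(input_payload: dict, output_payload: dict) -> None:
    """Record one output of the AI system so it can be traced back later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # Hash the input instead of storing it raw, to limit exposure
        # of personal (health) data in the audit trail.
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output_payload,
    }
    logger.info(json.dumps(record))

# Example: log a single prediction of a hypothetical triage model.
log_inference({"age": 54, "symptom": "chest pain"},
              {"priority": "urgent", "score": 0.91})
```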

Providers of high-risk AI systems shall establish, implement, document and maintain a risk management system.

Finally, the draft Regulation defines categories of infringements which may lead to administrative fines of up to EUR 20 million or, in the case of an undertaking, up to 4% of its total worldwide annual turnover of the preceding financial year.
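
As an illustration of how the two caps interact, here is a minimal sketch in Python; it assumes that, as under the GDPR, the higher of the two amounts applies to an undertaking, which the final wording of the Regulation will have to confirm.

```python
def max_administrative_fine(worldwide_turnover_eur: float) -> float:
    """Upper bound of the fine for an undertaking, assuming the
    higher of the two caps applies (a GDPR-style reading)."""
    FLAT_CAP = 20_000_000.0  # EUR 20 million
    TURNOVER_RATE = 0.04     # 4% of total worldwide annual turnover
    return max(FLAT_CAP, TURNOVER_RATE * worldwide_turnover_eur)

# An undertaking with EUR 1 billion annual turnover faces a cap of
# EUR 40 million; below EUR 500 million turnover, the flat cap applies.
print(max_administrative_fine(1_000_000_000))  # 40000000.0
```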

Non-high-risk AI systems may be subject to specific national testing and piloting schemes

To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxing schemes. These schemes would facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.

About us

Kantify is a startup specialized in the development of AI solutions for healthcare and transport. We help our clients integrate regulatory requirements into the development and industrialization of their AI solutions.

Get in touch!