Proposal for a regulation on the use of artificial intelligence: fundamental rights mark the red lines

Rita Gomes, Associate, Intellectual Property Department

The European Commission has just published its proposal for a Regulation laying down harmonized rules on Artificial Intelligence (AI), which seeks to strike a balance between promoting the use of AI and establishing certain limits and rules on its use in order to offset certain risks. The Commission’s aim is a balance that, without slowing down innovation, creates a climate of trust for organizations and citizens in the increasingly widespread use of AI.

Although the draft Regulation does not specifically mention intellectual property rights, if we consider some AI applications that are starting to be implemented in relation to IP rights (for example, filtering systems to detect copyright infringement, recommendation systems based on users’ conduct, detection of counterfeits, prediction of trends, automated creation of content, or checks on compliance with the protection requirements for the different types of IP rights), it is undoubtedly necessary to understand how the EU legislator approaches these new technological developments.

The European Commission’s Proposal begins by establishing four levels of risk in the use of AI: (i) AI systems posing an unacceptable risk, (ii) high-risk AI systems, (iii) limited risk AI systems, subject only to transparency obligations, and (iv) minimal risk AI systems, which fall outside the substantive requirements of the Regulation, leaving it up to stakeholders to decide whether to apply higher or lower standards through self-regulation. Let’s take a look.

Classification according to risk

  1. Unacceptable risk systems

Unacceptable risk systems are those that pose a clear risk to the safety, lives and fundamental rights of EU citizens. The prohibited practices expressly listed in the proposal include the following:

  • Social scoring by governments;
  • Exploitation of vulnerabilities of children;
  • Use of subliminal techniques; or
  • Remote biometric identification systems in publicly accessible spaces used for law enforcement purposes (subject to narrow exceptions; where such use is exceptionally permitted, the systems will always be considered high-risk).
  2. High-risk systems

High-risk AI systems are potentially damaging and can also have an adverse impact on the fundamental rights of EU citizens. The list, which may be revised to keep pace with the evolution of AI use cases (future-proofing), includes the following uses:

  • Safety components of products covered by sectoral Union legislation, which will always be high-risk when subject to third-party conformity assessment;
  • AI systems intended to be used for permitted biometric identification, which will always be considered high-risk and therefore subject to an ex-ante third-party conformity assessment, including human oversight requirements by design;
  • Annex III of the Proposal includes, among others, the following examples:
    • AI systems intended to be used to dispatch or establish priority in the dispatching of emergency first response services, including firefighters and medical assistance;
    • AI systems intended to be used for determining access or assigning natural persons to educational and vocational training institutions, or for evaluating students in such institutions or persons taking part in tests commonly required as part of, or as a precondition for, their education;
    • AI systems used in recruitment – for example to advertise vacancies, select persons, evaluate candidates during interviews or tests – and for making decisions on promotion and termination of work-related contractual relationships, for the assignment of tasks and for monitoring performance and behavior in the workplace;
    • AI systems intended to be used to evaluate the creditworthiness of individuals;
    • AI systems intended to be used by public authorities or on their behalf to evaluate the right to public assistance benefits and aid, and to grant, revoke or reclaim such benefits and aid;
    • AI systems intended to be used for individual risk assessments, or other predictions intended to be used as evidence or to determine the reliability of a person with a view to preventing, investigating, detecting or prosecuting a crime or adopting measures that affect an individual’s personal freedom;
    • AI systems intended to be used to predict the occurrence of crime or events of social unrest in order to assign resources to patrolling and monitoring the territory;
    • AI systems intended to be used to examine applications for asylum and visas and the associated complaints, and to establish the eligibility of natural persons entering the EU;
    • AI systems intended to assist judicial authorities, except for ancillary activities.

In these cases, their use is permitted, but only after a conformity assessment demonstrating that the system meets the requirements of trustworthy AI, which is the aim of the EU. The points to be borne in mind in this assessment include: the quality of the data sets used (input); technical documentation and record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity. The aim is none other than to ensure that, in the event of a breach, the national authorities have access to the information needed to investigate whether the use of the AI system complied with the applicable national law.
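These requirement areas can be read as a pre-market checklist for providers of high-risk systems. The sketch below, in Python, is purely illustrative; the names are our own labels rather than terminology from the proposal, and it simply tracks whether each of the points listed above has been addressed before the conformity assessment.

```python
from dataclasses import dataclass, fields

# Illustrative only: field names are our own labels, not terms defined in the proposal.
@dataclass
class HighRiskChecklist:
    data_set_quality: bool = False               # quality of the data sets used (input)
    technical_documentation: bool = False        # technical documentation and record keeping
    transparency_to_users: bool = False          # transparency and information provided to users
    human_oversight: bool = False                # human oversight measures
    robustness_accuracy_cybersecurity: bool = False  # robustness, accuracy and cybersecurity

    def missing_items(self) -> list[str]:
        """Return the requirement areas not yet covered."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Hypothetical provider that has so far only addressed data quality and human oversight:
checklist = HighRiskChecklist(data_set_quality=True, human_oversight=True)
print(checklist.missing_items())
# ['technical_documentation', 'transparency_to_users', 'robustness_accuracy_cybersecurity']
```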

  3. Limited risk systems

These are systems which entail only a limited risk and, as a result, only have to comply with transparency obligations (for example, informing users that they are interacting with an AI system). The paradigmatic example is chatbots.

  4. Minimal risk systems

This is the catch-all category for all systems that cannot be classified in any other category. As stated by the EU Commission in its proposal, the vast majority of AI systems currently used fall into this category. In this case, and given the minimal risk involved in their use, the Commission is satisfied with self-regulation, for example through adherence to voluntary codes of conduct.
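Putting the four categories together, the classification can be summarized as a simple mapping from risk tier to regulatory consequence. The following Python sketch is only our schematic reading of the proposal; the tier names and one-line descriptions are our own shorthand, not wording from the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Schematic summary of the consequences described above; the wording is ours, not the proposal's.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "prohibited (e.g. social scoring by governments)",
    RiskTier.HIGH: "permitted only after a conformity assessment (e.g. recruitment or credit scoring systems)",
    RiskTier.LIMITED: "permitted subject to transparency obligations (e.g. chatbots)",
    RiskTier.MINIMAL: "permitted; voluntary codes of conduct encouraged",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {CONSEQUENCES[tier]}")
```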

How to enforce compliance

The Proposal indicates that the Member States should be in charge of applying and enforcing the Regulation, for which purpose each Member State should designate one or more competent authorities to supervise its correct application and implementation. It also envisages that each Member State should have one national supervisory authority, which will also represent the country on the European Artificial Intelligence Board.

Penalties

If AI systems that do not fulfill the requirements of the Regulation are placed on the market or used, Member States will have to lay down effective, proportionate and dissuasive penalties. For this purpose, the Regulation sets thresholds that the national authorities have to take into account in their penalty proceedings (an illustrative calculation follows the list):

  • Infringement of prohibited practices: up to €30m or 6% of the total worldwide annual turnover of the preceding financial year;
  • Non-compliance with any of the other requirements or obligations set out in the Regulation: up to €20m or 4% of the total worldwide annual turnover of the preceding financial year;
  • Supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities: up to €10m or 2% of the total worldwide annual turnover of the preceding financial year.
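As a purely arithmetical illustration of how such thresholds operate, the Python sketch below assumes (as under the GDPR-style model the proposal follows) that the applicable cap is the higher of the fixed amount and the turnover-based amount; the company and turnover figure are hypothetical.

```python
# Illustrative arithmetic only; assumes the higher of the two amounts applies,
# following the GDPR-style model of the proposal.
CAPS = {
    "prohibited_practices": (30_000_000, 0.06),   # up to €30m or 6% of worldwide annual turnover
    "other_obligations": (20_000_000, 0.04),      # up to €20m or 4%
    "incorrect_information": (10_000_000, 0.02),  # up to €10m or 2%
}

def max_fine(infringement: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a given infringement and worldwide annual turnover."""
    fixed_cap, turnover_share = CAPS[infringement]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Hypothetical company with €2bn worldwide annual turnover in the preceding financial year:
print(max_fine("prohibited_practices", 2_000_000_000))  # 120000000.0, i.e. 6% of €2bn exceeds €30m
```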

Next steps

The European Parliament and the Council must now review and debate the proposal and will have the chance to make amendments. This could be a lengthy process, so no timeline is envisaged for the approval and publication of the final text.