The exponential increase in the use of artificial intelligence (AI) to generate audiovisual content is forcing us to rethink the risks posed by the proliferation of deepfakes. The quality of this synthetic content is such that it can depict situations or conversations that never existed, blurring the line between the real and the make-believe, much like Ridley Scott’s sci-fi classic ‘Blade Runner’. The Proposal for an AI Regulation that the EU is working on aims to lay down transparency obligations, but we need to be ready to take action against the most predictable breaches.

There are many tools available to create deepfakes for legitimate purposes. In the audiovisual industry, for example, this technology was used in Martin Scorsese’s film The Irishman (2019) to make Robert De Niro, Al Pacino and Joe Pesci look younger so they could play the younger versions of their characters in the film’s flashbacks. In Spain we have also seen Lola Flores brought back to life to star in an ad for a well-known brewery, as we told you here. However, AI is developing so fast and becoming so popular that it is now used on a massive scale for purposes that have nothing to do with entertainment, such as spreading fake news, including the photographs falsely showing the arrest of Donald Trump (link) or the videos in which Zelenski appeared to encourage Ukrainian troops to surrender (link).

In the legal sector, the fact that images may have been doctored will also set in motion challenges to the authenticity of evidence in the form of photographs, videos or audio recordings. We envisage that technical experts will be needed who can determine conclusively whether or not specific content has been doctored, in the event that the validity of such documents is contested and evidence of their authenticity must be provided. In the United States, the term “deepfake defense”, coined by Rebecca Delfino, professor at Loyola Law School, is becoming popular.

Moreover, the risks are increasing because the ability to create hyperrealistic synthetic content has reached end consumers, and there are plenty of apps around for editing photos or videos. This can be seen in the widespread use on social media platforms, including Twitter and Facebook, of specific tools to combat deepfakes (see here). TikTok has just updated its content moderation policy, adding a new section on synthetic media and manipulated content (see here). The aim is to make sure that creators clearly label synthetic content by indicating that it is AI-generated (using terms such as “synthetic”, “fake”, “not real” or “altered”).

The Proposal for a Regulation on Artificial Intelligence

The Proposal for a Regulation on Artificial Intelligence (AI Act), which we spoke about here, deals with deepfakes from various angles, including their use both by private individuals and by public authorities.

As far as the use of AI systems capable of generating deepfakes is concerned, only transparency obligations are imposed, since they are considered limited risk. The AI Act defines deepfakes as synthetic or manipulated image, audio or video content that falsely appears to be authentic or accurate, including representations of individuals who appear to say or do things that they in fact never said or did, generated using AI techniques such as machine learning and deep learning. In this context, users are required to disclose that the content is synthetic, i.e. that it has been artificially generated or manipulated, by labeling it adequately and visibly (article 52.3 of the AI Act). However, there are exceptions to this general rule:

  • First, where it is necessary to exercise the right to freedom of expression and the freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU. The text gives examples in which the right to freedom of expression and information could prevail, such as content that is clearly a parody or of an artistic nature, or use in films or video games. In these cases, it will clearly be difficult to reconcile these exceptions with the protection of the right of publicity, which is not harmonized at EU level and which in Spain, for example, is also considered a fundamental right.
  • Second, where the use is authorized by law to detect, prevent, investigate or prosecute criminal offenses.

In addition, the proposal provides that the information or labeling on the use of deepfakes should also take into account the special needs of children and persons with disabilities.

The obligation is clear, but with a view to compliance, there are widespread calls for standards to help identify deepfakes where users fail to fulfill the obligation to duly label manipulated content.

As regards the use of deepfake detection tools by the authorities responsible for enforcing the law, although the most recent proposal, approved on May 22, 2023, removes their direct inclusion in the list of high-risk tools, in certain cases they can be assumed to fall indirectly under point 6(d) of Annex III of the AI Act, as AI systems intended to be used by or for the benefit of judicial authorities to assess the reliability of evidence in the course of the investigation or prosecution of criminal offenses. This means that, as high-risk AI systems, they must meet the requirements of Chapter 2 of Title III of the AI Act, which mainly concern the security, transparency and oversight criteria to be satisfied before such systems are placed on the market. For example, they require a risk management system, the use of high-quality data that, among other things, avoids biases, and the implementation of functions that allow users to correctly interpret the AI’s results and subject them to human oversight.

Where are we?

The proposed text is due to be approved by the European Parliament in June 2023, although the speed at which changes occur and the proliferation of new tools mean that delays cannot be ruled out.

Final points to be considered

Given the increase in deepfakes, it is important to educate users and to pay attention to the fine details that can help us detect them, e.g. less blinking than normal, faces that are disproportionately large or small compared to the body, audio that is out of sync with the person’s mouth movements, or incorrect renderings of the inside of the mouth, a part of the body that is very difficult to simulate.
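Purely by way of illustration, and assuming that per-frame eye-openness values have already been extracted from a video with some face-landmark tool, the short Python sketch below shows how one of these cues, an abnormally low blink rate, could be turned into a simple automated check. The function names and thresholds are hypothetical examples, not forensic standards, and a real detector would be far more sophisticated.

# Illustrative sketch only: flag clips whose subject blinks far less often
# than a real person normally would. Input is a list of per-frame
# "eye aspect ratio" (EAR) values obtained from any face-landmark tool;
# all thresholds below are hypothetical.

def count_blinks(ear_values, closed_threshold=0.2):
    # A blink is counted each time the eyes go from open to closed.
    blinks = 0
    eyes_closed = False
    for ear in ear_values:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False
    return blinks

def abnormally_low_blink_rate(ear_values, fps=25.0, min_blinks_per_minute=5.0):
    # People typically blink well over ten times per minute; a much lower
    # rate in a talking-head clip is one (weak) hint of synthetic footage.
    duration_minutes = len(ear_values) / (fps * 60.0)
    if duration_minutes == 0:
        return False
    return count_blinks(ear_values) / duration_minutes < min_blinks_per_minute

# Example: a 60-second clip at 25 fps in which the eyes never close is flagged.
print(abnormally_low_blink_rate([0.3] * 1500))  # True

A single heuristic of this kind is, of course, easy to defeat.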

With high-quality deepfakes, it may be necessary to use specific technology, or even to turn to experts. As we have said, the impact on criminal and civil proceedings can be huge, since sources that until now were considered solid, such as photographs, videos or audio recordings in which the persons who supposedly took part in the events can be recognized, are now being called into question under the so-called “deepfake defense”. In these cases, “supposedly” takes on a whole new dimension, since we may now wonder whether we are looking at a synthetic recreation of facts that never happened.

 

Cristina Mesa Sánchez

Intellectual Property Department