“Deepfakes”: a rising threat in international arbitration?

In recent years, “deepfakes” have repeatedly captured public attention through viral videos of public figures or celebrities, sometimes showing them acting out of character. In the hands of specialists, “deepfake” technology can make it nearly impossible to tell with the naked eye whether an image or video is genuine.

In 2017, researchers at the University of Washington produced a “deepfake” of former US President Barack Obama, turning audio clips into a realistic, lip-synced video built from footage of addresses he had originally given on different topics. In 2018, the actor and comedian (and President Obama impersonator), Jordan Peele, utilised “deepfake” technology to create a video purporting to show President Obama saying things that he had never, in fact, said.

In 2021, millions watched videos that appeared to show actor Tom Cruise playing golf or performing magic tricks. To the naked eye, these appear to be videos of Tom Cruise: the movements are smooth, whether he is putting on sunglasses, slicking back his hair, or using sleight of hand to perform a magic trick. But, to the surprise of millions on social media, the videos are not real, and the person in them is not Tom Cruise. Rather, they were reportedly created by a Belgian visual effects specialist using “deepfake” technology, with such sophistication that it is difficult to tell with the naked eye that they are not genuine footage of Tom Cruise.

How do “deepfakes” work and can they be detected?

The most straightforward way to think of a “deepfake” is that it is the video equivalent of a photoshopped picture. Like edited photographs, “deepfakes” have been in existence in certain forms for quite some time. However, the technology is rapidly advancing and becoming more sophisticated.

Here is how it works: images of a person are fed into a deep-learning algorithm, which learns to generate fake images or video of that person; the creator can then have the subject appear to do or say almost anything. An artificial intelligence technique known as a “generative adversarial network” (GAN) can be used to effectively swap one person’s face onto the body of someone else, matching the target’s facial movements with the new face.
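For readers curious about the mechanics, the sketch below is a minimal, purely illustrative GAN training loop in Python (using PyTorch) on toy one-dimensional data. It is not a face-swapping pipeline; the network sizes, names and parameters are our own assumptions, chosen only to show how a generator and a discriminator are trained against each other.

# Minimal, illustrative GAN sketch: a generator learns to produce samples
# that a discriminator cannot tell apart from "real" ones.
# Toy 1-D data only; all sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 16

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # outputs a fake sample
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),        # probability the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data drawn from N(2, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator: label real samples 1, fake samples 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

In a real “deepfake” system the same adversarial principle is applied to images of faces rather than toy numbers, which is why the output improves as more source material of the target is available.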

While the clips of “Tom Cruise”, for example, may seem fairly innocuous, it is easy to see how a “deepfake” could be used for more sinister purposes. And while “deepfake” detection technology exists, it has been reported that the “Tom Cruise” videos evaded detection when scanned by several of the best publicly available “deepfake” detection tools.

The technology to create “undetectable deepfakes” is arguably developing faster than detection technology, since publicly known detection methods can often be circumvented by sophisticated creators. For example, a common method previously used to detect “deepfakes” was to analyse how often the individuals in a video blinked. Once this detection method became publicly known, however, “deepfake” technology itself advanced so that generated subjects blink at natural rates.
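To make that blink heuristic concrete, the following Python sketch counts blinks from a per-frame “eye aspect ratio” (EAR), a standard measure that drops sharply when the eye closes. The landmark source, threshold and frame counts are illustrative assumptions, not a description of any tool actually used on the “Tom Cruise” videos.

# Illustrative blink-frequency check: count dips in the eye aspect ratio.
# Landmarks per eye are assumed to come from any facial-landmark detector,
# as six (x, y) points; threshold and frame counts are assumptions.
import math

def eye_aspect_ratio(eye_points):
    # EAR = (sum of vertical eye distances) / (2 x horizontal eye distance)
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye_points
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_per_frame, threshold=0.21, min_closed_frames=2):
    # A blink is a run of consecutive frames with EAR below the threshold.
    blinks, closed = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            closed += 1
        else:
            if closed >= min_closed_frames:
                blinks += 1
            closed = 0
    if closed >= min_closed_frames:
        blinks += 1
    return blinks

# For a 30 fps clip, a live subject typically blinks several times a minute;
# an implausibly low count was an early red flag for synthetic video.

As the article notes, once this kind of check became public knowledge, generators were simply trained to reproduce natural blink rates, which is why no single heuristic can be relied upon.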

How should alleged “deepfakes” offered as evidence be treated in international arbitration?

Experts have long been concerned that “deepfakes” may be utilised to disrupt elections or violate privacy through the spread of misinformation. These concerns have become more pronounced as the technology has developed. But what concerns could this cause for the users of international arbitration?

Video evidence is increasingly prevalent and tends to be persuasive. It offers an additional theatrical effect that can be particularly compelling for tribunals. Since the algorithm used to create “deepfakes” is fed with images or video of a person, high-profile individuals, such as politicians, are particularly at risk. This is a specific concern for investment arbitration, which often involves high-profile political figures.

Further, in high-stakes, high-value international disputes, certain governments or deep-pocketed parties may be tempted to commit significant time and resources to creating fake video evidence that is practically undetectable.

This leads to two competing evidential concerns. First, there is the question of how to deal with a suspected “deepfake” introduced into evidence in circumstances where it is difficult to detect that it is fake. Certain tribunals have applied a heightened standard of proof to a party alleging that a piece of evidence is fake. For example, in Dadras International v Iran, the tribunal commented that allegations of forgery, because of their implications of fraudulent conduct and intent to deceive, are particularly grave. This justified, in the tribunal’s view, a heightened standard of proof of “clear and convincing evidence”. Recalcitrant parties may be buoyed by this standard and encouraged to use “deepfakes”, given the difficulties their counterparty may face in establishing that the evidence is, in fact, fake.

Second, there is a risk that recalcitrant parties deny authentic video evidence and claim that it is a “deepfake”. While such denials (particularly unsupported ones) may have been harder to sustain in the past, the existence of “deepfakes”, and the developing rhetoric that “seeing is no longer believing”, may provide such parties with a previously unavailable escape route in respect of authentic video evidence against them.

How should parties and tribunals react?

Given how hard it is becoming to establish whether a suspected “deepfake” is in fact fake, tribunals may struggle to determine conclusively how to deal with contested video evidence. Rules on how tribunals treat video evidence and allegations of technological tampering may need to be updated before they fall too far behind the rapidly developing technology.

Striking a balance between these competing threats may be very difficult. While there does not yet appear to be a publicly available example of a “deepfake” being used (or alleged to have been used) in international arbitration proceedings, given the increasing prevalence of the technology, it is an issue that tribunals may have to address more frequently in the near future.
