What to do about “deepfakes”
We’ve been hearing about fake news for quite some time now, but another phenomenon is currently sweeping the world: “deepfakes”. The way we see it, this controversial new AI-based technology has the potential to substantially alter the way we consume content. Read on to find out more!
What are deepfakes?
To create a deepfake, you simply need a source image or video of one person and multiple images and videos of another person (whose face you want to superimpose onto the first person’s face). Your computer’s neural network learns the movements and expressions of whoever’s in the source video, then maps the other person’s face onto this video.
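To make this more concrete, here is a heavily simplified sketch of the shared-encoder, twin-decoder design that tools in this space are commonly built around: one encoder learns pose and expression common to both people, while each person gets their own decoder that learns to render their face. The single linear layers, dimensions, and names below are illustrative assumptions, not the actual architecture of any specific app, which would use deep convolutional networks and real training data.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64      # a flattened 64x64 grayscale face crop (toy size)
LATENT_DIM = 128        # shared latent representation of pose/expression

# One shared encoder is used for both people...
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01
# ...while each person gets their own decoder that learns their identity.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    # Compress a face into a latent code capturing pose and expression.
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Reconstruct a face image from a latent code with one person's decoder.
    return W_dec @ latent

# Training (omitted here) would minimise reconstruction error:
#   decode(encode(face_a), W_dec_a) should approximate face_a for person A
#   decode(encode(face_b), W_dec_b) should approximate face_b for person B

def swap_face(face_a):
    """Render person A's pose and expression through person B's decoder."""
    return decode(encode(face_a), W_dec_b)

frame = rng.standard_normal(FACE_DIM)   # stand-in for one video frame
swapped = swap_face(frame)
```

The swap itself is just routing: encode a frame of person A, then decode it with person B’s decoder, frame by frame through the whole video.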
Today, deepfake technology is mainly applied to what’s called ‘face swapping’ – digitally swapping the faces of two individuals. However, the technology also makes it easy to replicate human speech. With the app Lyrebird, for example, it’s possible to generate realistic artificial voices using only a one-minute audio sample.
The rise of deepfakes and the dark side of deep learning
While deepfakes have been around for quite some time, the phenomenon has grown considerably over the last year or so, since the Reddit community got heavily involved. It all started when a Reddit user posted several deepfake porn videos; a few months later, another user released a tool called FakeApp, which allowed anyone to download AI software and use it to stitch any face seamlessly into a video. Following this, deepfakes quickly took off. In response, Reddit shut down the related FakeApp subreddit earlier this year. As of today, FakeApp has been taken offline, but one other application can still be used to create deepfakes: Faceswap.
For the longest time, we’ve been exposed to the benefits of machine learning and deep learning. Among other things, machine learning allows us to create automation tools and outsource tedious, mundane tasks to these tools, contributing to greater productivity and efficiency.
Deepfakes, however, expose us to the darker side of deep learning; they show us how deep learning can be used in a destructive manner. Now that deep learning has advanced to the point where the software is publicly available and accessible to everybody, this raises problems not only of consent and accountability in general but also, more specifically, for professional communicators looking to protect the reputation of their clients.
Examples of deepfakes
In April 2018, Jordan Peele worked with Buzzfeed to create a deepfake in which President Obama delivered a fake public service announcement, warning that how society moves forward in the age of information will determine “whether we become some kind of fucked-up dystopia.” The video was featured in news publications across the globe and drew plenty of attention to deepfakes.
Interestingly enough, several recently released films have also utilised similar techniques to produce synthesised images. One notable example is Rogue One: A Star Wars Story, in which the producers utilised digital face-replacement technology to recreate a 19-year-old Carrie Fisher.
Risk levels increase with deepfakes
As you might imagine, there are plenty of risks revolving around deepfakes. Politically speaking, deepfakes have the power to influence an election or incite violence in cities experiencing unrest. From a societal standpoint, experts believe that deepfakes will contribute to a “zero trust” model, where consumers believe nothing by default. Then there are privacy risks, which include concerns over how deepfakes can be used to fuel revenge porn or any other intentionally malicious face swap.
On top of that, deepfakes also pose a significant problem for communication and PR teams. The reason is fairly straightforward: deepfakes can be utilised to discredit a company and/or a prominent figure. Bearing this in mind, we expect that the umbrella term of “reputation management” will soon have to expand to incorporate dealing with deepfakes.
Deepfakes and ethics
When it comes to deepfakes and ethics, the most important concept is consent. If the person whose face is superimposed in a deepfake did not consent to being featured (and to the resulting video being distributed), the video violates that person’s rights; it could also have serious repercussions for their personal and professional life.
The same goes regardless of whether the person involved is a celebrity or prominent figure (several actresses, including Gal Gadot and Emma Watson, have been victims of deepfake porn) or a layperson. While some may see creating deepfakes of their friends and family as “harmless fun”, this isn’t the case. All deepfakes constructed without consent are, as a general rule of thumb, problematic and unethical.
How communications and PR teams can handle deepfakes
With deepfakes becoming more mainstream, communication and PR teams should start acquainting themselves with deepfake technology and train themselves to spot and handle deepfakes. On the media side, The Wall Street Journal is taking the risks posed by deepfakes seriously and has launched a deepfakes task force, dubbed the WSJ Media Forensics Committee, led by its Ethics & Standards and Research & Development teams. “Raising awareness in the newsroom about the latest technology is critical,” said Christine Glancey, the newspaper’s deputy editor on the Ethics & Standards team. “We don’t know where future deepfakes might surface so we want all eyes watching out for disinformation.” Sam Woolley of the Digital Intelligence Lab (Institute for the Future) adds that “tools such as artificial intelligence, automated voice systems, machine learning, deepfakes, interactive memes, virtual reality, and augmented reality will make digital disinformation more effective and harder to combat”. Here are a few tips to get you started:
Protecting your reputation against deepfakes
Deepfakes are still relatively new, so, unfortunately, there aren’t any tried-and-tested strategies for communications or PR teams whose client has been the victim of a deepfake. Although there are no concrete methods of protecting a company or an individual from deepfakes today, we’re hopeful that a solution will arise in the near future. It might come from image and video authenticity certification company Truepic, which has recently raised US$8 million to tackle the challenge of exposing deepfakes. When communicators use Truepic’s camera feature to take photos, the service saves a copy of each image and adds a watermark URL leading to it. That way, viewers can compare the visuals and be certain they’re looking at an unaltered version. Social news aggregator and discussion website Reddit, for example, uses Truepic’s technology in its live Ask Me Anything Q&As with celebrities.
Additionally, possible routes include filing a defamation or harassment claim, suing for copyright infringement or violation of privacy, or suing a platform that perpetuates the deepfakes for misappropriating the commercial use of the victim’s identity.
Outlook: what’s next for deepfakes?
What’s next for deepfakes? The jury’s still out, but it looks as though countries might start passing legislation against the technology. US lawmakers have recently discussed how deepfakes pose a potential risk to national security and have called for intelligence agencies to investigate their rise. However, there are credible arguments that existing legislation is sufficient. Either way, it’s in communications and PR teams’ best interests to start learning about deepfakes and guarding against them. At the end of the day, deepfakes have the power to destroy a company’s or individual’s reputation in a matter of seconds, and they should not be taken lightly.