Attention Communicators: “Synthetic Media” is on the rise

3 April 2019 · Falk Rehkopf

In a nutshell, “Synthetic Media” refers to digitally created or modified media – often driven by algorithms. While synthetic media has been around for decades now, it’s only in the past few years that it’s begun to truly penetrate our digital landscape.

Here’s what’s worrying: while synthetic media was previously the domain of highly skilled programmers and special-effects artists, new technology has “democratised” it and made it increasingly accessible to the masses. Pretty much anybody can now tap into expert tools and software capable of producing realism at scale, which means we’re living in an era where virtually nothing we see, or hear, can be trusted.

How synthetic media works

Powering the advent of synthetic media is deep learning, a field of machine learning that teaches computers to learn by example. More specifically, much of today’s synthetic media relies on Generative Adversarial Networks (GANs), in which two deep neural networks compete to produce ever more convincing fakes.

The first network acts as the “generator”, creating images that resemble the originals, while the second acts as the “discriminator”, trying to figure out whether a given image is real or fake. The two networks compete in a cat-and-mouse game, and the result is fakes that are almost indistinguishable from the real thing.
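To make this generator/discriminator dynamic concrete, here is a minimal, hypothetical PyTorch sketch of a GAN training loop; the tiny network sizes and the random “real” batch are placeholders for illustration, not a production deepfake pipeline:

```python
import torch
import torch.nn as nn

# Toy generator: maps 64-dim random noise to a flattened 28x28 "image".
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Toy discriminator: scores how "real" a flattened image looks (0..1).
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Placeholder for a batch of real training images.
real_images = torch.rand(32, 784)

for step in range(1000):
    # 1) Train the discriminator to tell real images from generated ones.
    fake_images = generator(torch.randn(32, 64)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, 64))),
                     torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each iteration first sharpens the discriminator’s judgement, then updates the generator to better fool it, which is exactly the cat-and-mouse loop described above.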

Examples of synthetic media and content

Synthetic media takes many forms, ranging from computer-generated voices and deepfake-style videos to virtual influencers and algorithmically enhanced photos.

Not many realise this, but synthetic media is already far-reaching and ubiquitous. For instance, if you have an iPhone and regularly use its Portrait Mode feature, you are looking at synthetic images: the phone constructs a simulation of what your picture would look like if it had been taken by a more powerful camera.
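As a rough illustration of the kind of processing involved, here is a hypothetical Python/OpenCV sketch that fakes a shallow depth of field by blurring everything outside a subject mask; the file names are assumptions, and real portrait modes estimate the mask with depth sensors or neural networks rather than reading it from disk:

```python
import cv2
import numpy as np

# Load a photo and a rough subject mask (white = keep sharp, black = blur).
# Both file names are hypothetical placeholders.
image = cv2.imread("photo.jpg")
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

# Heavily blur the whole frame to simulate a wide-aperture lens.
blurred = cv2.GaussianBlur(image, (51, 51), 0)

# Composite: keep the subject sharp, use the blurred frame elsewhere.
alpha = (mask.astype(np.float32) / 255.0)[:, :, None]
portrait = (alpha * image + (1 - alpha) * blurred).astype(np.uint8)

cv2.imwrite("portrait_mode.jpg", portrait)
```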

Impact: Are synthetic media positive or negative?

Those in favour of synthetic media argue that it’s a natural extension of media that society should embrace. After all, there are many examples of synthetic media being used for “good”. For instance, Star Wars fans across the globe were delighted to see Rogue One: A Star Wars Story feature a digitally recreated, young Carrie Fisher as Princess Leia, something that would have been drastically harder to pull off without recent advances in synthetic-media technology.

On the flip side, synthetic media has been exploited to further disinformation campaigns and to target minority groups. In the 2016 US election, Russia used thousands of political bots to distort public discourse and amplify extremist viewpoints. In India, a video designed to raise awareness about child safety was edited and stripped of its context, and the resulting clip incited mob violence that led to several deaths.

On the PR front, we also predict that synthetic media will transform the way dark PR is conducted, making it more insidious than before. As discussed in a separate post, dark PR refers to the act of discrediting an individual, an organisation or an entire country with the aim of destroying their credibility and, with it, their reputation. With synthetic media now fair game, dark PR practitioners have more “ammunition” than ever, making it easier for them to sway public opinion about their targets.

Synthetic or not: How can you verify content?

In the age of synthetic media and disinformation, how do reporters and PR professionals ensure that the content or media they’ve received or are disseminating is in fact ‘real’?

While no approach has proven foolproof so far, experts are calling for media professionals to tap into the emerging field of automated forensics to verify the information they receive. This involves applying machine learning to large datasets to conduct forensic analysis of suspect content.

According to WITNESS, the human rights organisation focused on the power of video and technology for good, forensics in this context includes:

  • Detection of a “heat map” of fake pixels in facial images created using FaceSwap
  • Identification of where elements of a fake image originate via image phylogeny
  • Use of neural networks to detect physiological inconsistencies in synthetic media, for example, the absence of blinking
  • Use of GANs themselves to detect fake images based on training data of synthetic video images created using existing tools (such as the FaceForensics database or others).
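The WITNESS material names these techniques but not their implementations. As a hedged illustration of the general approach behind the last two items, training a neural network on labelled real and fake frames, here is a minimal PyTorch classifier sketch; the toy architecture and the random placeholder batch are assumptions, and a real system would train on a corpus such as FaceForensics:

```python
import torch
import torch.nn as nn

# Toy real-vs-fake frame classifier. A production detector would be far
# deeper and trained on a large labelled dataset of video frames.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),  # one logit: >0 leans "real", <0 leans "fake"
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

# Placeholder batch: 8 RGB frames (64x64) with real (1) / fake (0) labels.
frames = torch.rand(8, 3, 64, 64)
labels = torch.tensor([1., 1., 0., 0., 1., 0., 1., 0.]).unsqueeze(1)

logits = detector(frames)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()

print("predicted real probabilities:", torch.sigmoid(logits).squeeze())
```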

Will PESO turn into PESO-SM?

Currently, best-practice PR embraces all media types: Paid, Earned, Shared and Owned (PESO). But with synthetic media changing the landscape as we know it, is it possible that the existing PESO model will evolve into one that includes synthetic media as well?

The answer? It all depends on your industry. The most relevant current “use case” for synthetic media in marketing and PR is virtual influencers. Consider this: Brud, the company behind the virtual celebrity Lil Miquela, is now valued at at least $125 million. Bearing this in mind, it’s plausible that companies in certain industries (fashion, jewellery and other lifestyle industries that are well suited to influencer marketing) will move towards making virtual influencers and synthetic media part of their PR strategy.

That said, other companies in different industries might be slower to follow suit; these companies might choose to focus on the current PESO model, instead of incorporating synthetic media into the mix.

But one thing is clear: algorithmically created or modified content will play a role in all of the PESO media types. It is simply too early to say to what degree, for what purpose, and whether the impact will be mainly negative or positive.

How will synthetic media change our media landscape?

First up, synthetic media may further depress trust levels. As Sam Gregory, Program Director at WITNESS, put it, the most serious ramification of synthetic media is that it will “further damage people’s trust in our shared information sphere” and “contribute to the move of our default response from trust to mistrust”. Adding weight to his argument, the EBU’s 2018 report found that trust in new media keeps falling, with 61% of Europeans distrusting the internet and 97% having no faith in social networks.

It’s also possible that these unprecedented levels of distrust will result in certain platforms or media outlets taking on the role of ‘information verification’ agents. If this happens, trust will revert to a few official sources of media, and power will be concentrated in the hands of this select few.

Last but not least, the media landscape might evolve to the point where it’s routine for PR pros, reporters and other media professionals to use watermarking techniques and digital signatures for verification purposes.
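As a simple sketch of the digital-signature half of that workflow, here is a hypothetical Python example using the cryptography library to sign a media file at publication time and verify it later; the file name is an assumption, and key distribution and management are left out entirely:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair once and shares the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the raw bytes of the media file at publication time.
# "press_photo.jpg" is a hypothetical placeholder file.
media_bytes = open("press_photo.jpg", "rb").read()
signature = private_key.sign(media_bytes)

# Anyone holding the public key can later check the file is untampered.
try:
    public_key.verify(signature, media_bytes)
    print("Signature valid: file matches what was published.")
except InvalidSignature:
    print("Signature invalid: file was altered after signing.")
```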

New opportunities for communicators

For PR pros interested in experimenting with synthetic media to create new opportunities for their clients’ brands, here is a list of tools and technologies:

  • Individualised simulated audio: tools such as Lyrebird or Baidu DeepVoice allow you to simulate people’s voices
  • Editing videos: Adobe Photoshop, Adobe Premiere and Pixelmator all come with advanced features that allow you to edit certain elements within existing videos
  • Facial reenactment: Using Face2Face and Deep Video Portraits, you can easily transfer the facial and upper body movements of one person onto another person’s face and upper body.
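Tools like Face2Face model facial expressions in 3D and are far beyond a blog snippet, but as a toy illustration of the underlying idea of transferring one face onto another frame, here is a hedged Python/OpenCV sketch that simply copies a detected face region between two images; the file names are assumptions, and it presumes each photo contains at least one detectable face:

```python
import cv2

# Toy illustration only: paste the detected face region from a source frame
# onto the detected face region of a target frame. Real facial reenactment
# systems model expressions in 3D; this is just a crude region swap.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

source = cv2.imread("actor.jpg")    # hypothetical file names
target = cv2.imread("subject.jpg")

sx, sy, sw, sh = cascade.detectMultiScale(
    cv2.cvtColor(source, cv2.COLOR_BGR2GRAY))[0]
tx, ty, tw, th = cascade.detectMultiScale(
    cv2.cvtColor(target, cv2.COLOR_BGR2GRAY))[0]

# Resize the source face to fit the target face box and paste it in.
face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
target[ty:ty + th, tx:tx + tw] = face
cv2.imwrite("crude_swap.jpg", target)
```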

Implications for communicators

A study from the International Center for Journalists (ICFJ) shows that more than 70% of journalists use social media to find new stories, but only 11% of them use social media verification tools to fact-check those stories. Still, we predict that journalists and reporters will quickly develop stronger defence mechanisms against malicious and manipulative synthetic media, and it is only a matter of time before newsrooms establish higher standards and norms of verification.

As Nic Dias, Senior Researcher at the non-profit organisation First Draft News, puts it: “any robust defense against malicious, newsworthy ‘deepfakes’ and other AI-generated synthetic media is going to have to involve journalists. Their purpose – to seek the truth on behalf of the public – is best aligned to this task”. Bearing this in mind, PR professionals should seek to collaborate with reporters and make their jobs easier by providing them with all the information and facts related to a news story or pitch.

On top of that, newsrooms are increasingly collaborating to prevent and reduce misinformation caused by synthetic media. It’s likely that large newsrooms (such as the Wall Street Journal and the BBC) will actively share their information with smaller newsrooms that have fewer resources, so that those newsrooms don’t fall prey to bad actors.

As such, PR pros who want to pitch to multiple newsrooms should ensure that their stories, facts and statistics are consistent across all of their pitching efforts. If there’s even the slightest suspicion that the information they provide isn’t 100% accurate, this will severely hurt their chances of getting their companies or clients featured.

The rise of synthetic media: What’s next?

While we can hope that the vast majority of synthetic media will be used for good rather than bad, the truth is that synthetic media is a powerful weapon that can be exceedingly dangerous in malicious hands. As PR pros, the responsibility is on us to develop closer relationships with journalists, influencers and all other stakeholders, and to work closely with them, including through technology, so they can verify the accuracy of the content and context we provide.

Falk Rehkopf, Chief Marketing Officer