“If you want to believe something, always question it”
15.10.2024 Artificial intelligence (AI) is generating ever more realistic media content, known as ‘deepfakes’. In our interview, BFH researcher Reinhard Riedl explains how we as a society can respond to this trend.
Key points in brief
- It is impossible to detect deepfakes reliably.
- When in doubt, consider context.
- Critical thinking is a crucial digital skill when approaching deepfakes.
Can we learn to spot deepfakes?
No. If you think you can reliably identify deepfakes, you are mistaken. Studies have shown that our ability to recognise deepfakes is no better than chance. We will simply have to learn to live with this uncertainty.
So now what? Should we start questioning every image, video, and text we encounter, wondering if it is real or fake?
Yes, we need to train ourselves to approach content with a certain general suspicion. Always bear in mind that any media item could be a deepfake. How much it matters that something is a deepfake depends on the context, however. In Switzerland, for instance, high-ranking officials can generally travel on public transport unaccompanied; in other situations, they need bodyguards. In other words, how vigilant we need to be depends on the context.
What can we do then?
The hard truth is: there isn’t a straightforward answer. I know that many will find this disappointing. As researchers, we can share examples and experiences, but there is no one-size-fits-all remedy. Everyone needs to develop their own solutions.
Why is there no universal way of dealing with deepfakes?
Quite simply, good deepfakes work especially well when they target our personal weaknesses. Everyone has their own particular vulnerabilities. Each of us needs to be aware of the situations and contexts in which we, as individuals, are most susceptible, and keep these vulnerabilities in mind. If we do fall for a deepfake, we need to be willing to acknowledge it.
What are deepfakes and synthetic media?
Deepfakes are synthetic media generated by artificial intelligence: multimedia creations based on existing content.
The original data is created by humans and depicts real people (e.g. video footage); AI then turns it into new multimedia artefacts. While their content is fictional, they appear authentic because they imitate the appearance, movements and manner of speaking of the people depicted virtually perfectly.
‘Synthetic media’ and ‘deepfakes’ are essentially synonymous, but they tend to be perceived differently. ‘Deepfakes’ carries negative connotations, while ‘synthetic media’ is more neutral.
That’s a pretty bleak outlook.
Well, the things we most need to be wary of are quintessentially human. We are quick to agree with information that aligns with our own worldview or that, for a researcher, supplies the missing piece needed to complete a scientific theory.
So whenever we encounter ideas that we really want to believe, we should probe and question them rather than accepting them blindly. Critical thinking is key when it comes to deepfakes. ChatGPT is a good example, as it has a tendency to hallucinate. Users who can think critically are more likely to notice whether a tool is working reliably or when they are being led up the garden path.
Can deepfakes be used for good, too?
Of course. Off the top of my head: personalised teaching materials in education, and a plethora of applications in the film industry. The deliberate use of synthetic media can also be interesting in visual art, theatre and music.
Our studies have shown that some artists regard AI as a partner or successor. For instance, we recently worked on deepfakes of famous pianists that can play any piece of music in the style of the musician in question. Not only is this intriguing from an artistic point of view, it can also help train aspiring pianists.
When are deepfakes harmful?
When they are used to deceive. But even when we know that we are looking at fictional content, a deepfake can still cause harm, simply because it combines things that don’t belong together.
Consider a pornographic video featuring the face of a Hollywood actress, a politician, or perhaps an ex-girlfriend. This is undoubtedly damaging, even though – in fact, precisely because – we know that the person in question would never do such a thing. So a fake doesn’t have to be perfectly executed to be dangerous.
How do you feel about the major fashion brand which recently replaced humans with AI-generated models for a campaign?
This case underscores that we need to get used to artificial identities. The way in which we experience the world is becoming more complex: some of the scenarios we face today would have been unimaginable even for Shakespeare. In Japan, artificial pop icons have long been commonplace.
Deepfakes are a fairly new phenomenon for us as a society, however. We are still not quite sure how to respond when a child becomes a fan of a completely artificial character. It is important to explain that an imaginary, fake persona is not a real human being.
You’ve said that we are not in a position to identify deepfakes. Should we even try?
In art and education, I believe that deepfakes should be clearly labelled as such. I generally adhere to an old theatre principle: do whatever you like, but always be transparent. Audiences must be able to tell when they’re being manipulated.
Specifically, anyone generating deepfakes should declare this. And it’s only fair to give credit when using someone else’s visual, audio or written content.