Hey, AI, we need to talk!
29.10.2024
“The End of Humanity”. The name of the documentary speaks for itself. But will AI really mean the end of the human race? Read our interview with Marie-Therese Mäder, an expert in media, religious and cultural studies.
Key points at a glance
- The documentary “The End of Humanity” raises the question of what it means to be human in the age of AI.
- BFH is organising a screening of the film followed by a panel discussion on 22 November 2024.
- Sarah Dégallier Rochat and Marie-Therese Mäder will discuss director Oliver Dürr’s gloomy vision of the future with him.
- The motto: AI narratives need to be examined critically.
Why should people watch “The End of Humanity”?
The documentary depicts a future in which humans lose control of the world to artificial intelligence (AI). In doing so, the film fosters discussion on the limits of AI and poses crucial questions:
What do we humans expect from AI? And what does it mean to be human? “The End of Humanity” offers several, often gloomy, perspectives on these questions and gives theologians and philosophers a chance to have their say.
The End of Humanity: more on the film and the panel discussion
The documentary “The End of Humanity” is an initiative of Oliver Dürr from the Center Faith & Society at the University of Fribourg, Prof. Dr Sarah Spiekermann from the Vienna University of Economics and Business, and the production company Schwarzfalter GmbH.
The Center Faith & Society at the University of Fribourg aims to build bridges between academic theology, various expressions of Christian spirituality and community practice, and social life.
Watch the documentary with us and join our discussion on its implications.
What do you expect from the film debate?
I am interested in the director’s motives. I would like to know why he produced the film and what is so fascinating about a dystopian view of AI. The media professionals and technology enthusiasts I know don’t talk about the end of the world when they refer to AI, but rather about its possibilities. In my experience, people usually focus on the playful side of AI and are really enthusiastic.
The gap between the apocalyptic vision depicted in “The End of Humanity” and the positive opinions voiced in my work environment is thought-provoking. Both technological utopianism along the lines of Silicon Valley and the AI dystopia shown in the film relate to the future. As a society, we need to start talking NOW about what a liveable future with AI could look like.
How come?
It’s a familiar pattern: at the moment, a lot is being promised and very little is actually being delivered. And, of course, some questions still need to be clarified.
What questions need to be addressed?
There are many. For instance: what data can AIs access and collect? How can we protect data from AIs? How can we prevent unethical uses of AI, for example in child pornography? How can we protect human rights in the context of AI? But also: how do we define the human body?
While AI-generated deepfakes (of deceased people, for example) may be of use, we need to expand our view of humanity to include its digital aspects and define what is permitted there. After all, not all human rights principles currently apply to AI or deepfakes.
There is still no mainstream narrative explaining AI.
So, why are we talking so much now about what AI will mean in the future, when these questions need to be answered first?
There are two reasons for this. First of all, humans love great narratives. In the technological utopia, AI frees humans from all the work they don’t want to do. In the Hollywood dystopia, AI destroys our planet. And as a substitute for religion, AI shows us what is right and what is wrong. In other words, we like stories that explain events, thereby creating meaning, and that have a community-building effect.
Furthermore, there is still no mainstream narrative explaining AI. Instead, there are competing narratives, none of which is truly plausible today. We still don’t know how a narrative should explain AI to us. The question remains open, and this openness creates a vacuum that draws in competing approaches, which is not necessarily a bad thing.
How so?
Today, a caste of tech specialists conveys the impression that AI is essential for human progress and the solution to all major problems. At the same time, these specialists have an information advantage over the large majority of the population.
This knowledge gap opens the door to a quasi-religious narrative (think “the ways of AI are unfathomable”) that allows them to dismiss critical questions and shift responsibility. Instrumentalised in this way, a narrative is more than questionable.
What exactly is a narrative?
A narrative is a meaningful account of events that shapes the thinking and values of a group or culture. In such narratives, social events or ideas are presented in the form of stories. One example from history is the American dream of rising “from rags to riches”.
Narratives help us orient ourselves in society and understand the world better. They create a sense of belonging, evoke positive emotions and convey a sense of purpose. Narratives go beyond mere historical facts. They have an emotional impact.
Even today, various narratives have a powerful influence on our society. Especially in times of uncertainty, many people tend to revert to simple narratives. The term “narrative” has therefore become a buzzword that also plays an important role in politics.