
Clear rules for AI, including in Switzerland?

18.07.2024: On 13 March 2024, the EU Parliament passed a law regulating artificial intelligence (AI) by a large majority. Professor Sarah Dégallier Rochat shares her assessment of the measure in this interview.

Key points in brief

  • In March of this year, the EU Parliament passed a law regulating artificial intelligence (AI).
  • The AI law is primarily concerned with regulating the applications of the technology.
  • According to Prof. Dr. Sarah Dégallier Rochat, this regulation brings with it both political and economic challenges, and it has an impact on Switzerland.

The European Union recently adopted an AI law. Can AI even be regulated?

Prof. Dr. Sarah Dégallier Rochat: It is above all the big tech companies like Google, Meta, Microsoft and OpenAI that claim AI can’t be regulated, because it benefits them economically if it remains unregulated. For about ten years, mistrust of these companies has been growing as they become ever more powerful. Among other things, they are accused of manipulating the discourse around the technology for their own benefit. It is therefore important to critically scrutinise this narrative of AI as an uncontrollable phenomenon. This discourse is more about what AI could become in the future than about what it is today. AI is not a magic formula. It requires a lot of creative work and thus a large number of human decisions. AI systems can therefore be designed to respect human values (see box) to the greatest possible extent.


But the AI law is actually much more about regulating the applications that use the technology rather than the technology itself. Of course, these applications can be regulated: we can decide, for example, that we do not want to allow surveillance systems with facial recognition. This is not a technical decision, but a political decision that has much more to do with the human values we want to cultivate than with the technology itself. 

What is AI and how does it work?

Artificial intelligence is defined in the AI law as a machine-based system with a certain degree of autonomy that, unlike traditional systems in which the rules are explicitly defined, learns from examples and implicitly derives rules. People also learn by way of examples. We recognise a carrot because we have already seen many of them. To do so, we draw on properties or characteristics that are always present. If I have only seen orange carrots, I will perceive the colour “orange” as a property of carrots. This could cause me to not recognise a purple carrot as a carrot. The way AI works is very similar. It is therefore important to have a large amount and a wide variety of data so that the system can identify the right properties.
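To make this concrete, here is a minimal sketch in plain Python; the feature values and labels are invented for illustration. A nearest-neighbour classifier trained only on orange carrots implicitly learns colour as a decisive property and therefore misclassifies a purple carrot:

```python
from math import dist

# Hypothetical training examples: (hue, elongation) -> label.
# Every carrot the system has "seen" is orange (low hue values).
training = [
    ((0.08, 0.90), "carrot"),      # orange, elongated
    ((0.09, 0.80), "carrot"),
    ((0.10, 0.85), "carrot"),
    ((0.78, 0.30), "not carrot"),  # purple, round (a plum)
    ((0.35, 0.20), "not carrot"),  # green, round (a pea)
]

def classify(sample):
    """1-nearest-neighbour: return the label of the closest training example."""
    return min(training, key=lambda ex: dist(ex[0], sample))[1]

# A purple carrot is elongated, but its hue is closer to the plum's than to
# any orange carrot's, so the implicitly learned "orange" rule misfires.
print(classify((0.80, 0.90)))  # prints: not carrot
```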

AI systems can identify patterns that appear in the data but are not aligned with reality. This can produce unexpected outputs, which is what people mean when they say that AI is unpredictable. As the properties are not explicitly defined, the term “black box” is used. If an AI’s output were that a purple carrot is not a carrot, the system would not tell me that this decision was made due to the lack of orange colour. If an HR system decides that a person is not suitable for a job, we do not know which characteristics the decision is based on, or whether these characteristics are relevant at all. The decision is therefore not transparent. The field of explainable AI aims to make these characteristics explicit.
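The idea behind explainable AI can also be sketched in a few lines. Assuming scikit-learn is installed, and using invented carrot data of the same kind as above, a shallow decision tree makes its learned rules explicit: printing them reveals that the model split on hue, i.e. it learned “orange” as the defining property.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented data: features are [hue, elongation]; 1 = carrot, 0 = not carrot.
# Only hue separates the classes here (the non-carrots are also elongated).
X = [[0.08, 0.90], [0.09, 0.80], [0.10, 0.85],  # orange carrots
     [0.78, 0.90], [0.35, 0.95]]                # aubergine, green bean
y = [1, 1, 1, 0, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# export_text prints the learned decision rules in readable form,
# turning the implicit property into an explicit, inspectable threshold.
print(export_text(tree, feature_names=["hue", "elongation"]))
print(tree.predict([[0.80, 0.90]]))  # a purple carrot -> [0], i.e. not a carrot
```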

So when we say that AI outputs are not understandable, it doesn’t mean we don’t understand how AI works or that we cannot control the outputs. Automation systems have always made mistakes, and their outputs can be checked before they take effect (safeguarding). For example, ChatGPT’s outputs are always checked to ensure that they do not contain hateful messages. The autonomy of the system can also be adjusted: for an AI system that sets the temperature of a machine, an interval can be specified within which the system may operate autonomously. If the proposed value falls outside this interval, a person has to confirm the decision. What a system is allowed to do is therefore always defined by people. It is part of the design of the system, which is fully predictable and regulated.
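As an illustration of this safeguarding pattern, here is a minimal sketch; the function names, the interval and the temperatures are hypothetical. The system acts autonomously inside a predefined interval and escalates to a person outside of it:

```python
# Interval within which the AI may set the machine temperature on its own.
AUTONOMY_RANGE = (60.0, 90.0)  # degrees Celsius, hypothetical bounds

def apply_temperature(proposed: float, confirm) -> float | None:
    """Apply an AI-proposed temperature, escalating to a human if needed."""
    low, high = AUTONOMY_RANGE
    if low <= proposed <= high:
        return proposed            # inside the interval: autonomous operation
    # Outside the interval: a person must explicitly confirm the decision.
    return proposed if confirm(proposed) else None

# Usage: the "human" is simulated by callbacks that approve or decline.
print(apply_temperature(75.0, confirm=lambda t: False))   # 75.0 (autonomous)
print(apply_temperature(120.0, confirm=lambda t: False))  # None (blocked)
print(apply_temperature(120.0, confirm=lambda t: True))   # 120.0 (approved)
```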

One of the greatest challenges is the rapid development of the technology. Regulations need to be flexible enough to keep up with this dynamic situation while at the same time providing clear guidance and safety measures. Have we succeeded?

Prof. Dr. Sarah Dégallier Rochat: AI is not more difficult to regulate than medical applications, a field in which the technology also develops very quickly. The challenge with the European Union’s AI law was to find a compromise between the interests of business, politics and citizens, particularly because there was no regulation before, and because AI is not yet regulated in other parts of the world. Regulation therefore means more effort and costs for European companies and may have a negative impact on their competitiveness.


AI systems can be used in areas where a decision – or an incorrect decision – made by the system could have a huge impact on people’s lives. A good example of this is the scandal in the Netherlands, where the government used an algorithm to quickly uncover child benefit fraud. The algorithm was discriminatory, which meant 600 parents were wrongly identified as fraudsters. They were ordered to return tens of thousands of euros. Some of those affected lost their jobs and their homes, and children were even sent to foster homes. As there was no regulation, it took six years for the accusations to be reviewed. The EU AI law pursues a risk-based approach. Its objective is to take steps to minimise the risks of this kind of human rights violation. In Switzerland, it is now possible to request that a decision made by AI be reviewed by a person, which also ensures that the decision can be justified.

The EU AI law aims to minimise the risks of this kind of human rights violation.

Prof. Dr. Sarah Dégallier Rochat, Head of the Humane Digital Transformation strategic thematic field

A “risk-based approach” sounds pretty technical. What does that mean exactly?

Prof. Dr. Sarah Dégallier Rochat: The aim of this approach is to minimise risk, with human rights serving as the guideline for defining these risks. Risk-based approaches are common in the business world, so there is already a lot of experience with them. Applications are divided into different categories: the higher the risk, the stricter the regulation. The use of AI is prohibited if the risk is considered unacceptable, as in the case of social scoring, an assessment system that converts individual behaviour and social interactions into numerical scores. In China, a low score can lead to the restriction of civil rights, such as access to education or healthcare.
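A simple data structure can make this tiering concrete. The following is a sketch: the four tier names follow the AI Act’s risk-based approach, while the obligations are paraphrased from this interview rather than quoted from the law.

```python
# The AI Act's risk tiers, paraphrased: the higher the risk,
# the stricter the obligations.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring)",
    "high":         "strict requirements, e.g. disclosure of data and algorithms",
    "limited":      "transparency duties, e.g. users must know they face an AI",
    "minimal":      "largely unregulated",
}

for tier, obligation in RISK_TIERS.items():
    print(f"{tier}: {obligation}")
```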


An alternative would be a human-rights-based approach, meaning that the goal would be not only to mitigate risks but also to actively promote human rights. As the definition of human rights is far from straightforward, such an approach would be more difficult to implement. However, it must be noted that the AI law is subject to the Declaration of Human Rights, and it is therefore possible to take legal action against AI systems that violate human rights.

As a user, I’m not aware of all this taking place...

Prof. Dr. Sarah Dégallier Rochat: Anyone who belongs to a minority can be at risk from these systems, as they reflect the interests of the majority as well as the prejudices of society. For this reason, the AI law prohibits AI systems that categorise people by race, religion or sexual orientation. In addition, developers and operators of AI systems are required to reduce as much as possible the risk that these systems will make discriminatory or unfair decisions. Particularly risky applications, such as emotion recognition systems, are subject to additional transparency requirements.


According to several human rights organisations, the measures against the risks of discrimination posed by AI systems, as well as the monitoring and control mechanisms for ensuring compliance with the regulations, are insufficient. The draft law also contains a number of exceptions and loopholes that continue to allow the use of invasive, surveillance-intensive technologies. For example, AI systems developed or used solely for national security purposes are subject to less strict controls, regardless of whether they are operated by a government authority or a private company. In addition, the EU AI law allows the use of technologies such as predictive policing, live facial recognition in public spaces, and biometric categorisation against migrants, refugees and other marginalised groups.

The draft law contains a number of exceptions and loopholes that continue to allow the use of invasive, surveillance-intensive technologies.

Prof. Dr. Sarah Dégallier Rochat, Head of the Humane Digital Transformation strategic thematic field

What about AI systems that are used in everyday life, such as voice assistants or recommendation systems?

Prof. Dr. Sarah Dégallier Rochat: These systems usually fall into the minimal or limited risk categories and are therefore subject to less stringent regulations. Nevertheless, they need to meet certain transparency requirements. For example, users need to be informed when they are interacting with an AI system. In the case of high-risk AI systems, the underlying data and algorithms must also be disclosed in order to allow independent review and assessment. These measures are intended to prevent AI systems from making unnoticed and uncontrolled decisions that affect people.

There is currently a lot of talk about all the rules the law puts in place. Are there also measures that promote innovation in the field of AI?

Prof. Dr. Sarah Dégallier Rochat: The law provides for “AI sandboxes”, i.e. controlled environments in which innovative AI systems can be developed, tested and validated under real conditions. The AI law could also promote innovation by creating clear standards and legal certainty.

All of this only applies to the EU. So no changes in Switzerland, then?

Prof. Dr. Sarah Dégallier Rochat: The Federal Council will review possible approaches to regulating AI by the end of 2024 and will take action on that basis. Although Switzerland is not a member of the EU, the AI law will also have an impact on Swiss companies that interact with the EU market. Swiss companies that want to sell AI systems in the EU or provide services there must ensure that their products comply with the new EU regulations. This means that Swiss companies that develop applications that present risks for people must also comply with the security and transparency standards.


The association AlgorithmWatch CH has appealed to the Federal Council with an urgent request: protection against discrimination by algorithms must be at the forefront of upcoming AI regulation. The organisation believes that the issue of discrimination is only partly addressed by the EU AI legislation and that the draft law does not provide sufficient protection for individuals.

Although Switzerland is not a member of the EU, the AI law will also have an impact on Swiss companies that interact with the EU market.

Prof. Dr. Sarah Dégallier Rochat, Head of the Humane Digital Transformation strategic thematic field

At BFH, we are not only a knowledge partner, but also an educational institution. What does the AI law mean for the education landscape in Switzerland?

Prof. Dr. Sarah Dégallier Rochat: The education landscape in Switzerland could benefit from the new regulations as demand for specialists in AI and compliance will increase. In the future, it will become increasingly important to focus on the human factor in these developments in order to ensure that graduates have the requisite knowledge of the legal and ethical aspects of AI. The fact that the EU is dealing with the issue so extensively shows just how important it is. Research in this area is therefore key to developing innovative solutions that meet the new standards. The thematic field “Humane Digital Transformation” aims to promote research in the field of human-centred digitalisation and to transfer the results of that work not only to the classroom but also to the general public and the economy.

About us

Professor Sarah Dégallier Rochat is head of the Humane Digital Transformation strategic thematic field at BFH and a researcher in the field of human-machine interaction at the Institute for Human-Centred Engineering (HuCE). She has a BSc and MSc in Mathematics and a PhD in Robotics from EPFL. She was awarded the Industry 4.0 Shapers Award in 2019. She works to promote the development of digital technologies that cater to human needs and aim for an inclusive and fair future.
