Artificial Intelligence law: the importance of fair regulation

Podcast
November 30, 2023



Description

Do ChatGPT, DALL-E and Midjourney ring a bell? These platforms are just a few examples of the many new artificial intelligence applications currently on the market and available to all.

In recent years, AI has been at the center of many debates and discussions, especially in the academic world. As AI becomes part of our lives, both professional and personal, it is important to discuss the issues and risks associated with the explosion of these technologies, and how and why they should be regulated.

For this episode, Camille Ferrier, Vice-President, Membership, Communications and Events at the Federation for the Humanities and Social Sciences is joined by Céline Castets-Renard, professor at the University of Ottawa.

About the guest


Céline Castets-Renard is a full professor in the Faculty of Law at the University of Ottawa and holds the Research Chair in globally responsible artificial intelligence.  

Céline Castets-Renard holds a master's degree in Business Law from Université Paris 1 Panthéon-Sorbonne and a doctorate in law from Université Paris-Sud.

Her research focuses on digital law and regulation in various areas of private law, ranging from the impact of technology on contracts and civil liability to intellectual property, personal data protection, e-commerce, ethical issues in the regulation of autonomous cars, and cybersecurity.


Transcript

[00:00:00] Camille Ferrier: Welcome to the Big Thinking Podcast, where we discuss the most important and interesting issues of our time with leading researchers in the humanities and social sciences. My name is Camille Ferrier, and I'm pleased to host this episode in my capacity as Vice-President, Membership, Communications and Events at the Federation for the Humanities and Social Sciences.

[00:00:26] Do ChatGPT, DALL-E and Midjourney ring a bell? These platforms are just a few examples of the many new artificial intelligence applications currently on the market. Together, we're going to explore the issues and risks associated with the explosion of these technologies, and how and why they should be regulated.

[00:00:48] For this episode, I'm pleased to welcome an expert in the field, Céline Castets-Renard, who holds the Research Chair in globally responsible artificial intelligence and is a professor and researcher in Civil Law at the University of Ottawa. Are you ready? Let's get started!

[00:01:08] Camille Ferrier: Hi Céline and welcome to the Big Thinking Podcast. We like to get to know our guests a little before we start talking. You studied in France and are now a professor and researcher in Civil Law at the University of Ottawa.

[00:01:26] You're also head of a global Research Chair on responsible artificial intelligence. Can you tell us a little about how you came to choose this area of expertise, and a little about your background?

[00:01:37] Céline Castets-Renard: Thank you Camille for having me. I'm very happy to be on the podcast.

[00:01:42] Hello everyone. I actually did my studies in France: I defended my doctoral thesis, on intellectual property, in Paris in 2001, and then I was recruited as a professor at the University of Toulouse in 2002, where I stayed until 2019, when I arrived at the University of Ottawa.

[00:01:59] And throughout my career, in fact, my study of the law has followed the evolution of technology. At first, we were talking about computer law, then Internet law, then digital law. And now, of course, we've come to the law of artificial intelligence. Of course, this doesn't necessarily mean that specific laws are needed for each new technology.

[00:02:21] But in any case, that's precisely the question we ask ourselves every time a new technology comes along: are the laws [...] of common law, the ordinary laws, sufficient? Can the previous laws we may have had for earlier technologies be adjusted? Or are there particularities such that, in the end, we need to adopt new laws specific to the technology that has just emerged? And that is the question we're asking ourselves for AI.

[00:02:49] Camille Ferrier: Was there anything in particular that attracted you to this field of study and research?  

[00:02:54] Céline Castets-Renard: So when I was studying, and in particular when I was looking for my thesis topic and my specialization, in the end, I didn't really want to go into the classic subjects like contract law and liability law, because there were already a lot of people there, and a lot had been said.

[00:03:10] And I didn't have the impression that there were many developments, that we were really able to ask big new questions, even though the technologies were raising a lot of them.

[00:03:21] So, I wanted to go into a moving field, with constant questioning. That was my first intuition: to have a rather new field where I could build and say new things. And today, I also see the social impact of technologies, which interests me more and more, in addition to the legal dimension.

[00:03:46] Camille Ferrier: You mentioned computer law and digital law, but what exactly is artificial intelligence? Could you define it for us?

[00:03:54] Céline Castets-Renard: Artificial intelligence has many definitions.

[00:03:58] So I'm going to give one, and it's a choice I'm making. I think that what characterizes artificial intelligence today is that these are systems capable of taking autonomous action based on objectives set by human beings. But increasingly, they are also capable of identifying their own objectives, ultimately, by improving their knowledge of the environment and improving their processes.

[00:04:26] And what strikes me most today, and I think this is the most disruptive thing about artificial intelligence, is that these AI systems are now able to have cognitive functions, they can predict, make recommendations, make decisions and also generate content.

[00:04:48] And then we talk about generative AI. But it's true that, in the end, these are intellectual functions that are going to have an impact on the more intellectual professions, whereas assembly lines were automated a long time ago; we've already had industrial automation, and even automation of certain decisions [...] via software.

[00:05:09] What we understand here is that this autonomy will enable the system to learn from its environment and evolve with it.  

[00:05:17] Camille Ferrier: What exactly is generative artificial intelligence?  

[00:05:23] Céline Castets-Renard: Generative simply means that it generates content, so it creates content. And when we say content, that's very, very broad, we're of course thinking of texts and language models like ChatGPT.

[00:05:34] We're thinking of images, like DALL-E, Midjourney and so on. And we're also thinking about the generation of content like videos and music, but also code. So today we're getting a lot of questions from the field, from companies, from IT people who say: well, I've picked up a bit of code here and there.

[00:05:56] And then of course there are issues of intellectual property or, on the contrary, [...] open access and free software, where in principle you have to keep the code you've reused free, or, on the contrary, there are rules of appropriation. All these rules are going to be very difficult to comply with if you take bits of code here and there.

[00:06:20] Or if the code is generated by software like ChatGPT, we don't really know; we know that it took the code from somewhere, in particular from GitHub, a platform that brings together a great many code repositories, but in defiance of the rules applicable to their reuse.

[00:06:41] Camille Ferrier: You recently co-wrote an article in the magazine Options politiques in which you said, and I quote, "the current legal framework for artificial intelligence is scattered and insufficient", and that it's time to propose a law on artificial intelligence. Could you explain, for those listening, the context in which we find ourselves?

[00:07:04] What's at stake in the recent explosion of artificial intelligence technologies? And why do you think it's important to have regulation in this field?

[00:07:19] Céline Castets-Renard: When studying computer law, Internet and digital law, and technologies in general, the first question we ask ourselves is: what are these technologies, what are their characteristics?

[00:07:28] So that's where the previous question came from: identifying breakthroughs, new features compared to previous technologies. And I mentioned autonomy and cognitive functions, which are among these characteristics of AI.

[00:07:42] So from there, we ask [...] what's going on with these systems? Well, systems are used, for example, to help make decisions about individuals, about individual cases: to decide whether to grant a loan, to decide whether to grant a visa. They are also used to create new content, like images for example.

[00:08:05] And of course, we're going to ask ourselves whether this will affect the rights and guarantees we already have, whether it will upset the social balance we already have; that's the first question. And if there are new problems, new risks, do the rights we already have allow us to respond?

[00:08:22] On these two points, [...] AI is disrupting us. First, on respect for the rights we already have: with generative AI in particular, we see challenges in relation to the protection of intellectual property and the protection of personal data, for example.

[00:08:39] So we're saying, ah well, this AI could violate our rights, upset the balance we already have; that's the first point. The second point is whether the legislation we already have is sufficient to capture the particularities of AI, and I mentioned prediction, I mentioned a degree of evolving behaviour, of autonomy.

[00:09:01] We'll see that the principles of transparency and explainability, for example, may not be respected if we use a system that is opaque, that we don't fully understand, that will evolve on its own, etc. This then leads us to the next stage, where we say: "Obviously, not only does this violate the rights we already have, but those rights are insufficient to capture the particularity of AI."

[00:09:28] And so that brings us to the next stage: thinking about specific legislation. Among the issues to be considered and addressed there are, as I was saying, issues of transparency, or rather of the lack of transparency, of the opacity and complexity of systems, but also issues linked to bias, discrimination, and infringement of the rights we want to protect, fundamental rights in particular. That's the whole challenge we face today in the need to adopt specific legislation.

[00:10:03] Camille Ferrier: Canada has tabled Bill C-27: what does this bill consist of, what do you think of Canada's approach, and how does it differ from that of other countries, for example?

[00:10:16] Céline Castets-Renard: So Canada's approach is to look at AI systems in terms of risk, and in terms of two categories of risk: risks of harm, and risks of bias and discrimination. And when we talk about harm, we mean it quite broadly.

[00:10:33] We're talking about economic, moral and psychological damage, loss, risk of loss and so on. This seems to be quite an interesting approach. There are, however, criticisms of it, because the risks targeted in this way are more individual risks, rather like civil liability or [...] in common law.

[00:10:58] But it doesn't capture collective risks very well, for example risks to language, to culture, to fundamental rights, which can be more collective. So that's the first limitation. And then there's the reference to bias, discrimination and anti-discrimination laws.

[00:11:17] It's quite an interesting approach, but [...] the risks of discrimination are not the only risks of undermining fundamental rights. There is also the risk of undermining freedom of expression and freedom of opinion, particularly in cases of misuse, disinformation, deepfakes (hypertrucages), or manipulation of information and opinion.

[00:11:42] Here we see that it touches on rights other than just discrimination. So, we feel that this approach is perhaps a little too narrow. And there's also another limitation, which is that the bill targets only high-impact AI systems, but we don't know what the threshold is.

[00:12:04] There's no definition, there aren't really any criteria. So, these are things that will have to be clarified. The House of Commons and the committees are currently hearing from experts, and I think that clarifications will be made through amendments, through parliamentary discussions, at least that's what I hope.

[00:12:24] So those are the weak points of the Canadian regulations. But there are also strong points, particularly in comparison with the European bill, because for the moment, among the proposals that can be found internationally, Canada's and Europe's are the first pieces of somewhat broad legislation.

[00:12:45] There is already legislation elsewhere that is a little more specific, but these are the first broad frameworks. So, compared to Europe, I think the Canadian approach has an advantage in that it leaves open the hypotheses of technological evolution, of use cases and of risk.

[00:13:05] The European Union is going to be more precise, but also more restrictive: it lists use cases, and the risk is that the regulations become obsolete too quickly. That's what's at stake with these new regulations in a new field of law: finding the right balance between precision, legal certainty and understanding what's at stake.

[00:13:27] We're well aware that technology will continue to evolve and that new challenges will emerge. So there has to be a certain capacity to evolve; some people talk about legislative agility.

[00:13:39] Camille Ferrier: On the Big Thinking Podcast, our listeners include Canadian political decision-makers. If you had a message for them today, what can we do next? Where do we go from here?

[00:13:50] Céline Castets-Renard: I'll be honest about my personal position, because not all my colleagues agree with me. Many feel that Bill C-27 isn't specific enough, isn't good enough, because it defers regulation to a later date, to be handled by ISED, the Department of Innovation, Science and Economic Development. And so they say: "Oh, but aren't we writing a blank cheque with this bill?"

[00:14:18] Nothing is said in the law, it's not precise enough, it's insufficient; we can't make this delegation of power. So, for them, C-27 is not the right law, it's not good enough. For my part, I don't really want C-27 to be set aside, because if we do that, I'm afraid we'll wait another year or two before we have another bill.

[00:14:36] And that seems too late to me. So I prefer that we work with what we already have, in parliamentary session, in parliamentary committee, to improve this law. And what can we do about that? Well, there may be too much delegation to regulations, to things we don't yet know.

[00:14:53] And I think that some of the details could be in the law itself, in particular a definition of what counts as a high-impact system, the threshold at which measures must be taken and obligations met.

[00:15:04] I think that should be clarified. So, what is the level? What is the threshold that triggers the obligations? And the obligations themselves could be a little more specific. I'd also be in favor of a governance and oversight system that's a little more precise and more independent.

[00:15:24] At the moment, an AI and Data Commissioner is planned, but the position is attached to the Department of Innovation, to ISED, so will it have enough power and independence? We have our doubts, so we'd like to go further there. And I would also add that we should extend the notion of harm to collective harm and to harm to fundamental rights more generally.

[00:15:49] And I would add that taking the position that certain uses of AI are unacceptable in our democratic society and in light of Canada's values would also seem very important, to send clear signals to AI designers about what is not socially acceptable today.

[00:16:09] Camille Ferrier: You said […] that if we put Bill C-27 aside, it would be one or two years before we had a new bill. When you say that, I get the feeling that time is running out. Would you say we're in an emergency situation? Do we really need to move fast on this regulatory front?

[00:16:28] Céline Castets-Renard: Here too there's a debate. Some people say: we don't know enough about what's going on, so it's too early, we shouldn't regulate too quickly; general principles will be enough, good practices, ethics, that's enough; we don't need regulation at this stage, it's too early.

[00:16:48] Generally speaking, those who say that are perhaps the ones who don't want to be regulated themselves. I think it's time to establish a minimum base. We agree that technology will evolve and legislation will have to evolve with it, and I think it's more a question of "how do we do it?" than "should we do it?".

[00:17:06] And I think we need to be quite flexible, to be able to make legislation evolve, without claiming any certainty today, because we don't have any. So we need the means to keep up with these technological developments, so that the law doesn't lag too far behind. But I think we need to establish a framework today, and I would add that there is sometimes a double discourse.

[00:17:32] So, I mentioned two camps, but in fact, the same actors are sometimes in both camps. For example, we hear a lot from people in Silicon Valley, from OpenAI, from Sam Altman, the CEO of OpenAI, who says on the one hand "we need to be regulated, what we're doing is dangerous, there are existential risks, etc." That's what we see in the press.

[00:17:53] And behind that, there's a lot of lobbying going on right now with legislators in the European Union, saying "ah, but be careful, you're going to break innovation, you're going to break European companies." It's always the same arguments, the same rhetoric.

[00:18:14] But what's interesting is that the discourse is twofold. There's an official discourse in front of everyone saying "I'm nice, I'm willing to be regulated", which is also a way of saying "I have great technology, I'm hyper efficient, so yes, you have to be careful, etc.", and on the other hand: "don't break my business model, and be careful not to constrain me too much, either."

[00:18:32] Camille Ferrier: Yeah, it's not straightforward. You mentioned once or twice some unacceptable situations that need to be particularly regulated. Can you give us an example?

[00:18:44] Céline Castets-Renard: I'm going to rely on what Europe has in mind [...]; it's always a way to start the conversation.

[00:18:50] And there are certainly other unacceptable uses, others that should have been on the list. But it's an exhaustive list, so we can rely on it; we can start with this list. And among the uses considered unacceptable, there's anything that amounts to subliminal manipulation.

[00:19:09] That means manipulating opinions, and in particular manipulating vulnerable people such as children, the elderly or people with mental disabilities. We want to avoid people being manipulated without their knowledge into doing things they wouldn't otherwise have done and that would harm them.

[00:19:28] So that's the kind of practice we're thinking about. I don't see Europe and Canada moving towards that yet, but hey, why not say so, after all. And I would add another very interesting example, because for the moment there is no compromise on the European text, which has not yet been adopted; there are still a lot of negotiations and debates within the European institutions.

[00:19:50] That example is facial recognition in public spaces used by police forces. Here we have completely different versions from the European Parliament, the Council and the European Commission.

[00:20:02] And it's very interesting because the Council is defending the member states and obviously, the states want facial recognition for their police. So they're more interested in extending the exceptions to the principle of banning large-scale facial recognition in public spaces.

[00:20:21] So they're saying "ah yes, but we still need exceptions", and then they try to extend them. The European Parliament, for its part, would like to ban facial recognition in public spaces by police forces, and would even like to ban facial recognition in general.

[00:20:37] So it's a tug of war, isn't it? And the European Parliament is directly elected by the citizens, so its position is more about satisfying the citizens. I don't know what will come out of the trilogue, which is the meeting of these three institutions. The last meeting takes place on December 6.

[00:20:53] We'll see whether we reach a compromise or not, but it's one of the most controversial subjects in this European project, one of those on which there is the most disagreement.

[00:21:04] Camille Ferrier: You talked about upsetting the social balance. I'm curious: what are the possible repercussions on our daily lives, in the personal sphere?

[00:21:15] For you, me, my family, my friends, in our work: do we have to worry about anything, especially in the very near future? Violations of our privacy, of our personal data, of access to information? Can you talk about this on an individual level?

[00:21:30] Céline Castets-Renard: From the moment AI spreads to all areas of social and economic life, I think we need to be concerned, or at least vigilant.

[00:21:37] We should assume that more and more systems are automated and incorporate AI to a greater or lesser extent. But in the end, I'd say that even if a system doesn't incorporate AI, depending on how it's designed, there are still a certain number of risks for privacy and the protection of personal data, and risks of bias and discrimination, because very often these systems are trained on data sets that aren't necessarily representative of society as a whole.

[00:22:07] We can see that there are gender biases, race biases, sometimes socio-economic biases, so anyone can ultimately be affected by an AI system, or by an automated system more generally, that decides badly, that makes a bad decision on the basis of poor or biased training.

[00:22:29] We have to worry, I think, about these risks of discrimination. As for the risks of invasion of privacy and of personal data, there's always the question of how much information is being captured about me, and whether that still respects the principles of necessity, proportionality and purpose limitation, which are key principles in the use of personal data. Very often these systems, which are very data-intensive for both training and deployment, are completely at odds with these principles.

[00:23:04] When you think of a language model trained on billions of data points and parameters, it means you have to capture a lot. So we're not at all in the logic of data minimization, purpose limitation and control that this legislation was trying to implement, and I'm already speaking almost in the past tense.

[00:23:24] In any case, their purpose is to give individuals a certain amount of control over how their data are used. But as time goes on, with the deployment of AI, we have fewer and fewer guarantees. So we really need to couple AI regulation with respect for personal data protection legislation.

[00:23:40] And then at work, too, we see AI being deployed more and more. Without necessarily going into all the risks and issues that exist, such as the issue of replacement, even without going that far, we're going to have to start interacting with machines that incorporate more or less AI.

[00:23:57] That means understanding what the machine can do, but especially what it can't do; understanding what you can rely on and what you can't, what you can and can't trust. Human control is important, but only if you understand and can really exercise control.

[00:24:12] So that's an important issue. It's not just "oh yes, a human looked at it, ticked the box and that's fine" without really understanding what the system has done, without understanding the consequences. That doesn't work; you really have to go further and be in intellectual control of what's going on, even if you don't necessarily understand why the system made the decision it did.

[00:24:32] But at least we should be able to review the decision, challenge it, and have enough information to say "well, this decision for this person isn't normal, so I'm going to review it"; that's important. AI concerns everyone, so don't think it's a subject reserved for experts, that you need years of technical skills in data science and so on.

[00:24:56] That's only true for the creation and design part of AI. The social, societal issue today is that AI is going to be applied to everyone, without us always being aware of it. So there's also the question of transparency, of knowing whether an AI system has been used, and so I think we need to demand more knowledge, more understanding, more transparency on the use of AI.

[00:25:18] And that's something anyone can ask for. We're not asking for technical explanations that we wouldn't understand; what we want is to know why AI is being used, why it performs better than humans doing the same task, and so why it's worthwhile to have AI.

[00:25:34] And what measures have been taken to minimize certain risks? What understanding can we have of this, and what individual recourse, or at least what individual knowledge, can we have to contest a decision? These are global social issues, in the sense that they really concern everyone, and everyone needs to get to grips with them.

[00:25:56] We mustn't let the Silicon Valley narrative tell us about existential, very serious risks, saying "oh la la, what we've done is very dangerous" and at the same time "don't worry, we'll take care of it". No, no: we shouldn't let these experts talk only amongst themselves and write op-eds. That's all very well, I do it, I sign them, but this is everybody's subject.

[00:26:19] Don't hesitate to get informed, to come and ask us questions. I respond to all requests [...] from the media, because we need to talk to everyone about it and get everyone on board.

[00:26:29] Camille Ferrier: OK, thank you very much Céline, it's really important work that you're doing, and [...] it was really an honor for us to welcome you to the Big Thinking Podcast. Thank you again.

[00:26:42] Céline Castets-Renard: The honor is mine, and the pleasure was mine. Thank you very much for having me.

[00:26:47] Camille Ferrier: Thank you for listening to the Big Thinking Podcast, and thank you to our guest Céline Castets-Renard, professor and researcher in Civil Law at the University of Ottawa. I would also like to thank our friends and partners at the Social Sciences and Humanities Research Council and the production company CitedMedia, without whom this podcast would not be possible.

[00:27:06] You can find all episodes of the Big Thinking Podcast on Spotify, Apple Podcasts, Google Podcasts or your favorite podcast platform. Let us know what you thought of this episode by connecting with us on social media. À la prochaine!

Follow us

Spotify
Apple Podcasts
Google Podcasts
Amazon Music
Podcast Addict
iHeartRadio
Podfriend