How worried should we be about ChatGPT?

Podcast
April 19, 2023


Description

Artificial Intelligence is on the rise, and ChatGPT is one of the most prominent examples of a new technology that is changing our lives. But what do we really know about ChatGPT and how is it affecting higher education and research? 

Gabriel Miller is joined by Lai-Tze Fan, Assistant Professor of Technology and Social Change in the Department of Sociology and Legal Studies at the University of Waterloo.

 

About the guest


Lai-Tze Fan is an Assistant Professor of Technology & Social Change in the Department of Sociology & Legal Studies at the University of Waterloo.

She holds a Master’s in English and Film Studies from Wilfrid Laurier University and a PhD in Communication and Culture from York University and Toronto Metropolitan University.

Lai-Tze’s research and teaching interests include interactive and digital storytelling, research-creation and critical making projects, systemic biases in technological design, media archaeology, the Anthropocene and sustainability, digital and “smart” culture, critical infrastructure studies, and the digital humanities.

Transcript

[00:00:00] Gabriel Miller: Welcome to the Big Thinking Podcast, where we talk to leading researchers about their work on some of the most important and interesting questions of our time. I'm Gabriel Miller, and I'm the president and CEO of the Federation for the Humanities and Social Sciences.

[00:00:22] Artificial intelligence is on the rise, and ChatGPT is one of the most prominent examples of a new technology that is changing our lives. But what do we really know about ChatGPT, and how is it affecting higher education and research? Today I'm joined by Lai-Tze Fan, Assistant Professor of Technology and Social Change in the Department of Sociology and Legal Studies at the University of Waterloo.

[00:00:51] Let's talk a little bit about the moment we're in right now, and nothing is more of the moment than ChatGPT. Let's just start with some basics because as much as everyone seems to be reading about this on the internet and in the newspaper, we probably have different levels of understanding about what we're all talking about here.

[00:01:17] So this year there's been a momentous sort of development when it comes to this application of artificial intelligence known as ChatGPT. Can you just describe what it is for us briefly? 

[00:01:31] Lai-Tze Fan: Sure. ChatGPT is a large language model. It's an AI system that has been trained on a gigantic corpus of trillions of words.

[00:01:43] The words are weighted differently, so some of them have specific significance that the designers, the data scientists, have decided need to be foregrounded in the construction of sentences. I should maybe start by saying that ChatGPT is possible because of a branch of computer science called natural language processing, which allows computers to recognize patterns in language and to reproduce them with the same syntax, the same grammar.

[00:02:12] And that's why something like ChatGPT or Bing, for instance, and these other large language models that are coming out, are able to replicate human language so well: they're basically identifying the formulas of language and reproducing them.
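
To make that pattern-matching idea concrete, here is a toy sketch, my own illustration rather than anything from the conversation: a tiny bigram model that learns which word follows which in a small corpus, then generates text by replaying those patterns. ChatGPT uses neural networks trained on vastly more data, but the underlying principle of learning and reproducing the statistics of language is the same.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the "gigantic corpus" an LLM is trained on.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Training: record which words follow which (the "patterns in language").
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

# Generation: reproduce the patterns by sampling observed continuations.
word = "the"
output = [word]
for _ in range(8):
    options = follows.get(word)
    if not options:  # no observed continuation; stop
        break
    word = random.choice(options)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug"
```

Even this toy version produces grammatical-looking fragments without understanding anything, which previews the point made later in the conversation about meaningful-looking language versus meaningful content.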

[00:02:29] So in that way, they're mirrors for language. But the ways in which they're trained, I think, are just as interesting: the ways in which they're trained and what they're trained with. The corpuses, or what you could also call the data sets, of language that are being used to train a system like ChatGPT are, first of all, secret.

[00:02:50] The company OpenAI is private about what its training methods are: the algorithms, of course, but also what it has included in those corpuses. There has been sufficient research done to show that certain corpuses will be included. For instance, Wikipedia is definitely in there as training data, and something like Wikipedia, which is a fairly reliable source or resource of information, will be weighted more heavily than, let's say, random internet searches and findings.
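
That kind of weighting can be pictured as sampling trusted sources more often when assembling training batches. A minimal sketch of the idea follows; the sources and weights below are invented for illustration, since OpenAI's actual data mix is not public.

```python
import random

# Hypothetical sources and weights; OpenAI's real mix and ratios are secret.
sources = ["wikipedia", "curated_books", "raw_web_crawl"]
weights = [5, 3, 1]  # a more reliable source is sampled more heavily

# Draw a training batch: heavier-weighted sources show up more often,
# so their language patterns leave a larger imprint on the model.
batch = random.choices(sources, weights=weights, k=12)
print(batch)
```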

[00:03:27] I should also say that GPT-4 is now also trained to integrate computer vision, which is a form of computer science in which computers are able to recognize objects, so it will actually tell you what's going on in an image as well.

[00:03:41] Gabriel Miller: Oh Lord. 

[00:03:43] Lai-Tze Fan: Yeah.
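
Object recognition of the kind described here is well established outside GPT-4. As a point of reference, here is a minimal sketch using a pretrained torchvision classifier; this is my own example, not OpenAI's system, and "photo.jpg" is a placeholder for any ordinary image file.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# A model pretrained to recognize 1,000 everyday object categories.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

# "photo.jpg" is a placeholder path.
image = preprocess(Image.open("photo.jpg")).unsqueeze(0)

with torch.no_grad():
    probs = model(image).softmax(dim=1)

best = probs.argmax().item()
print(weights.meta["categories"][best])  # e.g. "tabby cat"
```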

[00:03:45] Gabriel Miller: Okay, well, before we get to what's next, because of course so many people are having trouble processing what's already happened, I want to just make sure I understand. Based on what you've said, part of the reason, or maybe the reason, why this has struck people as being so different than what we've encountered before is that it's this marriage of an ability to search and access massive quantities of data with an ability to answer and create in language.

[00:04:18] To either speak directly back to us when we ask questions, or to create stories or accounts or summaries of information, in a way that something like Google, while it can search all sorts of information for us, never could communicate or create in a language form. Is that fair?

[00:04:39] Lai-Tze Fan: I think it is. In terms of its ability to assess what information is relevant, it's very similar to Google, but the difference is definitely the representation of how that's done. As you've said, it takes the form of discourse as opposed to tiered search results, the first page of hits, for instance.

[00:05:01] So that's where the interesting twist is, I guess, with something like ChatGPT, because it can communicate in ways that look like meaningful language, which often gets mistaken for meaningful content. It's not necessarily meaningful.

[00:05:17] Gabriel Miller: So it seems basically that once the world saw ChatGPT emerge a few months ago, most of us have been playing a sort of game of "how panicked should I be?"

[00:05:30] I haven't decided personally yet how panicked I'm gonna be. I've found talking to you and listening to you somewhat reassuring, but I wanna make sure that's based on actual understanding. It sounds like you're saying people shouldn't be making the mistake of thinking that this is sentient, that it is going to achieve, or is close to achieving [...]

[00:05:51] A sort of self-awareness and an ability to form motivations and, I guess, decisions based on motivations. Is that a fair assessment of how you're viewing this and, I guess, what the facts are telling us?

[00:06:04] Lai-Tze Fan: Absolutely, and I think one way to describe this is to explain the difference between the AI we have now and the AI we could describe as potentially sentient, as potentially passing a Turing test.

[00:06:16] For instance, the test, the thought experiment, by Alan Turing in which a computer could trick a human into thinking that it's actually human. Arguably, for those who think that ChatGPT is humanesque, we could say that there's a potential for passing. But let me back up and say that the type of artificial intelligence that we're dealing with today, all existing artificial intelligence, is described as artificial narrow intelligence.

[00:06:43] It is there to complete a task. It has specific objectives, very similarly to the AI assistants that I look at all the time, like Siri and Alexa: they perform tasks that get things done. ChatGPT is a much more sophisticated version of that. You ask questions, it gives you the answers. But they're not the kind of AI that, for instance, we see in sci-fi, which is called artificial general intelligence.

[00:07:09] It doesn't exist yet. We should perhaps point out that companies like OpenAI are very outspoken about how they'd like to be a part of manifesting it. That's interesting, especially because of the potential power that would permit a company like that. And for me, that enters into issues of regulation and what kinds of possibilities we should allow or not allow corporations to have.

[00:07:39] Just as an example, and I'm not the first to say this, but we don't let oil companies self-regulate, so it's a little strange that we're allowing tech companies to do a lot of decision making without stepping in and saying yes, but not beyond X or Y parameters. I think part of it is just that governance and legislation cannot catch up with the speed at which things are changing; things are happening very quickly, especially with more investor interest.

[00:08:13] Gabriel Miller: Are there potentials in this technology that actually excite you, or that you feel there's real potential to pursue for constructive ends?

[00:08:23] Lai-Tze Fan: Yes, I think it will require a lot of tweaking, and this will be an issue in regard to how quickly people are willing to adapt. [...]

[00:08:36] I don't think everyone will be on board, even with potential constructive outcomes of this AI system or many other AI systems. So when we're talking about automated systems in particular, a lot of AI are used in everyday life. Not all of them are on the same level of intelligence, but self-checkout counters, robot vacuums, voice assistants that are on our personal devices and in our homes, they're all forms of AI that we increasingly rely upon.

[00:09:07] And I would say where people might have some resistance is when these systems start to replace certain kinds of jobs. In regard to constructive ways to use AI, especially something like ChatGPT, we could think of it as a potential collaborator. A paper I delivered in January, as a performance piece, I quote unquote "co-wrote" with ChatGPT, in that I fed in parts of my paper and some of my thoughts.

[00:09:41] And it gave me some feedback and we kind of went back and forth. Sometimes I asked for background information on artists, including works that they may be known for. I see that as potentially constructive, mostly because we're not gonna get rid of these AI systems; ChatGPT is not going anywhere.

[00:09:57] And if I don't account and adapt for the ways in which it is changing how we think about information and the representation of information and where it comes from, including which sources are valid or trustworthy, then I think we're potentially making a mistake in regard to the ways in which people will adjust, even if it's not institutionally or formally.

[00:10:24] Our expectations will change. So in that regard, not accounting for the ways in which that can impact, for example, the work industry, or even assessment in curricula, allows those things to be used against us, especially as they move on and on. My objective with collaborating, I guess, was to show that maybe, in the same way that after photography was invented,

[00:10:51] painters had to rethink what painting was by saying, maybe it's not realism now, cuz photography seems to do that very well. Not to say that everyone stopped being a realist painter, but maybe it's time to explore something like cubism, to see what else can be done with painting itself.

[00:11:08] So I know that there were questions about, for instance, the future of the essay, the academic essay, the college essay. And I'm not suggesting that everybody adapt and integrate ChatGPT, but understanding that conditions will change as we think about information and also knowledge differently, I wanna, almost tongue in cheek, embrace that a little bit.

[00:11:32] Maybe it was a little much at the time, but my point is not to say that we are writing a better essay, me and ChatGPT. It was to say that we have the potential to rethink what an essay does or what it is, especially now at this moment. What does it mean now to write an essay? That doesn't mean that I did it the correct way; it's just to show that this is a potential, and for me that conversation is constructive, to not shy away from that conversation.

[00:12:02] Yeah, I wasn't trying to poke fun at anyone, but more so maybe poke fun at myself. The way I end the paper is to have a slide of me asking ChatGPT, who's Lai-Tze Fan (me), and ChatGPT says: I don't know who that is. That doesn't seem to be a very important person. I'm using it as a way to show, not that ChatGPT is behind the times,

[00:12:23] and also not that I'm not taking it seriously, but that I'm conscious of my ongoing and dynamic relationship with these technologies as they continue to help me navigate my own research questions, my questions about how we think and what we're gonna do with all these tools that we're making.
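
The back-and-forth described here can happen in the chat interface itself, or it can be scripted. A minimal sketch, assuming the official OpenAI Python client and an API key in the environment; the prompts are placeholders, not the actual paper from the performance piece.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Keep the whole exchange so each reply can build on earlier turns.
history = [{"role": "system",
            "content": "You are a critical reader giving feedback on an academic paper."}]

def ask(text: str) -> str:
    history.append({"role": "user", "content": text})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# A placeholder back-and-forth in the spirit of the collaboration described above.
print(ask("Here is a paragraph from my draft: ... What would you push back on?"))
print(ask("What background can you give on the artists I mention?"))
```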

[00:12:41] Gabriel Miller: Yeah, and I think that there are questions that have come up at moments in history when technology changes people's identity and their role and what they do and how they have to do it. My feeling about why this one has resonated in the way it has is that, at least in my case, I feel it touches on my world in a way that a lot of those other advances maybe haven't.

[00:13:11] The assembly line has been completely reshaped by automation and then robotics. But those were in many cases blue collar jobs that were displaced, and now we're seeing people who, whether you want to call it white collar or office work, folks who probably thought in a certain way that they were outside of the reach of this, or certainly outside its reach within this timeframe, are suddenly thinking, wow, what could this mean for me?

[00:13:45] What could it mean for my kids? I'm really interested in knowing: have you had opportunities to talk with students about ChatGPT? And if so, what kind of feeling do you get from young people? Are they reacting to this in a way that's anxious and worried, or are they taking it more in stride?

[00:14:04] Lai-Tze Fan: Great question. I think it depends on what their expectations and existing relationships are. I will say that there's a little bit more explicit concern from the humanities students, for whom a lot of what ChatGPT does seems to resemble the work that they do or want to do, including writing, journalism, scholarship, even art.

[00:14:31] For people in fine arts, I know that's been a big deal for artists, not ChatGPT, but Midjourney and DALL-E, and also Lensa, I think, is the other one: AI systems that can replicate artistic styles. That's been a really big deal with copyright, and there is concern across the board in regard to the automation of that kind of work.

[00:14:53] I will say, in the classroom itself, as a suggestion for a modified way of assessment: if ChatGPT is producing B essays, or if it's producing workable foundational [...] programs for coding projects, then why not give those foundations to students and ask, for instance with the essay, okay, this is a B, turn it into an A plus, which is something ChatGPT can't do.

[00:15:22] You can't ask it to improve upon its own model except to a certain degree. It doesn't have the variation and the style and the rhetorical methods that a human can have, and it doesn't have, potentially, the [...] ability to make someone feel something when they're reading, as opposed to just relaying information.

[00:15:42] And with code, it's a great starting place, but if I were to teach creative coding, as I do, maybe I'll just say: personalize it. This is just the base model, as I was mentioning with the websites that I used to build with friends in undergrad, or sorry, in high school, that we just personalized to ourselves.

[00:16:00] We take base code and just say, how do I make this look more like me? And it's great that I have the foundation, but that's often how learning happens: I take a model and I improve upon it with my signature, more humanistic stamp, so it feels like something that a human wrote. I guess there are other grounds there for thinking about whether or not something passes a Turing test, but beyond the classroom, I think students are concerned.

[00:16:29] I try not to encourage deterministic thinking, and I want them to know that there is still always a role for their critical thinking, which is not going to be exhibited in ChatGPT, including critical thinking about the tools themselves, rather than leaving it to OpenAI to make all those decisions.

[00:16:50] Gabriel Miller: So let's talk about the nature and the design of our technology and the tools that we use.

[00:16:57] You've said that these artificial intelligence tools end up being mirrors of ourselves, and I'm really interested in this idea that the technology reflects the people who are designing it, and it will reproduce their biases. It will reflect their priorities, it will reflect their view of the world. This is a question you've looked at not just in the context of what's happening today, and I wanted to start just by asking you conceptually: this notion of gendered design, what is it?

[00:17:32] Lai-Tze Fan: Gendered design is the deliberate design of not just technological products but, I'd say, cultural products, that has in mind a modeling of that object after traditionally gendered roles or subjects or bodies or forms of labor. You mentioned, yeah, that it's a mirroring.

[00:17:55] If I could just step back to the larger question about all these different forms of design that are modeled and mirrored after our society: you also mentioned the difference between blue collar jobs, as automated essential labor, versus white collar jobs. I just wanna point out that we've been living with self-checkout counters and robot vacuum cleaners and AI assistants.

[00:18:24] And this is something I picked up on with a professor at the University of Victoria, Dr. Jentery Sayers. He pointed out to me also that the type of labor is what matters here when we are starting to express technophobia or techno-anxieties, cuz, and a nod to him here, we didn't seem to care when it was blue collar jobs that were being automated and replaced.

[00:18:48] "We" being a proverbial we; I actually don't wanna generalize for everyone. It was only when jobs like journalism, white collar jobs, started to be at risk of being replaced by these more sophisticated forms of AI that we started to question, on a larger scale, especially in media, whether or not we'd gone too far, or what too far might be.

[00:19:11] But not if it's jobs that we consider to be quote unquote "below our pay grade." That to me is a mirror of the models of exploitation that are already the sort of cultural logic, the foundation, of techno-capitalism. And here I'm riffing off of another Canadian, Dr. Sarah Sharma at the University of Toronto, who has written upon this extensively in regard to gender.

[00:19:34] So for myself, with gender, I'm very much concerned about the ways in which a long history of exploiting women's labor has continued on in the abstraction of that labor and the performance of it with different machines.

[00:19:49] Gabriel Miller: I'd love to talk about an example that really stands out for you. 

[00:19:53] Lai-Tze Fan: Primarily AI assistants such as Amazon's Alexa and Apple's Siri; that's the most obvious one to me.

[00:20:02] Gabriel Miller: Right down to the names and the voices. 

[00:20:05] Lai-Tze Fan: Down to the names. Alexa probably refers to the Library of Alexandria, the library of all knowledge, and Siri means a woman who will lead you to victory in Norse. Some of them are not explicitly gendered in name, like Google Assistant and so on and so forth, but sometimes it's voice.

[00:20:22] Or aesthetics, or even discourse: diction, choice of words, the ways in which they'll respond to certain things. In some of the research that I've done with Alexa, for instance, if you use, as some users have, sort of flirty language, bordering for some people in some experiments on verbal sexual harassment, the responses are often much more gendered in a way.

[00:20:47] Siri, and I'll use this pronoun deliberately, seems to think of "herself," or Alexa seems to think of "herself," as female-presenting, even though Apple and Amazon do not ascribe pronouns to her. I think the design there is still deliberate. And as just another example of this exploitation: the UK Siri is a very posh-sounding man.

[00:21:13] Which to me suggests that it's modeled after a butler. 

[00:21:18] Gabriel Miller: Is that right?

[00:21:18] Lai-Tze Fan: Yeah, it's a classist issue all of a sudden, not necessarily a gendered issue. I guess it's the type of figure that it's standing in for, what they're imagining, if it's gonna do housework for you.

[00:21:30] So if it's watching after your child, taking care of transactions and other domestic duties or secretarial work, who do we imagine is doing that work? And if it's female-presenting, there's my answer, versus the sort of [...] figure in the UK.

[00:21:47] Gabriel Miller: What I'm struck by when we're talking about this is this kind of mixed relationship we have with this technology: the way it can reproduce biases and inequities in our society.

[00:22:02] And then, at the same time, particularly I think when it comes to the Internet, the potential for it to bring down certain barriers or to liberate people from certain circumstances. And I'm thinking, for example, of the electronic novel, which, from what I understand about your work, ended up being a vehicle for women writers in particular, who couldn't get published in the established system, to take their writing and their publishing into their own hands. Can you just tell us a little bit about that?

[00:22:37] Lai-Tze Fan: This is something that's really important to me. Yeah, I learned a lot about this from one of the pioneers of electronic literature, Marjorie Luesebrink, who goes by the pen name of M.D. Coverley.

[00:22:51] That was a time when you still wanted to write under a gender-neutral name, so she taught me a lot about publishing and being a part of the publishing industry in the sixties. But she and so many other women, Shelley Jackson, Stephanie Strickland, Judy Malloy, shifted to online spaces in, let's say, the late eighties and early nineties, when they could use computers to write things on their own terms and to self-publish, or to publish with an early electronic literature publisher, Eastgate Systems.

[00:23:23] Yeah, there were just not so many gatekeepers.

[00:23:26] Gabriel Miller: You have told us that you urge your students not to think deterministically about the changes in technology, which for me means: let's not make the mistake of thinking this has all been written for us on tablets and set in stone; we have the power to choose and influence where these technologies go and how they affect our society.

[00:23:54] We can see the potential for new technology to liberate and create opportunity, and even to counterbalance inequities, when we talk about something like electronic literature. And we see the tendency for these technologies to reproduce the same biases that we have been burdened with or that have marked our society in the past.

[00:24:19] For you, as someone who teaches and thinks about technology and its relationship to our lives, especially at this moment, what's important for people to know or think, or what's the message you'd leave them with, about what we need to know about these new technologies, so that we can go in a direction of our choosing rather than finding ourselves just pulled in a direction that we might not want to go?

[00:24:50] Lai-Tze Fan: That's a great question, particularly because the tools that we'll be given, and when those tools are presented to us, are not always up to us, because of developments in technology. So are we gonna see the end of large language models anytime soon? Absolutely not.

[00:25:09] So what do we do in that sense? I think for me, the biggest prompt and encouragement and suggestion I would have is for people to try to learn more and to self-educate, and also to seek out valid resources for information about what these tools are, what AI is, and what companies are proposing to do with it.

[00:25:33] A lot of this has to do with effort, and for researchers a lot of this has to do with knowledge mobilization, not keeping those secrets among us. This is a prompt both for an everyday audience and for researchers: to make more of an effort to sort of meet in the middle and talk about what changes are being made and how they will impact people, but also how to do that as responsibly as possible, so that those conversations, with expertise undergirding them, can happen in a way that is inclusive,

that doesn't exclude certain voices and people when trying to make design decisions going forward. I'm just thinking about the ways in which putting a lot of pressure on some of these big tech companies has made a difference. And a lot of that is about scholars pushing the boundaries, but also people learning about what the implications are.

[00:26:24] Trying to be as aware as possible that these things are not out of reach, that we can learn about them, is a really important way to understand that we can potentially shift the directions of those tools as they go forward. But it certainly doesn't come from things like, I think, one-sided fear baiting, that sort of thing.

[00:26:48] I think that's where we could run into some issues. Legislation and regulation are gonna be a big part of that, both within those companies and from external sources, but it has to be done in a way where everybody's actually getting to talk. And I know that's also really difficult.

[00:27:10] Gabriel Miller: Thank you for listening to The Big Thinking podcast and to our guest, Lai-Tze Fan, Assistant Professor of Technology and Social Change in the Department of Sociology and Legal Studies at the University of Waterloo. I also want to thank our friends at the Social Sciences and Humanities Research Council whose support helps make this podcast possible.

[00:27:31] Finally, thank you to CitedMedia for their support in producing the Big Thinking Podcast. Follow us for more episodes on Spotify, Apple Podcasts and Google Podcasts. Until next time!

Follow us

Spotify
Apple Podcasts
Google Podcasts
Amazon Music
Podcast Addict
iHeartRadio
Podfriend