AI in the media industry: Changing practices and big questions

June 27, 2025 | 01:06:55

CPI Podcast
Show Notes

In this episode, the Community Podcast Initiative (CPI) presents a panel discussion in which four diverse voices from the media and podcasting landscape consider the impacts of artificial intelligence (AI). From repercussions in the classroom to its sometimes invaluable benefits in the journalistic field, this episode explores how AI is changing standards and practices across fields.

CPI's Meg Wilcox is joined by Amanda Cupido, Anis Heydari, Mia Lindgren, and Tim Magee. Meg moderates this panel of professionals through thought-provoking prompts and questions on the ever-evolving topic of AI.

This series is a collaboration between the Community Podcast Initiative at Mount Royal University and J-Source.

To learn more about the Community Podcast Initiative, you can visit the website at thepodcaststudio.ca or on socials at @communitypodyyc.

To learn more about J-Source, you can visit the website at j-source.ca or on socials at @jsource.


Episode Transcript

[00:00:04] Speaker A: Welcome to the Community Podcast Initiative, where we explore diverse and inclusive ways of audio storytelling. Our goal is to connect community members through sound while providing an alternative for those underserved or misrepresented in traditional media. The CPI is based out of Mount Royal University on Treaty 7 territory. I'm your host, Meg Wilcox, and in this episode, I sit down and chat with four media professionals to answer some big questions about how artificial intelligence is affecting the industry and standards of practice. In this conversation with Amanda Cupido, Anis Heydari, Mia Lindgren, and Tim Magee, we'll talk about how AI is being used by journalists in the field, how it's affecting jobs and careers, and how institutions are starting to train future journalists with these tools. So, hi everyone. Thank you so much for being part of our panel today, which we're calling AI in the Media: Changing Practices and Big Questions. You're all experts in your field and you bring such different perspectives, you know, a mix of practice, education, sometimes a bit of both. And so I'm really looking forward to hearing from you today and learning from you. I'm Meg Wilcox. I'm an associate professor with Mount Royal University's Journalism and Digital Media program. I'm also co-director of our Community Podcast Initiative. And you may have guessed, I may be moderating today's session. I am moderating today's session, and I was hoping that after I introduce each of you, you could take about a minute or so to share an opening thought about what's front of mind for you right now in terms of AI, sort of what you're seeing, what you're thinking. And so, yeah, give us your thoughts. We'll go alphabetically, so we'll start with Amanda. Amanda Cupido. I did. Yeah, that was. Right, never mind. Amanda Cupido is an award-winning podcast producer, TEDx speaker, author and entrepreneur.
She's the founder and CEO of Lead Podcasting. She's also an adjunct professor at Seneca Polytechnic School of Media, and she teaches a new class called Generative AI for Communicators. Amanda also has two Amazon bestselling books, Let's Talk Podcasting and Let's Talk Podcasting for Kids. Thank you so much for joining us, Amanda. And what's on your mind right now with AI? [00:02:16] Speaker B: Thanks for having me. Well, you just mentioned that I'm teaching this class, Generative AI for Communicators, and we just wrapped up our semester. So of course I've been really experimenting a lot with this world. But the thing that excites me the most is the capabilities around translation. And so I've been seeing a lot in specifically the podcast space where I work, but also beyond, just with audio and voice. But the translation capabilities are pretty special, and it's allowing us to have a more global market when it comes to content creation. And I think as English-speaking content creators, sometimes we think that the content we're making is king when there's so much other incredible content that we can't even really engage with fully because we don't understand it, and vice versa. Some of the content that we do make can't reach corners of the world or isn't able to be engaged with the same way because of a language barrier. And so I've worked on several podcasts specifically where we've been able to translate it into different languages because of generative AI, and we would not have been able to do so otherwise. And it's just the possibilities and accessibility that come with all of that that are really exciting to me. [00:03:36] Speaker A: Thank you. And next we have Anis Heydari. He's the prairie director for the CBC branch of the Canadian Media Guild, which is Local 30213 of CWA Canada. He recently served as the subject matter expert on AI as his guild bargained a new contract with the CBC.
And he helped win one of the first agreements in Canadian media that gives workers a say in how AI can be used when it comes to their likenesses. In his day job, you may have guessed, he's also a reporter. He's a national senior reporter covering business and economics for CBC News. What are you thinking around AI right now, Anis? [00:04:12] Speaker C: With both my hats on, with the union hat and with my reporter's hat, I'm paying attention to what it means for people's jobs, the economic activity it can generate, the new things that we can do, or the work that can maybe be redirected elsewhere. If somebody doesn't have to spend their time, as Amanda mentioned, translating, for example, or transcribing something, maybe that's work that can be better put to use elsewhere. So that can generate a lot more economic activity. But from a worker's perspective, probably my main concern is actually what does it do to people's jobs? Does it replace humans with automation? Is it accurate? Does it lose nuance if we have a computer doing these things? Who decides and programs the computer? If we're talking about things like transcription or translation, does it make things less accessible for people who don't speak English or any other language in a certain way, or don't understand the spoken word in a certain way? So I carry a lot of concerns. I guess my eyebrows are raised and I make a lot of Marge Simpson noises at what we see coming these days. [00:05:24] Speaker A: Well, we'll talk about those Marge Simpson noises, or maybe the questions might be more appropriate. But next we'll move on to Dr. Mia Lindgren. She's currently at Mount Royal University as one of our visiting international scholars. And she's a leading international podcast researcher and adjunct professor at RMIT University as well as the University of Tasmania in Australia. She's formerly a journalist with the Swedish and Australian public broadcasters.
And Mia creates interdisciplinary and practice-based research through award-winning podcasts, documentaries and websites in partnership with academics, media organizations and audio producers. And Mia, you wear so many hats in your work. What are you thinking about right now with AI? [00:06:08] Speaker D: Oh, thanks very much for having me on this. I'm very excited to be in this room together. What I've been thinking about is really the podcast medium as such. Much of the research about podcasting is about intimacy, authenticity, and how podcasting creates this parasocial relationship with the listeners or, more recently, the viewers as well. So what will happen when we've got more AI hosts, for example? Does that matter to the listeners and the viewers, and the kind of importance of this relationship that we all talk about? What will happen with that when we get more AI involvement in podcasting? [00:06:58] Speaker A: Very interesting, and we'll talk about that in a second. But we also, last but not least, want to introduce Tim Magee. He's an instructional designer at Mount Royal University and has nearly 30 years of experience in higher education, which includes leadership, faculty and administrative roles. He holds an interdisciplinary master's degree in adult education and organizational leadership. And Tim's work right now is focusing on faculty development, course design, and the ethical integration of generative AI in teaching and learning. So, Tim, what are your thoughts right now around AI, or what's on your mind? [00:07:33] Speaker E: Yeah, AI at this moment in higher education, and education in general, is really posing an existential challenge because it produces human-like output that blurs the line between original work and machine-generated content. And it's really forcing us to rethink not only how we assess student learning, but also what it means to create and own and share intellectual work in an academic context.
It's really raising complex questions about authorship and copyright and scholarly integrity, especially as students and researchers alike are integrating AI into writing and media and research production. And at Mount Royal, I mean, not just Mount Royal, I think all institutions are wrestling with this currently. We're focused on developing ethical, intentional approaches to teaching with AI so it supports learning as opposed to undermining it. But it's a complex landscape and it's moving very quickly. Like, every day is something new. [00:08:40] Speaker A: Absolutely. And I think there's a lot of crossover there for those who are making media, too. Right. That idea of, you know, authenticity and ownership. And I'm sure we'll get into that. Before we talk a little bit about how people are using AI right now, or how it's being used in the industry, Mia and I were chatting just before we started this recording session about an example of a generated AI voice that's actually becoming a podcast. And, Mia, I don't know if you want to explain that and walk us through that. [00:09:12] Speaker D: Okay. And of course, there's lots of these now, but I think a really famous one to start with is the podcast called Virtually Parkinson. And for those of you who know who Michael Parkinson was, a very famous British interviewer who interviewed over 2,000 people, including really famous people like Muhammad Ali, Billy Connolly, Paul McCartney, Madonna, various prime ministers, etc. Now, Parkinson died in 2023, and before he passed away, he and his son discussed the idea for creating a podcast. And this is what's been happening. The first episode was launched in January this year. There are up to, I think, three or four episodes. The trailer is out. It does sound like Michael Parkinson, kind of. If you didn't know his voice, he was mostly in television, and I think a lot of people would recognize him.
So if you weren't completely across his voice, you might think that this is Michael Parkinson. So what's extraordinary is that, obviously, the interviews that he's done and any other content that's been relevant have been fed into the AI to train it, and it is now an AI that does the interviews. So the podcast is built with real people coming in to be interviewed by the virtual Parkinson, and then the producers use the interview and reflections by the interviewee to improve the training of virtual Parkinson. So by the end of the season, the anticipation is that it's going to be much better at doing interviews, because each time it improves. Right. And I was reading an interview in the Guardian with one person who has been interviewed by the virtual version of Michael Parkinson, a guy named Monty Don, who has a television program called British Gardens. And he said in the interview with the Guardian newspaper that he thought the interview with the virtual version of Michael Parkinson was just less satisfactory. He said it was a much less interesting interview. There were more questions being asked, but there wasn't this natural conversation that we have during an interview, where you might have the interviewer listening, following up with questions, having more like a conversation, which, of course, is the hallmark of podcasting. So that's kind of an early. I think that was episode two or three. So that's an early report card on the podcast. But I'll be very interested to see what it's like at the end of the season. [00:12:17] Speaker A: So the idea here is, obviously, generated podcasts with, let's say, voices of deceased iconic journalists are not totally the norm at this point, but we are hearing generated voices. We can obviously generate text that can be used, scripts, other things.
And so I'm curious, you know, just to hear from all of your perspectives, how AI is being used either directly in audio production or in other ways that could be associated with it. And, Amanda, I was hoping to start with you. What are you seeing, especially with your experience in industry? [00:12:52] Speaker B: Yeah, I mean, there's lots of people experimenting, so I think there's lots happening. And this changes day to day. The celebrity voice is a big one that tends to get mainstream media coverage as well. So we've been seeing that locally here in Canada, too. But I would say in the podcast production realm, where I'm working day in, day out, the translation piece, which I mentioned before, is something that's alive and well in my company. Not only are we doing that for clients, but we're doing that for some of our original series. In Canada, we have two national languages, English and French, and so it has been helpful in helping content go both ways with some of our national shows that we're creating. We obviously were already using AI in a lot of ways. And I think generative AI, which is new and getting better, is what we're mostly talking about. But just to call out that artificial intelligence is not new. And a lot of the technology we've been using in the production space, even for generated transcripts, is not that new, and lots of people are a fan of that. And I think some of the privacy concerns actually should be for those pieces of technology, too. And maybe we kind of skipped over that. All of the audio that we're just throwing into technology, like Descript or Rev, you know, who are they tapping into, those kinds of conversations, and the data that we're feeding it, where does that live? What are those terms and conditions? Has anyone read the fine print? I think it's all very important, and it's something that maybe has been overlooked for years now and tends to be overlooked in general with technology.
But so we're continuing to use technology like that. We're definitely using it in the areas of voice generation and translation. We have also used it for voice generation to help with pickups when doing celebrity shows. So we did work with, like, a reality TV star on an audiobook, and she had to just do, you know, this one sentence here as a pickup, and to get her back in the studio was tough. And so we got her permission to use her voice and generate that pickup line. But, you know, what was that deal like? I mean, we were up front and asked before we did it, we told her exactly which sentences we were gonna run through, and then we let her listen to it on the other side. And, you know, it made it into the audiobook. And so that helped us meet our production deadlines. It helped make it easier for her. And then we were only getting her in for really big chunks and not having to get her in, you know, a bunch for sometimes a very little thing. Um, and then the other use case I've seen is for talent also in rough scripting. So if you're doing a narrative podcast and you're having a talent who we want to get into the studio once, but maybe as a producer, we still want to iterate on this script. Generative AI, using that to get your script down, have the host, quote, unquote, saying it, reviewing it, putting it in front of an internal focus group of an audience, and then iterating on that script before it actually gets recorded for real. So a podcast called 20,000 Hertz, which I'm a big fan of, Dallas Taylor is the host and actually was just awarded best host at the Ambies, a big podcast award. He has, you know, openly talked about using this as part of his production methods, which I find really brilliant and a good use of him and his production team's time.
And then one last thing is on the creative side, using something as simple as ChatGPT to help us with initiating ideas or compiling and reorganizing research, which, again, if used intentionally, I think is really helpful. And it's not replacing any of our producers, but it's helping us to have our workflows move faster. [00:16:34] Speaker C: Wow. [00:16:34] Speaker A: Yeah, a lot of really interesting ways that that's coming through. Anis, what are you seeing? [00:16:40] Speaker C: So, I mean, a lot of what we see isn't quite happening yet in the worlds that I'm in, because they're mostly journalistic worlds. And there's still a lot of reticence, I would say, to lean on generative AI, because there is already a lack of trust in mainstream media outlets, and non-mainstream ones as well. And I don't want to lean too hard on that. But there's already a fear of, like, oh, well, where does it cross the line from being computer-generated to fake? You know, not to use too much of a trigger word there. So there's a lot of worry about that. My eyebrows shot way up when Amanda was talking about reproducing someone's voice using an AI tool with their permission, partly because that is the memorandum of agreement that the Canadian Media Guild came to with CBC/Radio-Canada in our last round of negotiations, where we insisted on, and won, language that says the corporation requires the express permission of our members to recreate or alter their likenesses using AI for any purpose. Now, that hasn't come up yet at CBC. And I will say, I do not think, and this is just my opinion, I do not think CBC or Radio-Canada managers would want to do that, for the trust reasons that I was talking about. But I also know that the idea of using tools like that has come up a lot in terms of cleaning up audio. You know, I worked as a radio technician for a really long time. It's still a world I dance in quite a bit.
And you know, there's often a joke that someone will be reporting from the field, or even from a room that maybe doesn't have the nice padding on the wall that the room I'm in has. You don't want to sign off from a story where it's Anis Heydari, CBC News, the bathroom, because you just sound like it's awful quality and you're in a very echoey room. And there are tools. How much artificial intelligence they have versus just regular old sound processing is hard to say. But there are tools now where you go, here's my file, AI magic happens, and it comes out sounding like you are not in a bathroom. Like you are in the old-fashioned "I have a blanket over my head in the trunk of my car." And so that's where it's been coming up the most for us right now, as a tool to, like, zhuzh up what we're working on. But there's fear around, like, where does it cross that line from zhuzhing to replacing? And then where is the ethical line around telling your audience, hey, a computer made this, a computer helped with this? What do we tell people? What don't we tell people? Do they care? I don't have good answers to those questions. That's mostly what we're looking at these days. [00:19:35] Speaker A: And it's interesting, that idea of transparency, of letting people know what the systems are. But then, yeah, like, if you're starting an article with a whole paragraph of details of how you're using things, are people reading the article or are they reading the transparency side? Which I think is kind of interesting. Mia, we'll throw to you. What have you been thinking about, or how have you been using it? [00:19:55] Speaker D: So I use GenAI a lot in my everyday work. I haven't so much yet in terms of audio production, I must admit, in terms of that actual technical side. But I'm across, obviously, all the things that have been said so far. But I use it in terms of preparing my scripts, and mostly to prompt questions. I find it really useful as an assistant, because that's what it is, right?
I'm not asking it to do my job job, but I'm kind of throwing ideas together with the AI. So it might be, you know, I'm thinking about these sorts of themes for an interview. What do you think? What's missing? So it's actually getting some questions, and it's like a conversation. It's quite seductive, I have to say. I have to remind myself every so often. One of the softwares I use, I mean, there's so many, they're very personable, you know, they'll write, oh, hi Mia, it's so nice to see you again. Or it will say, oh, that's such a smart question. And you know it's an AI, but you can't kind of help but feel just a little bit flattered. Right? But it's saving a lot of time. It's also keeping me on track. So I can't quite imagine not working with AI. I'm at the same time very, very careful, and, you know, in terms of what Anis was saying, I think it's really important, obviously, that we need to be clear around transparency and disclaimers. So, you know, getting into a habit of always making sure that you tell the listeners and the viewers that AI is involved in the production, and exactly the way that it's done. I'll just share a story in terms of how AI is used in teaching and learning at higher education institutions in Australia, because it's used extensively, and usually we ask the students really clearly to articulate how they've used it. They actually need to include the prompts. I do that a lot as well. And that's part of the transparency. I actually tell people what my prompt was, and I think that's one way of dealing with the concerns around it. [00:22:24] Speaker A: Thank you. And Tim, in your case, I'm thinking Mia's already brought in the idea of higher education and what we're seeing there. But, yeah, how are people using AI? How are you, or people you work with? [00:22:34] Speaker E: Yeah, well, it's wide open.
Recent research indicates about 10% of institutions have any kind of policy around AI. And so through math, we know that 90% of institutions do not. And that has left an exceptional gap that faculty and students and researchers are trying to navigate around what's okay and what's not okay. And those questions remain, by and large, unanswered. Mia, I had read something recently about an initiative in Australia to codify, nationally, some of the work that's being done around AI and how it's being used in education, and steps that are being taken to try and mitigate some of the end runs that it might be doing in education. And we don't have such a thing here in Canada. But it wouldn't surprise me if that comes along at some point. And unfortunately, the name of the initiative escapes me, but it definitely had political origins at some point. But in terms of what's happening in higher ed, like I said, it's wide open. It's the Wild West. About a year ago, research seemed to indicate that about 50% of students were using AI, and today over 90% are using AI. That's research out of the UK, but I don't see why it wouldn't apply here in Canada. And they seem to be using it in roughly equal ratios or percentages for direct problem solving, direct output creation, collaborative problem solving, and collaborative output creation. The big worry, of course, from faculty is around academic integrity. Are students using this to shortcut learning, or are they using it to enhance their learning? And there's no easy way to tell. I mean, it's ubiquitous. It's everywhere. It's built into your PDF viewer, you know, a big colorful button up in the corner that's just waiting to be clicked on. Can you summarize this document for me? But on the plus side, it also allows all kinds of opportunities to enhance learning and provide other ways for students to show that they're learning, as opposed to producing the standard essay.
There's a great tool that I recommend to faculty, actually, speaking of human voice and podcasting, a product called NotebookLM. I see some nods. You can take any kind of, well, I shouldn't say any, but pretty much any kind of document or website, feed it into NotebookLM, and have it produce a podcast-style discussion about the item. I always caveat my advice to faculty around this: that this is a great opportunity for critical reflection on whether this is actually accurate or not, because I've seen it where it's introduced inaccuracies into its analysis. And at the very core of what we are trying to get students to do is to be critical thinkers around this. Because if nothing else, this is a great opportunity to really look at, hey, what are we actually seeing here? And is this to be trusted? But it's not just AI, it's kind of everything these days. [00:26:31] Speaker A: I see a bunch of nodding as Tim's talking, and I'm thinking, particularly Amanda, you make podcasts, but you're also teaching this Generative AI for Communicators course. And so I'm wondering if you have any thoughts on how you're seeing students work with it, or questions that are coming up there. [00:26:47] Speaker B: Yeah, so I will say Seneca Polytechnic, where I teach, is one of the 10% that has an AI policy. I didn't realize the number was so low, though. So, you know, anyone who's listening, that's a real, like, call to action. But it was great because it actually got released and publicized college-wide, or polytechnic-wide, while I was teaching. So it was a great teaching moment where I said, all right, students, let's all read the policy and have a discussion about what we think about it. And a lot of it has been interesting, because I went into this class thinking that this is gonna be co-learning. The students are probably running with this.
And when I went around the room, and this is part of a communication degree, most of the students consider themselves beginners. Some of them said they've used nothing other than ChatGPT, haven't experimented with any other AI agent. So I thought that was very interesting. And I think sometimes people who have gotten a bit older think that the younger generation must be miles ahead on some of these things. That's not always true. And they are looking for guidance; they didn't really know. I think a lot of them were nervous, and that surprised me, and it was good grounding to hear and see that. And it was a lot of us navigating it together. I was able to bring a lot of agents into the classroom for us to experiment with, to talk critically about. On the first day we had a chat with Boardy AI. I don't know if you've heard about this agent, where he's a networker and you can connect with him on LinkedIn, send him an email, he calls you, you have a conversation, live responses. And so I, like, had this call live in front of the class, saying, I'm talking to my class, what advice do you have for students using AI agents? And it was just really neat to engage with that live and see their reactions, and then I encouraged them. And then what Boardy, this app, does is connect you with other people in his network that might be a benefit to you, based on what you're looking for and what you're trying to accomplish. So a very neat thing to experiment with, if anyone listening here wants to give it a try. And right now it's free. But of course, read the privacy policies and make sure you're comfortable with it. But you are having a live conversation on the phone with someone who feels very real. And so that's one example. My assignments also have them, like you would submit a reference page, submit every AI agent you used and what part of the assignment you used it for. And in some instances, yes, the entire, full, like, prompt, revision, reprompt.
I'm asking for the entire transcript of what you put in as part of showing your work. And some students, it was so neat how they submitted it. Some just did a screen recording of their entire, you know, conversation with ChatGPT. And so it's very interesting to see their thought process and what they were picking up on to try and improve upon. And so I thought that was really important for them to have at the end of their assignments, always. I had a little trick assignment, which my students will know. I called them out that none of them had caught this. But one of the assignments was: write a letter. You're working at a company in the communications department. It's a no-AI policy. And obviously in the class they know they can use AI. Write an email to the director of communications, your boss, trying to convince them to give AI a shot, and how you're proposing they do so. And all of them used AI in this email. None of them put a disclaimer saying that they used AI. And hey, we just said this is a no-AI-policy workplace. Right? So it was a good catch, a very real-world assignment moment, right? A moment to say, if you are going to be using it, be mindful, be intentional, be clear, and really respect the policies of the workplaces you are also going into. Because, yeah, it could be a real deal breaker for people trying to enter, and a disconnect on what is seen as appropriate where they're entering. But Mia, I know you've got something to add to this. Go for it. [00:30:32] Speaker D: Well, I just couldn't help but tell a little story that relates to my own work. And this actually really demonstrates that you have to keep an eye out. So I'm thinking about it as an actually great way of ensuring people have a critical lens on what they're doing. Don't trust the AI.
So, you know, there's lots of them, but I was working with the software, or the platform, called Claude. And Claude was helping me summarize my own text, and there was a reference that I needed to check, and I was saying, Claude, can you just, you know, go and find this reference for me? And Claude hallucinated, which of course AI does all the time, and made up a reference that it claimed was my own work. And of course I knew that it wasn't my work, because it wasn't something I had done. So I wrote back and said, hey, Claude, stop it, you're actually hallucinating. And Claude came back to me saying, oh, I'm so sorry, I'm so sorry, I'm sorry, I will not do this again. And the next time I asked Claude to do something: oh, yes, yes, I know I was a bit sloppy last time, but I will not do this again. So for me, and I use that example when I talk about AI in teaching and learning, I think it's a really good reminder. So you can use it in certain ways, but you have to be really careful and not completely trust it. But for me, it reminded me of the importance of always checking your references, which is something that academics know really well. But, you know, we also become complacent. So anyway, all these are learning examples. [00:32:13] Speaker E: So that's a perfect example of how, in academia, this kind of hallucination can occur. And this occurs with frightening regularity. One of the catchphrases I've heard is that AI is confidently inaccurate. And this happens a lot when we are looking at AI and its use, at least in higher education. Something that I've indicated with faculty is that, look, we can't reliably detect it. Like, you can't spend your time policing all the references that are in a typical paper. We can't. And the software that purports to detect AI-generated content is also not reliable. Sorry, I know if anybody's listening, I may be throwing cold water on that. But it's not reliable. And it's so ubiquitous now.
Like, it's woven into everything. It's on your smartphone, it's built into your browser. You can't really escape it. Where does that leave us in higher education? Well, I would argue that it puts us in a position where we have to teach from a position of ethics and morals and critical thinking. Because when you can't detect it and it's so ubiquitous, we have to give students the tools to think through these things themselves, because seventy times a day they're going to encounter a situation where they can use AI or not use AI, and they have to be able to navigate that without input from somebody else. [00:34:09] Speaker A: Anis, did you have any thoughts on this? [00:34:12] Speaker C: I mean, the place I approach this from has a journalistic lens to it as well. And that's not to say that's entirely different from academia or where students are coming from, but there's, I would say, a little bit less margin for forgiveness or retroactive corrective action with a news or current affairs story. Especially with a news story, it's already difficult these days for journalists to fact check and verify everything they're putting into their stories. The level of confidence, to use the term, that AI can present when putting out information can really throw a wrench into the works in that regard. Certainly both the Canadian Media Guild and our parent unions, the NewsGuild and CWA Canada, and all of our members are concerned about this in a really big way, because we do want to be able to use the newest tools that become available. There's a push and pull happening: you don't want to be the person insisting on a typewriter once the word processor is available, like in the '80s or '90s, nor do you want to be the person ignoring a powerful tool that can both compile and generate information for you. But it's wrong a lot of the time.
And it is a new work practice to have to do things like Amanda was describing her students doing: keeping track of not just where you found the information, but how you asked for it. If I compare it to a Google search or, I mean, nobody uses Bing. I shouldn't say that. It is my opinion, as a union representative and not as an employee of the CBC, that very few people use Bing. But if you're Googling or Binging or Yahooing or whatever, no one remembers, oh, I searched for this in quotation marks. It's just, yeah, I searched for, I don't know, how much GDP went up over the last five years. But if you're asking that of DeepSeek or ChatGPT or Google Gemini, you really have to remember: how did I ask this question? What did I ask it before? Even, what did I ask it five questions before? Because it remembers all of them and tends to use them contextually. You will get different answers depending on how you asked and what you asked before, and you need to start keeping track of all of that to be able to show your work in a way that we did not have to do, certainly in recent memory. I don't know that we've ever had to think about showing our work the way we do with generative AI these days. And I don't have a good way to keep track of that myself. We're still learning how to do this. The CBC has policies around it, but many of those policies specify that we're approaching it with a critical eye and trying to continuously learn without giving up on journalistic standards and practices. I think everyone's still trying to figure this out. [00:37:46] Speaker E: That's really interesting about tracking your work. So, one of the things we've encountered here: we're contemplating rolling out at Mount Royal a platform for AI usage for all students and faculty. Contemplating; it's not done yet. And one of the features is that if you use it, you can't delete the conversation you've had with it. It's frozen in there.
Like, you can go back and continue to have the conversation, but there's no delete option for that conversation. And that lends itself to some really complicated reflection around what you want to say to it, if the institution won't allow you to delete it, if the platform won't allow you to delete it. I mean, there are pros and cons from an academic standpoint, but in terms of showing your work, you couldn't hide your work if somebody asked. It's a fascinating concept. [00:39:00] Speaker A: We've talked a lot about how people will use AI to put things out into the world. And I'm curious if any of you have any thoughts, and I know we're still in the early days on this, on how audiences respond to content that is generated by AI. I just think of myself thinking, oh yeah, I'm glad to generate myself a bio or something, and I think it's great. But if someone later found out that's how it was generated, would they think it was as valuable as if I'd done it myself? Or is it fun and novel to hear an AI-generated voice in a podcast, but when it really comes down to where you want to spend your time, the series you want to listen to, you want it to be a human? I'm just wondering if any of you want to jump in, if you have any experience or have heard anything around audience reception of AI content. Amanda? [00:39:48] Speaker B: Yeah, I can start us off on this. I'm paying very close attention to this because it's very important for how I'm advising my production team and clients. First I'll say there are a lot of memes already online about the overusage of certain words; the em dash recently has been the one getting called out as overused by ChatGPT, and everyone knows it now. Which I naturally use, so I'm upset now. I'm like, oh gosh, I have to stop. And the rocket emoji, or whatever, right?
And it's going through iterations in and of itself; these trends that ChatGPT spits out will also change. So I definitely think people are leaning in: they're paying attention, they're calling it out. And it's funny, but it's also great; it's encouraging that conversation, that critical lens. I do think that, in terms of overall public perception around voice specifically, and in other ways, people in North America and most of Europe will choose a human over an AI agent, right? So if you ask, do you want to listen to a podcast with a human host or an AI host, they'll choose the human voice. But what we're seeing in Asia is that people are very open to AI agents as celebrities, as singers, as influencers. There are full concert halls being filled with people who want to watch an AI-generated performer, with songs and singing that are completely AI generated. So they are in a different social acceptance phase than where we're at right now. Here it's a bit novel and fun, a conversation starter, and when given the choice, people pick the human. Now, that public perception, I think, is going to change, especially in fields like health care. For instance, if someone asks, do you want to see a robot doctor or a human doctor, most people are going to say human doctor. But when they say, well, if they're looking at your scans, the AI agent is going to be 99% right and the human doctor will be 90% right, which one do you choose? Once we have enough data proving that the initial review of scans is more accurate when an AI agent does it, and I do believe that time will come, then public perception will start changing in the more data-driven industries, and then who knows how the creative industries will be impacted.
I was lucky enough to just meet and ask questions of Geoffrey Hinton, who is known as the godfather of AI, and he named three fields specifically, healthcare, mathematics, and, actually, the third was education, where he's seeing the biggest positive changes. So I'm really keen to see how this evolves socially. But right now, what we recognize is that people are still opting, from a voice perspective, for human, but are open to the translated version. And I did a personal experiment with my own voice. My husband's first language is Russian; he was born in Ukraine, and his mother tongue is Russian. I don't speak Russian. But how neat that now he can hear his wife's voice speak his first language. So we've made a podcast together. I used ElevenLabs, an AI tool, to help with these translations I've been talking about, and we've re-released the podcast we created together completely in Russian, with his ear on it. We've tweaked it, we've changed some of the translations to make sure it's accurate. But he says, I would not know this is a robot; it sounds like you speaking Russian. And it's actually been quite special for us to hear and experience together. So I think there's a lot of potential there. [00:43:44] Speaker D: Yes. And I just wanted to say that I listened to the trailer for that podcast, Amanda, just before our conversation. Now, I don't speak Russian, but I have several family members who speak Russian or have Russian relationships, so I've just forwarded it, because I was really interested to hear. [00:44:03] Speaker B: You have to let me know what they say. [00:44:05] Speaker D: I will. [00:44:07] Speaker A: That's great. [00:44:09] Speaker D: But I also wanted to say, and this is a good reminder about podcasting, right?
Podcasting is this amazing medium that's really agile, and it will adjust to whatever gets thrown at it, which is a real strength. And I'm sure some people will also say that it's a weakness. But, you know, I've been around for a long time, and I started off doing research in radio studies, and of course a lot of people said that video was going to kill the radio star, remember? Well, we haven't quite seen that. People still listen to the radio. People will still engage with podcasting. Now, podcasting was audio first; it's been moving to video for the last eighteen months or so. And some of my colleagues are having conversations about, oh, but it can't be real if it's video, therefore it's no longer a podcast, because it's all about listening. Well, I think we just need to be open minded and, at the same time, really critical. And talking to industry people is just really helpful, because it comes back to this human storytelling. There is something special about that, right? The vulnerability that humans might want to share, that is a critical aspect of audience engagement, that personal storytelling. Now, we might get to that point, we probably will get to that point, with AI. And I'm thinking we do get desensitized; we get used to things. Anyone who has listened to the New Yorker in audio form knows that initially it's quite annoying, because it's an AI-generated voice. But I do use that function, and I know it's an AI-generated voice, so I don't expect anything else. So it's about managing expectations. I'd probably rather have a human voice, but that will improve in terms of timbre, tone and other things. I think my point here really is that it's important to know: is this AI generated? What sort of involvement has GenAI had in the production of this content, so that I, as a listener and a viewer, can make decisions based on that?
[00:46:42] Speaker B: I just want to quickly add, Mia, because you're talking about this human element. Valerie Geller, who is a wonderful radio and podcast consultant, has a new edition of her book out, and she talks about this exact thing. What she says, which I love, is that AI is all about data and predictions, and humans are risk takers. And that's how you bring that unique human touch: take that risk, go out on that limb when you're telling the story. Don't do the predictable thing. That's how you maintain that uniquely human perspective. I just loved that tip from her and wanted to share it. [00:47:24] Speaker C: One thing I want to point out, though, and I don't want to impose my own guess about audiences on this, but one of the things I've noticed with many audiences is that they do often tend to trust what the computer tells them, to oversimplify, more than they trust humans. Certainly, as a journalist, it's been my personal experience that people will trust what they just type into Google and get out of Google, or what they see posted a million times on social media. What's presented through some form of technology often has more credibility than humans do. So one of the things I actually do feel concerned about, and I'm not saying we shouldn't disclose, I think disclosure is exceptionally important, is automatically generated podcasts. I've seen output from Google Gemini where you just say, make me a podcast hosted by two people that is about, I don't know, watch straps. And you get five minutes of two people talking about something you never thought you could talk about, and it sounds like a real conversation until you listen a little more closely, and then you're like, okay. But there are people who are going to say, oh, well, that's from the artificial intelligence.
So it must be drawing on way more information and way more knowledge and way more context than just a couple of humans would, so it's probably correct. And who gives two flying Fs if it's about watch straps? But if you say, give me a podcast, or give me some nuanced analysis of tariffs, is an audience going to trust me, or is an audience going to trust the artificial intelligence? Because, well, the computer must be right; it knows more than Anis does. And it probably does know more than Anis does. But how much of that knowledge is correct or curated? Because it's hoovered up a whole lot of stuff, and half the time we don't even know what that is. So one of the things I worry about is that as this technology gets better, many audiences are just going to be apt to trust it the way they trust other technologies, and that may not be a good move. The audience is not always right. The customer is also not always right. [00:49:55] Speaker A: Well, it makes me think, going off the point you just made, that if we're expecting students or journalists, or anyone using generative AI, to show their work, we're taking what generative AI gives us and not asking it to show its work. In fact, it won't show its work, because those are proprietary decisions, and we don't even know entirely what they're using as a base for information, right? Which I find interesting: how can we truly hold ourselves to those standards and not the tools that we're using? Not sure if anyone has thoughts on that, or if that's just me musing into the ether. [00:50:37] Speaker B: I could say quickly that the problem is, half the time even the makers of these AI agents don't know how the AI is coming up with its answers.
There is a black box of deep learning and backpropagation, where they actually can't go back and figure out how it found that answer. That's the scary part. And that's just a reality of how this technology works, so it's up to us humans to decide how we're going to deal with that. I don't have answers, but I offer that as context, something for humanity to chew on. [00:51:13] Speaker E: Yeah, yeah. Further to that point, there's been very recent research, I think it was from Anthropic, into mapping some of the processing that happens behind the scenes, and it made no sense. It was literally crazy how it was happening. And some of the analysis that came out of that was that, because of the way these systems are built and designed, this unpredictability and inaccuracy is part of what allows the AI to generate the way it does, so human-like. So one could argue it's a built-in feature. Now, of course, that does not alleviate our responsibility to be critical about it. It's still something of our creation, it's still artificial, and humans are still superior in a lot of ways, and for most things, to be honest. Anis, you mentioned analysis, and we've all seen this: when AI analyzes something and produces output, it tends to be a mile wide and an inch deep. It doesn't often bear deeper scrutiny and analysis. I just wanted to touch on something else with AI in education, not that I want to steer the dialogue back there, but one of the things students find very challenging is absorbing the amount of information we throw at them in an academic context. And if you've ever read a journal article from a scientific publication, they can be bone dry. One of the things AI can do, in turning it into a sort of podcast format, is make it accessible.
During my master's, because I had a very long commute at the time, I ran all the papers I was supposed to read through a text-to-speech translator so that I could listen to them on my commute. That's pre-AI, and it saved my degree, because I wouldn't have been able to suffer through those bone-dry papers ad nauseam. And this is one of the things I think students can benefit from: far more ways to engage, through AI. [00:53:56] Speaker A: Yeah. As someone who's currently working on their PhD, being able to listen to articles that have been voice generated is very helpful when you just need to not look at a screen. I can absolutely confirm that. As we're wrapping up the conversation, I wanted to give each of you a minute or so to share any last thoughts, and maybe a prompt, which you can take or ignore: what are the other big questions around AI use that we need to be asking, or what do you think is the next step right now? Tim, you mentioned the idea of AI right now being really broad and not very deep. We also know that what came out with ChatGPT two or three years ago is totally different now; these iterations are changing so quickly. So I'm curious to hear thoughts there. And Tim, I'll start with you. [00:54:46] Speaker E: Not to get technical about any of this, but there is also a splintering happening with generative AI, in that there are specialty models being trained, and a lot of research is being done on scaling AI down so that it can run on your laptop or your phone. Will it be as capable as something like ChatGPT or Claude? Probably not. But will it do for us what we need?
The signs seem to point towards yes; we're moving in that direction. But in terms of an overall summary, as AI becomes more embedded in how we write and speak and create, educators and media professionals are being pushed to reconsider what counts as original, what's trustworthy, what's ethical. And in higher ed, for sure, that means confronting some pretty tough questions around authorship and intellectual property and how we define meaningful learning. I spend a lot of time with faculty who are concerned about the academic integrity angles, saying, hey, start with your learning outcomes. How do students prove what they know? If all you're doing is asking them to write an essay, maybe you can do better than that; maybe you can take a different approach. So I think the opportunity now is to lead with values, designing practices that keep human judgment, creativity and accountability at the center of all the AI work. [00:56:28] Speaker A: Thanks. Amanda? [00:56:31] Speaker B: I just want to say again, lead with values, because that was brilliant, Tim. Lead with values. Wow. I try to live my life like this: have a moral set of values for yourself that you bring into business, that you bring home, and what is it that you come back to all the time? I think that's really important, and I completely agree with it. So I love that, Tim, thank you for saying it. One big thing I'd want to impart is that there are a lot of scary elements to all of this, and especially as creatives, as someone who's worked in journalism and taught journalism, I know, and they're all valid. But I also think it's about striking the balance of letting yourself get excited, with intentionality around it. Strike the balance with intentionality.
Have a critical eye, read the fine print, experiment, have fun, create, and be intentional about all of it. And I think if you can really be intentional, you'll be able to keep the joy from getting completely sucked out of this really exciting time that we're all a part of, that is history in the making, and not get completely destroyed along the way. So I guess the word would be intentionality. [00:57:57] Speaker A: Thanks, Amanda. And Anis? [00:58:01] Speaker C: I hate sounding negative, like Cassandra coming into the room here. I do think that being intentional and having your values in mind is really important. But the thing that keeps coming up for me, and that gives me a huge amount of pause, and I don't just see it with audio, you see it in the use of AI with visuals as well, is that an almost monoculture seems to be emerging out of generative AI. What we have put into it, or not even what we have put into it, but what it has taken from so much of the existing content and then started to reproduce, has a monotony to it, a similarity to it; everything is coming out in a very similar style. And as people then use generative AI, like, oh, give me some ideas for a podcast, or how should this sound, everything continues to have that same sound. And I am very worried that as we keep using generative AI as a tool to stimulate our own creativity, and as an ingredient in our creativity, we will end up losing the spark that can often create something brand new. [00:59:26] Speaker C: And I'm always worried that my employer is going to be like, you said something about us.
But there is the joke about the CBC sound, the NPR sound, and podcasts in particular really broke from that trend for quite a while, to the point where public broadcasters around the world started to emulate that sound, that conversational podcasty vibe that many audiences around the world, especially in North America and English-language markets, gravitated to. And now generative AI just makes stuff like that, no problem. Is that how it's supposed to be? Well, according to generative AI, it is. We're seeing the same thing with visuals, where people are asking it to generate pictures for them, and all of a sudden there is this flood of, I mean, some people have called it slop, of pictures that all look the same. They all have this general vibe, and then nobody bothers to make anything else; or if they do, it's the exclusive domain of a specific type of artisan. And I guess I just worry that we're going to unintentionally enter a period of homogenousness. I don't know how to say that word. I should know. Generative AI would know how to say homogeneity. That's the word: homogeneity. See, if we told generative AI to say that, it would have gotten it right, and you wouldn't have had me laughing at myself. Not to end on a negative note: I do think that if we take into account what everyone else is saying here, that can be avoided. But that's my two cents, and I suppose that's my fear. [01:01:04] Speaker A: Well, it's interesting when you think about the memes or the trends we've mentioned that come through; suddenly everyone wants to generate an image that looks like something, their version of it. And that's different than how we might be thinking about art generation. But yeah, there's that idea that there's a style, right, to different things, and it might be popular for a reason, but its popularity can keep compounding if that's the starting point, if it's being generated. Mia, would you like to wrap us up?
[01:01:33] Speaker D: Well, I would like to echo that note of caution, because I do think we have to keep a careful eye on the ethical use of AI. And that comes in a few different layers; this is the way I'm thinking about it. With social media, we were all very excited when it came in, right? And then we realized there wasn't a lot of legislation or regulation around it. Both Australia and Canada really felt that firsthand when we started putting, or trying to put, some barriers on the big tech giants. So there is a problem, I think, in that we haven't got that ability, or we're running behind. There's a pacing problem of technology outrunning regulation. So that's one of the points I want to wrap up on, and it has to do with rights, with privacy. We do need some sort of governance, and at the moment we're not fast enough; I think there is this sense of us sprinting behind. There's also an amplification of biases in representation, because we do know what is being used to train these AIs: it's what's online, it's on the web, and we know what's on the web, right? So we have to be really careful of that. And I do also like "lead with values": thinking about respect, thinking about informed decision making. And then I want to end on a point that we've not talked about at all, and it's a really important one. It doesn't make for good media to end on a point we haven't discussed, but anyway, I will. It's the impact on the environment. The actual impact of all these servers, located in all these places, running to answer our sometimes quite simple questions. It's all very convenient for us to put something into ChatGPT, or Claude, or whatever we use, but the amount of energy required to generate those answers is invisible to us. And I think we have to really keep an eye on that.
And we're just starting to see some interesting research and conversations about that. [01:04:16] Speaker A: Well, thank you so much, Mia. And since you've just noted that we hadn't yet talked about the environmental impacts, does anyone else want to jump in and share anything on that? [01:04:25] Speaker E: I think, Mia, you identified something very important, and we all have to be conscious of the environment these days. I won't get into the situation Alberta finds itself in around oil and gas and coal and energy generation, and what the ostensible plans are to build giant data centers in the province. But we can't ignore it. Denying that this is going to have an impact on the environment is like saying climate change is a hoax. We know that's not true, so we can't ignore it. We have to pay attention to it; it's inescapable. That is also one of the things, from a technical standpoint, that they're working on: reducing that carbon footprint. But is it here yet? No. It's actually growing worse as usage increases. [01:05:24] Speaker A: Well, thank you so much, everyone. I really appreciate the conversation today. I know there's a lot to think about, and really there are more questions than answers at this point, but I think if we're asking the right ones, it's a great place to start. Special thanks to Amanda Cupido, founder and CEO of Lead Podcasting and instructor at Seneca College; Anis Heydari, the Prairie director for the CBC branch of the Canadian Media Guild; Mia Lindgren, adjunct professor at RMIT University in Melbourne, Australia; and Tim Magee, instructional designer at Mount Royal University. I really enjoyed our conversation. I'm Meg Wilcox, and thanks for listening to the Community Podcast Initiative. The CPI focuses on audio storytelling as a way to better include underrepresented voices.
Our podcast is produced on land that is home to the Niitsitapi, or the Blackfoot Confederacy; the Îyârhe Nakoda, or Stoney Nakoda Nations; and the Tsuut'ina Nation. This land is also home to the Métis Nation of Alberta, Districts 5 and 6. As media creators, we strive to uplift the voices of Indigenous peoples while strengthening our commitment to diverse and inclusive audio storytelling. You can learn more about the CPI at thepodcaststudio.ca or on social media at @communitypodyyc.
