Episode 16

October 31, 2023

00:39:19

AI Series: The Writing Industry

Hosted by

Pat Quigley
Storyteller In-Depth

Show Notes

By now, you have either used artificial intelligence (AI) to help you produce some form of written content or have likely heard about some of the ways AI is used. But how is this tool impacting the writing industry as a whole? To help navigate this discussion is Kelly McConvey, the Program Coordinator of SCMAD's Communications - Professional Writing program.

Kelly will take you through how she approaches the use of AI-powered writing tools like ChatGPT in the classroom, her opinion on using them in the writing industry in general, ethical concerns, some of the risks involved, some of the elements she looks forward to, and more. 


Episode Transcript

[00:00:00] Speaker A: Hello and welcome to Storyteller In Depth, a podcast where we go behind the scenes to learn more about the School of Communications, Media, Arts and Design's people, places, and things. I'm your host, Pat Quigley, and in this episode, we're going to be exploring AI and writing. I feel like that calls for some dramatic music or something, right? By now, you have either used AI to help you produce some form of written content or have likely heard about it being used in some way. But the popularity of AI and its use in the writing industry is a duo that kind of just snuck up on us, as it becomes such a strong force in the industry and seems to only be getting stronger. So where does that leave us? Will AI take over writing jobs? What about essay writing in school? There are so many questions and concerns about it. So to help us navigate these questions and provide some insight into this situation, we have Kelly McConvey on the podcast. Kelly is the program coordinator for a professional writing communications program and is also pursuing her PhD in human-centered algorithm design. Kelly will take you through how she approaches the use of AI-powered writing tools like ChatGPT in the classroom, her opinion on using them in the writing industry in general, ethical concerns and some of the risks involved, some of the elements she looks forward to, and a lot more. Be sure to stick around for this episode. Thank you so much, Kelly, for being on the podcast today. [00:01:35] Speaker B: Thank you for having me. [00:01:37] Speaker A: Yeah. So you are a program coordinator and instructor within our professional writing communications program here at the School of Communications, Media, Arts and Design. So to start off, can you briefly share when and why you started that program, your work experience within the communications industry, and the current professional and educational pathways you've embarked on? [00:01:58] Speaker B: Yes. Wow. Okay. That's a lot. So I have been a professor here at Centennial College for eight years. We just graduated our eighth cohort. So I still think of it as a new program, but it's very quickly becoming not. So we started it in 2015, and before that, there was sort of a two-year development process. So I've actually been working with the college for ten years. My background was as a technical writer primarily, as well as a consultant in the AI space, where I was doing a lot of requirements gathering and training materials and implementations of sort of AI-based software systems within American companies. So I was working in that field, but I found as a technical writer, I was also being asked to write a lot of other things like pitch decks and speeches and website content and the occasional birthday card, whatever sort of needed writing. Writing within an organization just has a way of ending up on the desk of whoever has "writer" in their title. So I started looking around for a program that would sort of teach all these different areas of writing, with the understanding that writing is a multidisciplinary field. And there wasn't really a program; you could only sort of study one facet of writing at a time. You could go into creative writing, or you could go into publishing, or you could go into technical writing, but there wasn't a program that, instead of looking at the industry, looked at the total skill set. So I approached Dean Nate Horowitz here at the Story Arts Center way back ten years ago, and he was really into it.
So we went through the two-year development process, and that's kind of how it started. Over the time that I've been at Centennial, though, while I've been coordinating this program and working as a professor, I've also sort of been continuing on that AI and machine learning path more closely. So my career has actually moved a little bit away from communications specifically, into that realm. Yeah. So I completed a Bachelor's in Professional Communications, I think it was professional communications, I can't remember exactly what it was called, at Athabasca, which was a long and arduous process, while I also had two kids and went through parenting small children while trying to complete a Bachelor's part-time while also working at Centennial. And then after completing that, I went on to the Master of Management in Artificial Intelligence program at the Smith School of Business at Queen's University. So that's actually in Toronto. So I did that for a year, which happened to be the year over COVID, which was very interesting. After completing my Master's, I took a short leave of absence from the program at Centennial and joined the Omnia AI team at Deloitte. So I was working with them for a semester on their AI ethics and AI education teams. And that was a great experience, and it really helped me realize how interested I was in the research side, in really tackling these bigger AI problems that we're seeing so often in the field and in all these different areas. So when I came back to Centennial, I decided to pursue a PhD. So I'm now currently a PhD student at the University of Toronto's Faculty of Information. I work in the Human-Centered Data Science Lab, supervised by Dr. Shion Guha. And yeah, so I'm working my way through that PhD while also working at Centennial. And so my educational journey looks a lot like so many of our students' in that it was nowhere even remotely close to a linear path. It took lots of turns based on my interests, based on what was going on in my life outside of school. But I'm really now happy to see the convergence of communications and AI, these two things that I've spent my life studying and working in, with all these new generative AI tools that have come out. So it's a really exciting time to be in both of these spaces, because suddenly writers care about AI, and engineers and other AI experts care about writing. So it feels like I've positioned myself quite well accidentally. So those twisting, turning, unpredictable paths do actually pay off. Yeah, I think that was all of your questions. [00:06:39] Speaker A: You definitely have kind of set yourself up to really succeed in this new world of AI. It's crazy to think how forward-thinking you were to get into that. [00:06:51] Speaker B: I'd love to claim that I was forward-thinking, but I don't think I can. I just sort of took an approach to both my professional career and my education where I pursued what was interesting and what seemed like it was in best service, using my skills, of the world that we live in. So I don't think it was particularly intentional. I in no way predicted what was going to happen with AI, in absolutely no way. But instead it was that this area that interested me so much also had the potential for some real harms. And so I thought, if I have the passion for this and I can be of service to people in this area, then it's a good thing to spend my time doing so.
Yeah, that would be my encouragement to students, I guess, is pursue the things that interest you and that seem like a good use of your time. And at least in my case, it worked out really well, for sure. [00:07:43] Speaker A: And I mean, the use of artificial intelligence has just become more and more popular, particularly in writing, since the introduction of ChatGPT, which can generate written content basically within seconds. It kind of scares me a little bit. To give listeners a bit of a background on this, can you provide some examples of how AI currently impacts the writing industry? [00:08:03] Speaker B: Oh, yeah, sure. I think we're really quick to think of ChatGPT as wiping out writers entirely. Like, why would we ever hire writers again? And we've been using AI in writing a lot earlier than that. I use Grammarly literally every time I write anything, even a text message on my phone. I have the Grammarly keyboard on my phone. In spite of being a writing professor, I actually have terrible grammar and problematic spelling, so I'm heavily reliant on those sorts of tools that help us. Yeah, so, like spell check, Grammarly, those kinds of AI assists, and I see ChatGPT for writers as very much the same thing. It's an assist, a very helpful assist, in the same way that a calculator is an assist. So really, that's how we use it within the program and how I see it being used in the field. Not that you sort of plug your need into ChatGPT and you get an output and then that's your final product, but rather, I think everyone has sat in front of a blank page, totally overwhelmed by it and not able to even start. And I think ChatGPT is a wonderful solution to that. It gets you going, it gets you started, but you have to be so critical of it. And that's really, I think, its use in industry and where we're seeing it being used: can you generate 50 headlines for me as a jumping-off point? And then I'll look at those as a writer and say, oh, all 50 of those are terrible, but this one has some promise, and maybe if I tweak it and change it, I can get it to something really great. So that's its use, I think, in industry overall. [00:09:45] Speaker A: Right. And what would some of the limitations be to using AI to write with? [00:09:50] Speaker B: Oh, there are so many limitations with using AI to write. Yeah. Okay, where do we begin? So the big message, I think, for ChatGPT, for students, for people working in the industry, for people who are thinking of using it instead of hiring writers, really, for anyone, is that it is a language tool. It's a large language model and a language tool. It's not a research tool, it's not an information tool, and it should not be used as such. I used it to do my homework, to put together sort of a presentation for faculty on it. I wanted to put myself in the position of a student who would be using it, right, to do their homework. So I tried using it for a bunch of different homework tasks to disseminate and share with other faculty. And what I found was that it did a really good job of doing a terrible job. It was incredibly sneaky. It created citations for my literature review that didn't exist. But it wasn't just that they didn't exist, it's that they were perfect citations for exactly the point I was trying to defend or source. And they didn't exist, except it attributed them to authors who'd written similar papers and to journals that had published similar work.
So it really looked truthful: that writer is real, that journal is real, but that volume doesn't exist. This exact paper doesn't exist. So that would be, I think, the biggest caution. If you're using it for a purely language task, it's incredibly useful. If you're using it for information retrieval or research, you're going to spend more time fact-checking it than you would if you had just done it yourself. So that would really be where I think its biggest risk would be. The other thing I would caution for organizations that want to use it is that I really question how long it's going to be free for. We're already seeing different tiers and paid use. So if you rely on it, if you build your organization around having this automated writing source without actual writers, you lose a lot of control, right? It can just be taken away with a hefty price tag the second you really start to rely on it. So that's a definite potential harm. And then also, we really don't know who owns the material that goes into or comes out of ChatGPT. So if you're using it for commercial purposes, like if you're using it to generate product descriptions, those aren't your product descriptions anymore. Those belong to ChatGPT and are out there in the corpus, in the training database, and sort of belong to who knows who. It's most likely OpenAI, but that's something for the courts really to decide. Hopefully in the next year we'll see progress on that. But so that would be the other thing: all of that content that a writer would normally create for you would be proprietary and would be owned by you, and now it isn't. It's something that's being shared and can be accessed by other people, for sure. [00:12:54] Speaker A: And I mean, talking about the ethical concerns that come along with using this AI source to create content, right? And so could you elaborate on some more of the ethical concerns and how you see that maybe playing out in the future as the technology gets more widely used? [00:13:12] Speaker B: Yeah, absolutely. So the ethical concerns with ChatGPT, it's really hard to say these are the ethical concerns, because we don't know. And that's so scary to me. That's much scarier than saying we know exactly what the ethical concerns are. Yeah, I mean, we know that it's biased. We know that those biases are ingrained into it, that the ground truth of those models comes from very biased sources. I compare it to, like, if you wanted an answer to a question, would you trust Reddit, you know? Would you trust the sum of Reddit to be unbiased and fair? And, like, the answer is no, right, for the vast majority of people. And I say that as someone who really loves Reddit. And that's where the content is coming from. You're letting just the sum of the internet decide the tone and the voice that you're going to be using when you use ChatGPT to write. So bias obviously can slip in that way. And then we've seen some workarounds. When we talk about other generative AI tools like Midjourney or DALL-E, there are definitely some workarounds that have been integrated into them. Early on, when DALL-E was first released, if you searched for a picture of a doctor, the results would be overwhelmingly white and male. So now, if you do something like type in "give me a picture of a doctor," DALL-E will actually, on the back end, append "plus female, plus BIPOC," or however they word it, onto your query. So it actually forces the algorithm to bring these less biased results forward. Midjourney doesn't actually do that.
Midjourney just brings you whatever the algorithm would, whatever the results would be naturally. So if that's what we're doing, if that's the level of security we have over the bias in a model that's being used so widely, where some human actually has to go in and write these equity rules onto potentially biased results, then I think we still have a ton of work to do. The other thing is that tools like ChatGPT, what they're really doing is predicting the next most likely word. That's how they work. So what is the next most likely word to come out of it? So it's inherently and by design always going to converge towards the median. It's going to converge towards the most likely result. And if you're talking about creative work, and especially unbiased creative work, I don't think that the most creative work is the next most likely thing. Really great creative work is unlikely. It's surprising, it's provocative, and you're just never going to get that from something like ChatGPT, because that's not what it's built to do. But if you want to talk about wide-scale ethical concerns in practice, we've seen similar models go awry. Like, a big one is in schools that use AI-based proctoring tools. So you write an exam from home, and the proctoring tool takes over your camera and watches you do it. We know that those tools flag students of color mid-exam to ask them to show their ID at a much higher rate than they ask white students to do the same thing. And anyone who's ever written an exam would know how stressful it would be to stop multiple times to confirm your identity mid-thought process. So that's like a very practical, concrete harm. And there's so many others. There was, like, a Dutch system, I think it was Dutch, to determine who would get welfare that went totally awry. And child welfare, that's an area that my lab researches: algorithms to determine child welfare resource allocation. We've seen these systems cause so much harm, and I don't know why we need to keep seeing specific systems cause harm to be able to say, well, this could be harmful. Do we need to see ChatGPT actually cause harm before we say it's likely harmful? I think we can extrapolate from past things and even just from a very recent example. There was that lawyer in the States who used ChatGPT to write his arguments, and they found them all to be false and just made up. I mean, you can imagine if you had paid that lawyer to defend you and found out that he automated the task and it made up the entire defense, how much harm that could potentially cause you. So there's some very specific and probably not well-sourced examples. I wish I had the links prepared, but I don't. You know, we keep looking for these specific cases where it went awry and caused harm, and I don't think we need to do that. I think we can speculate and use our imagination. What is the Black Mirror ChatGPT episode five years from now? I don't think you have to stretch that far to think about what that could potentially be. [00:18:40] Speaker A: And a lot of people, professionals in the creative industry right now, are asking themselves, like, will AI end up replacing me? And will there no longer be a need for my job because of this artificial intelligence? So, as the program coordinator and instructor of the Professional Writing Communications Program, have you brought up the role of AI in this industry to your students? [00:19:02] Speaker B: Yeah, absolutely. The first point would be, it's sort of a broad point.
I don't know why we as a society have decided that the creative industries are the thing we want to target. Aren't those, like, the dream jobs when you're a kid? Don't you dream of being, like, a writer or an artist or a musician? Why are these the things we've automated? Why haven't we automated the mundane tasks of our lives? It seems like such a silly thing as a society to decide to do. Don't automate the fun stuff, automate the boring stuff. But anyway, I digress. Within the program... so I teach a writing program, a professional writing program. We don't do creative writing. We're not a publishing program. We're not interested in that. We do technical writing, marketing communications, reports. We do some research stuff. We do all kinds of things that you would do as a writer in a professional field. So the way I talk about it with my students, or the way my students and I discuss it, really is the same way we would discuss anything. When you are a writer, the skill that makes you a professional isn't the words you put on the page. Aren't the words you put on the page. See, I told you my grammar is terrible. The deliverable, the outcome, isn't the thing that makes you a professional writer. The thing that makes you a professional writer is understanding who your audience is. It's all of the sort of process work that comes before. Like, what are the business goals? What are the organizational goals? How can we use communications to meet those goals? Who's our target audience? What kind of messaging and language are they going to respond to? What cultural sensitivities do we need to incorporate into the work? How can we be sure that our work isn't biased and is as neutral as possible? How are we going to measure the success of our communications to ensure that there's return on that as an investment that an organization makes? All of that stuff, all of that process, is what makes you a professional, and none of that can be automated. So, frankly, if my students do all of that thinking work first and then leave the output to ChatGPT, but the output is in line with all of that thinking work, then I don't really have a problem with it, as long as you put a pin in the very real privacy and legal concerns around that. But as a writer, the thing that makes me... like I said, I have terrible grammar. The thing that makes me a professional writer isn't my grammar and vocabulary. It's understanding what an audience needs and how to meet a business's goals through communications. It's engaging with a community if you're a social media writer, or ensuring that technical communications are audience-appropriate and user manuals really meet the needs of the people using them in the place and time and attitude in which they're accessing them. It's all of those other things, which are so much harder than writing. All of that is so much harder. That's the thinking work. If you just sit down and start writing, you're not going to be successful. And that's what ChatGPT does, right? It just sits down and starts writing. So that's really how we approach it as a program: all of that foundational process work, the thinking work, really still needs to be there. And then if you want to turn to ChatGPT at some point in those different phases to automate or give you a starting point, I think it's absolutely a valid part of a writer's process. But it doesn't in any way replace a writer. And I'd be willing to bet that it also doesn't replace a musician or a photographer, a graphic artist or graphic designer, or a web developer.
All of those are the same. It's not just about the output. It's about making sure it's the right output for the right people at the right time. And that isn't, at least not yet, something that you can automate, right? [00:23:04] Speaker A: So do you think there are areas of the writing industry that can utilize AI more than other parts of the industry, like copywriting versus instructional design? [00:23:13] Speaker B: That's a good question. I can only really speak from my own experience, because we're just starting to see it in professional industries, in industry, more and more. But I personally use it to get over that blank-page syndrome when I just can't get started. I also use it to give me sort of like a framework for writing within. Like, if I have to write something... I hate marketing writing. I am useless at selling things. Like, just completely useless. My background is as a technical writer and in AI. You can imagine how good I am at selling things. So if I am in a position where I have to write a call to action or have to write a sales piece, I'll use it to generate a framework for that that I can then tweak and change and rewrite and add to. And by the time I'm done, it doesn't look anything like it did initially, but it gave me a starting point and a structure to work off of. But when I'm doing that, I'm relying on my experience to know what about the automated output actually works and what doesn't work. So you still need all of that experience to be able to make those judgment calls. So I don't think it's so much that it's really great for one area or another; it's that it's great for people who have short timelines and enough experience to make solid judgment calls when they're just sort of stuck on a task. That's really where I think it's most useful. That being said, as a technical writer, I have seen some incredible applications of it where it can generate, like, a user manual based on a piece of software, where a user walks through every step of the software with a recording on and then it generates the steps, which is so cool. But again, if I was hiring a writer and they just did that and gave it to me, I would, I imagine, find a lot of problems with it. If I hired a writer who used that as a starting point and then assessed whether it met the audience's needs and changed it and tweaked it, I would be completely satisfied with that. Yeah, that's sort of where I am. But I have seen it write... I'm not in advertising, we don't cover advertising in my program. My brother happens to be an advertising copywriter, and I know he uses it to generate some headlines and stuff like that as a starting point. And I've seen it do an absolutely terrible job in advertising. So I think the more provocative, the more exciting or surprising that you need to be, which is where that advertising space kind of lives, the less appropriate for the task it is. So, technical writing, you don't want it to be surprising. No one wants to be surprised by their user guide, right, or their user manual. No one's, like, troubleshooting a problem with their phone and wants to be excited and surprised and delighted by the user guide. You really just want the solution. So I think that is an area where it's more appropriate, but the more creative, the more surprising the work needs to be, the less appropriate it is. [00:26:18] Speaker A: Yeah, I know.
Me and my fiance, we've used ChatGPT a couple of times to help generate recipes and things like meal plans and stuff like that, and you look at it and you go, I don't know if we need that much applesauce in a week. It's a great idea, though, but let's take that and use it for something else, right? [00:26:37] Speaker B: Yeah. [00:26:38] Speaker A: You definitely need that human touch, right, especially in the writing aspect. Why do you think that might still be something that's so important to writing in general? [00:26:47] Speaker B: Well, I think, first of all, you don't know what you don't know, right? So if you're using a tool like generative AI to fill in the blanks of what you don't know, like, I don't know how to write a press release, so I'm going to let ChatGPT do it for me, then you're in trouble, because you don't know what it's doing wrong. Just like if you're using ChatGPT to generate a recipe for you and you've never cooked before, it's not going to turn out. But if you have cooked before, you can make a judgment and say, well, that's way too much applesauce. ChatGPT, nice try. So that would be the first thing. And then I've completely lost my train of thought. I'm sorry, what was the question? [00:27:28] Speaker A: Yeah, just why do you think there may still be a need for human touch involved with AI? [00:27:34] Speaker B: Okay, so I think we need human touch involved with AI because, if you look at... I think COVID is a great communications example. We refer to COVID a lot. I know no one wants to talk about COVID anymore, which is completely fair, but we saw the dangers of misinformation during COVID. We saw communications sort of hijacked and used as a weapon more than we ever have in my lifetime, at least. And with the global nature of communications right now, I think the stakes are considerably higher. Right. And obviously, propaganda has existed for a very long time, but that kind of guerrilla-style propaganda that we see on social media platforms is particularly harmful. So that is one of the reasons why I think that human touch is so needed. Misinformation is dangerous, and you need reliable, trustworthy humans in the loop to be generating those communications, to be responsible for those communications. And so the human touch in writing: we spend a lot of time in the professional writing program talking about empathetic communications and what it means to be an empathetic writer. And we can't automate empathy. So if you want your communications to resonate, if you want them to have an impact, to connect with your audience, and to meet what they're intended to do, then they need to be empathetic, right? Sometimes as an exercise, we have our students write something that they disagree with or write an argument with someone that they disagree with. And you can't write persuasive writing that tries to persuade someone you disagree with without really considering their side, without having that piece of empathy. Like, where is this person coming from? What's motivating this position that you disagree with? And it's only once you've done that empathetic work and considered their point of view that you can actually write persuasively and try and convince them. And that's something that ChatGPT can't do, because that's that human piece, right? At the end of the day, the fundamental role of communications is connection. Whether it's a brand communicating with its audience or it's a job applicant writing a cover letter, you're trying to connect with someone else.
It's human to human. So even if there's an automated robot step in the middle there, there still needs to be that human-to-human piece. So I think that's really why, fundamentally. And it doesn't mean there aren't messages that can be automated. Like, there are many emails that I wish I could automate in my life, that I would be quite happy to let ChatGPT handle for me, but there are also many emails that I really wouldn't trust it to do. So for me, that's it. It's that communication, that connection at the core of communication, that requires a human. And I think most people in a creative field, or really in any field, would agree with me. [00:30:51] Speaker A: Yeah, for sure. I mean, I'm on the broadcast side of things, so less in the writing. But ever since I've heard of ChatGPT and a lot of the AI technology that's coming out, I've been like, well, how can I integrate that into my workflow, into my life? And using it for social media, using it in all these ways, and hearing that you should definitely use it as a launch-off point to create things, right? So how do you do that while still tapping into your own creativity and originality? [00:31:27] Speaker B: One of the ways I like to do that is to hit the resubmit button on ChatGPT many, many times. You want a good sample size. You don't want it to generate one headline, because then you're not going to be able to get out of that framework where you copy that one headline, or that mind frame. So you want it to generate 50 headlines or 50 tweets, and then you start to get a sense of what's working and what isn't. So that is one of the ways. If you're at the point of your career where you're developing the skill, and students are going to hate me for saying this, and I understand that, I don't think it's as useful a tool. Because, first of all, there's that thing where you don't know what you don't know, but also, developing... People talk about creative skills like it's this divine inspiration that comes down on you. But so much of creative work isn't. It's like rote. It's rote practice. And if you're automating it early on in that skill-development phase, you're not going to develop that reflex and that muscle memory for that kind of creative work. I think it's important to not rely on these assistive tools too much until you're comfortable with your own skills, until you're comfortable understanding what your process is and what your craft is, and able to recognize when you're allowing assistive and generative tools to take away from your craft. And I think one of the best ways to do that is to think of your work as a craft and tune into that. I'm not a particularly meditative person, I don't know what you would call it. But if your work becomes unsatisfying because you're automating so much of it, or dissatisfying because you're automating so much of it, then there's a problem, and you really have to pay attention to, like, are you relying on ChatGPT because you're stuck and you need something to get your creative juices going? And you tried going for a walk and drinking a glass of water, which is always my first advice, and it didn't work? Or are you relying on ChatGPT because you can, and because you're kind of bored and tired? Then there's a problem, because if you're bored by the work, your audience is going to be bored by the work. So, yeah, it's really that mindfulness part about how you're using it.
And we work a lot in the professional writing program on what your individual process as a writer is, how you define that process, and really thinking consciously about it. So it's intentional, and ChatGPT can absolutely be part of that process, but it needs to be that intentional piece. Like, these are the areas where I'm allowing myself to rely on assistive tools, no matter what they are, but these are the areas where I'm not, and that's really important. [00:34:18] Speaker A: What excites you most about the intersection of AI and the writing industry right now? [00:34:23] Speaker B: Oh, accessibility. Yeah, it's accessibility, whenever someone asks about what are the good things that AI can do. Because I spend so much time in my life talking about the bad things about AI, but when it comes to accessibility, I think it's absolutely incredible. Like the ability to translate content into multiple languages, or have really great screen readers that can read for people who have vision impairments. Sorry, I completely forgot the term there. Yeah, there are so many ways that AI can help with accessibility that are absolutely fabulous. As a writer, I think those are the areas we should be pursuing and embracing. Like, how can I make my content meaningful for the largest number of people, and how can AI help me do that? That is absolutely a valid question and something that every writer and every creative professional should be pursuing. [00:35:18] Speaker A: Definitely. For sure. So something we like to end the conversation on when we're just talking about AI is, if you could have access to any kind of AI tool that can be completely made up in your head, imaginary, what would you love for it to do? [00:35:35] Speaker B: I'd love for it to draft emails for me. That would be my thing: that when I click reply on an email, it does its best guess at what I need to say, because my inbox is, like, applicants, current students, other professors, a whole bunch of different groups of audiences, and the questions actually aren't that different. So if I could automate the "we start the day after Labor Day in September" and "I'm sorry, we only have one intake" part of my life, that would be absolutely ideal. But not to send those emails, to leave me with the opportunity to tweak them, or to address other concerns in the original email, that kind of thing. But yeah, email over and over again. I wish it could automate email, for sure. [00:36:22] Speaker A: I know I use the Google or the Gmail option sometimes when it's like, oh, here's what you might want to say next. I was like, that does sound like something I'm going to say next. [00:36:32] Speaker B: It's getting wildly good. It's really great at it all of a sudden. I know, but then I find that sometimes the email I'm writing in my head before I click the respond button is actually way more detailed. But then it gives me that predictive one. I'm like, okay, that's fine, I'll send that instead, because I don't have to write it out. So I do wonder, are we losing something in communications? And then there's sort of a bigger concern too, which the AI community is starting to talk about a little bit, which is that AI requires training data. So it's looking at all the past emails you've sent to decide what you're going to say in response to this one email. That's just how it works. It's the foundation of AI. But what happens when that training data is all emails generated by AI? What happens when we're training our current AI on the responses of past AI?
And I think we can see it in Google. I don't know if anyone else has noticed, but I've noticed that the Google results quality has really gone downhill in the past, like, two years. Google is just not as good as it used to be. So there is this sort of, like, recursive nature to these tools that we need to keep an eye on, because we have so much clean training data right now, and we're not always going to have that. So we might actually be at a high point for this kind of technology. So I'm curious to see what happens there. But I am wondering what's going to happen to the state of communications when we're all just clicking the auto-reply button at some point. It's just going to be like robots talking to each other through our inboxes. And we have to, as a society, I think, be concerned about that, for sure. [00:38:09] Speaker A: And I know we could probably go back and forth and talk about AI all day long, Kelly, but I want to thank you so much for being on the podcast today. [00:38:17] Speaker B: No problem. Thank you so much for having me. It was great, so much fun. [00:38:25] Speaker A: Thank you so much, Kelly, for being on the podcast. This topic has certainly been something that I've been thinking about a lot, and I know so many others are thinking about it as well. It's reassuring to know that there are some things that AI tools can't replace right now in professional writing, and that there are so many other elements that make up becoming a professional writer that can't be automated. The idea that we can use AI tools like ChatGPT as just that, a tool to assist us rather than one that completely takes over, is a reassuring thought for those listening right now. We'd love to know your thoughts on ChatGPT or other AI writing software. Did you anticipate all this popularity? Have you used them? Let us know in the comments on this episode's Instagram post, where you can find us at Story Arts Center. Until next time, I'm your host, Pat Quigley, and this is Storyteller In Depth.
