NLN Nursing EDge Unscripted
The NLN Nursing EDge Unscripted podcast, brought to you by the National League for Nursing Center for Innovation in Education Excellence, offers episodes on the how-to of innovation and transformation in nursing education. Each conversation embraces the power of innovation to move educators away from the mundane and mediocre to the interesting and exceptional.
Scholarship - ChatGPT and Chatbots' Impact on the Delivery and Evaluation of Student Learning in Nursing Education
This episode of the NLN Nursing EDge Unscripted Scholarship track features guests Karen Frith and Fidelindo Lim. Learn more about their work, "ChatGPT: Disruptive Educational Technology" and "Machine-Generated Writing and Chatbots: Nursing Education's Fear of the Unknown."
Frith, Karen H. ChatGPT: Disruptive Educational Technology. Nursing Education Perspectives 44(3): p. 198-199, May/June 2023. | DOI: 10.1097/01.NEP.0000000000001129
Lim, Fidelindo. Machine-Generated Writing and Chatbots: Nursing Education's Fear of the Unknown. Nursing Education Perspectives 44(4): p. 203-204, July/August 2023. | DOI: 10.1097/01.NEP.0000000000001147
Dedicated to excellence in nursing, the National League for Nursing is the leading organization for nurse faculty and leaders in nursing education. Find past episodes of the NLN Nursing EDge podcast online. Get instant updates by following the NLN on LinkedIn, Facebook, Twitter, Instagram, and YouTube. For more information, visit NLN.org.
[Music] Welcome to this episode of the NLN podcast Nursing EDge Unscripted, the Scholarship series. I'm your host, Dr. Steven Palazzo, a member of the editorial board for Nursing Education Perspectives. Nursing EDge Unscripted and our track entitled Scholarship celebrate the published work of select nurse educators from the NLN's official journal, Nursing Education Perspectives, and the NLN Nursing EDge blog. The conversations embrace the authors' unique perspectives on teaching and learning innovations and the implications for nursing program development and enhancement. In this episode we will discuss ChatGPT and chatbots' impact on the delivery and evaluation of student learning in nursing education. We will discuss the perspectives of our two guests today. Dr. Lim authored a guest editorial in the current issue of Nursing Education Perspectives titled,
"Machine Generated Writing and Chatbots:Nursing Education's Fear of the Unknown," and Dr. Frith
authored the article, "ChatGPT: Disruptive Educational Technology," which can be read in the May-June issue of Nursing Education Perspectives. Dr. Lim is a clinical associate professor at New York University, and Dr. Frith is a dean and professor at the University of Alabama in Huntsville. I want to thank them for their time and welcome them to our podcast.

Thank you. Thank you.

Could one of you describe ChatGPT, or chatbots, for our listeners who may be unfamiliar with the terms?

Well, I'll be glad to start. I'm Karen Frith, and I'm glad to be here with you. One of the things that has been developing over many, many years is the use of artificial intelligence. Its roots go back to the 1940s, so this is an evolution of artificial intelligence, and many of our health care providers already use a type of artificial intelligence in clinical decision support tools and in everyday life. This new iteration is the generative part of artificial intelligence, and generative means that it is creating new content. If we're talking specifically about ChatGPT, it has mathematical equations and looks at probabilities: for something it has scoured from the internet, or that is in its learning database, what is the probability that that particular piece of information is relevant to the question that has been asked? So it is generating information, whereas past artificial intelligence used data to make predictions, but only to support decision making; this is generating new information.

Wow. Amazing how the technology has advanced.

Thank you for that question, and building on what Dr. Frith mentioned: when we think of an AI chatbot, we think this all started last year, but the example I give my students is the electrocardiogram, which has been doing this for decades, right? It can read the P-QRS-T and give you an automated interpretation of the cardiogram. What is new about this iteration, the Generative Pre-trained Transformer, which is the GPT in the name, is that it has collected an enormous, vast amount of information that is out there on the internet, and using artificial intelligence and natural language processing it is able to respond to a question much like a human being. And I say human being carefully, because this is a touchy subject: it is not human, it is a robot that is talking with you. The information that is presented is based on the accumulated wisdom of the internet, using predictive analytics. For example, whenever we type "nursing care," the chatbot predicts the next word, which is probably "plan": "nursing care plan." Using that information, it applies mathematical equations to tokens and turns them into an automated response, and it can do that in a very, very short amount of time. It is very efficient: if I am going to read a textbook about the nursing care plan for heart failure, that would take me maybe 45 minutes on a fast read if I really want to know a lot, but ChatGPT can do it in two and a half minutes. That is the main impetus and why this is so exciting; it is so efficient, with the whole idea that we are actually saving time and offloading some of our work. And my question to that, and I don't know if it will ever be answered, is what are we going to do with the time we saved? [laughter] Be creative, unless ChatGPT does that for us too!
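To make the next-word prediction Dr. Lim describes concrete, here is a minimal, hypothetical Python sketch. The tiny probability table and the "nursing care" → "plan" continuation are illustrative assumptions, not how ChatGPT is actually implemented; real models estimate these probabilities over tens of thousands of tokens with a trained neural network.

```python
import random

# Toy next-token probabilities, standing in for what a trained language
# model would estimate from its training data (values are invented).
next_token_probs = {
    ("nursing", "care"): {"plan": 0.72, "team": 0.15, "unit": 0.13},
    ("care", "plan"): {"for": 0.55, "includes": 0.30, "template": 0.15},
}

def predict_next(context, probs):
    """Pick the next token by sampling the probability table for the last two words."""
    candidates = probs.get(tuple(context[-2:]))
    if candidates is None:
        return None  # no prediction available for this context
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights, k=1)[0]

text = ["nursing", "care"]
for _ in range(2):  # generate two more tokens
    token = predict_next(text, next_token_probs)
    if token is None:
        break
    text.append(token)

print(" ".join(text))  # e.g. "nursing care plan includes"
```

A generative model does this same probability-weighted next-token selection at vastly larger scale, which is why it can assemble a heart failure care plan in minutes rather than the 45 minutes of reading Dr. Lim describes.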
So, Dr. Lim, please describe the pathologies of learning that you described in your article, the amnesia, fantasia, and inertia, and how these concepts relate to concerns about the use of ChatGPT and the potential consequences in clinical practice for learners relying on this type of assistance.

Great, thank you for that question. I don't know if our listeners are familiar with Dr. Lee Shulman; we have probably seen his name in the book Educating Nurses. He wrote the foreword to Patricia Benner's "Educating Nurses: A Call for Radical Transformation." Dr. Shulman, who was a physician by training, was the president of the Carnegie Foundation for the Advancement of Teaching. I discovered an article he wrote in 1999, completely before any of this AI and ChatGPT came about. He is a scholar of teaching and learning, and he explained that many of our learners, adult learners, college students, suffer from what he calls an epidemiology of mislearning. He calls them pathologies of learning, but he also says it is an epidemiology of mislearning, or malfunction. I prefer the word malfunction to pathology.

So amnesia: it is the nature of the learner that we forget; we are bound to forget. You have just presented something, you ask them about it the following week, and they will not remember. A lot of this requires experience and repetition. My fear is that easy access to ChatGPT would exacerbate the amnesia, because we are not engaging as much time, we are not investing as much time, in learning the pathophysiology of a disease, for example, because now it has all been done for us by the chatbot. When I wrote this article, I looked back at it later and said, oh, some of my ideas are probably already obsolete, and that was only two months ago, because now, thinking about the chatbot, it could also be a cure for the amnesia: we are exposed to new material so briefly and with no context, and by using the chatbot we might actually get the context, because I could ask ChatGPT to explain heart failure to me in a patient who also has diabetes and is living with mental health issues. That is the context ChatGPT might be able to provide.

Fantasia is the malfunction of understanding: you have the notion that you learned something, but you actually did not. It is the idea that you feel like you are doing research because you log onto the internet and it seems like you are doing work, which gives you the illusion of research, but it may not have the depth and breadth of knowledge you would like to achieve. One of the disadvantages of ChatGPT is that it hallucinates; it sometimes gives you information that may be totally unrelated to the context, and that is a challenge. So how do you deal with the fantasia? We provide them context; we make sure they get the patient experience to go with the knowledge, so it becomes more contextualized.

Inertia: when we went to nursing school, we learned a vast amount of information, and so much of it is sitting at the back of our brains doing absolutely nothing. I learned maternal nursing, but I never use that information. I'm sure if I were put under hypnosis I would be able to come up with it, tucked away in the back of my head, but we are not using the information.
So ChatGPT, this AI, is producing a lot of information that we may not be using, and that is a disadvantage in a profession like ours, because we rely a lot on movement and activity, things that we do with our hands because our brains prompt us to do them. That is an issue with ChatGPT, because nursing is so contextualized to empathy, and the tool is not yet well formed to develop empathy, because empathy requires proximity, presence, and intuition.

The other pathology, or the other epidemiology, that I did not write about in my article but that Shulman mentions is nostalgia. This is the one that afflicts faculty. I have been a teacher for 28 years, and nostalgia is the feeling that, well, it was better in 1990; I learned it this way in 1985, so why can't you learn it the same way I did? The evaluation of the technology is very different when it is based on a nostalgic impression. That is something I have to guard myself against as faculty: yes, I did grow up with the mimeograph machine, and that is how I learned my nursing care plans, but we stopped doing that. We have this idea that the past was better. So, yeah, sorry, a long answer to your question.

No, that was a great answer. I appreciate it, and it segues really nicely into my question for Dr. Frith. Your perspective is that we can use ChatGPT for good in nursing education. I'd like you to elaborate a little on that position.

Yeah, one of the things we need to do to prepare our students is teach them how to use technology correctly, and I have been teaching a long time as well. I remember when we first had the internet available for students to look up information; at that time we began teaching students how to evaluate the quality of information they found online, which we still do. I think it is the responsibility of faculty to help students think critically about the use of technology and how it can help or hinder them in their learning or in the care of patients, which they ultimately will provide. My position is that every faculty member should really think about generative AI. It will not go away. It is taking over so many things, and there are going to be advancements, particularly in medicine and drug therapies, that will revolutionize the care of patients just as antibiotics did in the 1920s to 1940s, so that people no longer die from common infections. I think having a black-or-white opinion about generative AI is, to take Dr. Lim's word, nostalgic, regardless of what we valued in our own experiences, for example, for me, my early teaching experience. Technology moves on whether we want it to or not. For me, the most important thing for faculty to do is to have conversations about the use of ChatGPT, among themselves and with their students, and to come up with strategies that help students learn how to use it ethically, because I do agree with Dr. Lim's article about the losses in learning that will happen if we don't help our students learn how to use this tool, and any other tool that comes along in the future, in a way that supports learning and doesn't detract from it. I think there are ways to do that.
For example, if you are working with a group and you want them to investigate the evidence-based strategies for the care of a person who has diabetes and hypertension, who lives alone, and who is 70 years old, and they are working on a project together, certainly they could use ChatGPT to get a gestalt understanding of it. I think that is okay, and it gives them a place to start. But that is not the finish line. Where they should be educated is in how to look critically at what ChatGPT puts together. If you ask ChatGPT for references, it may or may not give you a true reference, and students need to know that. It may just be a made-up reference. So there is a real need to educate students on its use, the parts that are good and the parts they need to be careful about, because these are going to be lifelong learners. We need to help them understand what good practice for learning is and how to apply it to their education. There are many different ways it could be done. A teacher could ask it to create test questions and have students correct those questions, just different ways a faculty member could get them to understand the difference between real, valid information and the vague, sort-of-okay, but not precise and not in-depth information you get from ChatGPT.

Well, thank you. Both of you have concerns about the use of ChatGPT undermining scholarly writing. Is it time for us to re-examine what is considered scholarly writing, for example, as Dr. Frith mentioned, using ChatGPT to write a draft?

Yeah, you know, Steve, I don't know if you read that article recently in the New York Times: there is a chatbot in development that will write the news. In other words, the fear is that the news writer will one day become obsolete. You could give it the why, where, when, how, and who, and it will give you essentially the full news report. The New York Times published the article, but they didn't participate in the experiment. Scholarly writing is sort of the highest use of our cognitive faculties, right? We take pride in our own scholarly work, which is based on years of study, and now we have a technology that can shorten that amount of time. There is reason for concern, because you cannot really fully know whether the work is plagiarized or not. It is so difficult to do that, although there are products, also AI products, that try to detect it: an AI solution to an AI problem, right? It is a cascade of technology, and the fear is that there could be a certain disuse of our brains, an atrophy of our human intelligence. There are scholars out there who think there are beginning signs that chatbots might eclipse human intelligence, because the machine will not get tired. Whenever you write a piece, your dissertation, an article, or something, you tire out; your brain is not as good, and the machine brain will not get tired. For me, one of my concerns with chatbot-generated writing is the lack of adequate, reliable, good references, which Dr. Frith just mentioned. I read somewhere that references are like thank-you notes; they are one way for the writer to acknowledge that you did not come up with that yourself.
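The reference-checking habit both guests describe can be made concrete with a small exercise: have students confirm that a citation's DOI actually resolves. Below is a minimal Python sketch, assuming the public Crossref REST API at api.crossref.org; the helper name is illustrative, and a missing record is only a prompt to dig further, not proof the reference was fabricated.

```python
import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False if it returns 404."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:          # Crossref has no record of this DOI
            return False
        raise                        # other errors (rate limits, outages) need a human look

# Example: the DOI of Dr. Frith's article cited in the show notes.
print(doi_exists("10.1097/01.NEP.0000000000001129"))
```

Even a reference that passes this check still has to be read and judged; as both guests note, a citation produced by a chatbot cannot be taken on faith.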
When I was in the master's program, one of my faculty here at NYU, and this was, by the way, BC, before the internet, before any of this computer stuff, said that nothing is really truly original these days; even if you think you have come up with something original, someone might have already thought of it before you. He said to me that a well-written reference list is our way of bowing to the people who did scholarly work before us. We want to give them due credit; we want to thank them. If we generate this information through a chatbot without proper attribution, there are ethical concerns with that. In fact, people are worried that any paper that you and I, all of us in this gathering, have published is now somewhere being used to generate responses for someone's chatbot. I don't get acknowledged for that, right? You can ask the chatbot to provide references, but it doesn't provide primary references; it is always secondary references, and sometimes it is simply the wrong reference. Those are the concerns. So fifty years from now, when there is a technology that puts the hammer down and says, yes, AI will surpass human intelligence, there is no question about it, then we will have to recalibrate the way we see writing, or the way we see academic work.

I'll just add to that. I think part of the solution to this concern is working in teams. It is difficult now to do very thorough research as a single writer.

Yes, I agree.

Yeah, and so when you work in groups and bring the intelligence, the motivation, and the drive of a research team together, you can't reproduce that with artificial intelligence. You just can't, not at this point. Now, there is AI that can get a little close to that, but not really. I think it matters that teams understand their end goals. For me, when research is being developed, it is always a negotiation between the ideal and the feasible and the goals of the team and where they want to go, and even when you get results back, there is a negotiation about the next steps: okay, we found this, what does it mean, where do we go in the future? Even if you have that mapped out as a team, generally I think that will stay, and as long as researchers and scientists really hang on to their ethical values and the belief that science is based in our human understanding of phenomena, there is hope that it doesn't take over everything. I think it does require people to think deeply about this, and I love Dr. Lim's article because he brought it back to what learning is about, the problems with learning, and how this exacerbates them. Good thinking such as what he has written helps us and faculty across the country think about this in a way that goes beyond the knee-jerk reaction and really hash out our concerns: what kinds of solutions could we take, what are some alternatives, and what do we believe as faculty is important for our students, from undergraduate all the way through doctoral work, because all of them will be affected by generative AI.

Well, it's like any technology. We're going to have to learn to adapt, and there will be good things from the technology and things that aren't so good, and I have a feeling this is going to move at a very fast pace over the next several years. We're going to have to respond nimbly and quickly to some of this, but it's one thing to be fearful and another to accept reality.
These things are here and are going to continue, so how do we best put them to use? We might not even know yet how we're going to use this technology and what it's going to bring us in the next five or ten years. So be open-minded to it, start exploring and investigating it now, and use it in some way in your classroom to teach students how best to interact with this type of information and technology. I want to thank you both so much for joining the conversation. I appreciate you taking time out of your busy days to come here and share your thoughts with us. To our listeners, if you have not had the opportunity, please be sure to look for their work, "Machine-Generated Writing and Chatbots: Nursing Education's Fear of the Unknown," in the current issue of Nursing Education Perspectives
and "ChatGPT:Disruptive Educational Technology," which can be found in the May and June issue of Nursing Education Perspectives. So again, thank you both for joining us. It was a pleasure and it gives us a lot to think about and probably more questions than answers. Thank you Dr. Palazzo. Thank you.[Music]