Helping higher ed professionals navigate generative AI
July 1, 2024

Encouraging ethical use, AI friction and why you might be the problem


We're in an odd situation with AI. Many ethical students are afraid to use it, and unethical students use it ... unethically. Rob and Craig discuss this dilemma and what we can do about it.

They also cover the concept of AI friction and how Apple's recent moves will address this underappreciated barrier to AI use.

Other topics include:

  • Which AI chatbot is "best" at the moment
  • Using AI to supplement you, not replace you
  • Why you might be using AI wrong
  • Active learning with AI
  • ...and more!

---

The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. 

Do you have comments on this episode or topics that you'd like us to cover? Email Craig at craig@AIGoesToCollege.com.  You can also leave a comment at https://www.aigoestocollege.com/. 

Transcript

Craig [00:00:14]:


Welcome to AI Goes to College, the podcast that helps higher education professionals navigate the changes brought on by generative AI. I'm your host, Dr. Craig Van Slyke. The podcast is a companion to the AI Goes to College newsletter. You can sign up for the newsletter at aigoestocollege.com/newsletter.



Rob [00:00:35]:


Alright, Craig. So I've had an idea, and I see this with the new AIs. It seems like every week a new one pops out that's the next latest and greatest thing. And the one that brought this to my mind was Claude 3.5, which is pretty powerful. It does some pretty cool things. And I think about the roles that we're in as voices in academia, trying to encourage the adoption of products for students to use, with licensing and the things that take legal contract review, and the bureaucracy that can often take months to get that level of approval. At what point are we comfortable saying, yes, this is the one we should pursue?



Rob [00:01:13]:


This is the one that's worth going through the time for. Is it OpenAI and ChatGPT because they were first, or is there a new one coming that's gonna be better? Is that Claude? Is that something else that will come out next week? And part of my wondering comes down to thinking about the Internet. I remember in the late nineties there were a ton of new websites that were supposedly the latest and greatest thing. Some have become very successful, like Amazon, a household name. And then others, like pets.com, that people thought were gonna change the way everyone bought their dog food, had a phenomenal crash and didn't make it. And I really wonder, are we in the midst of trying to pick a winner so we can provide the licensing for our students, provide access and opportunity? Because the thing about these AIs is that many of them have a subscription fee associated with them. It's not like a website where I can just go and use it and decide if I like it; it's a financial investment in whichever new AI of the day, the new flavor, is being pushed out there. And how does that get navigated in what we're doing and what we're seeing?



Craig [00:02:25]:


So there's a lot to unpack there, because I think that's a critical issue that you raise. The first thing that I started thinking about is, does it matter which tool we use for a lot of this? I think in some cases, yes, maybe, but in other cases, no. Think about teaching somebody spreadsheet skills. You can use Excel. You can use Numbers. You can use OpenOffice. You can use Google Sheets to teach the basics, but there might be times when you really need to teach Excel. And I think maybe the first thing to do is to separate those out.



Craig [00:03:09]:


Or is this like my principles of information systems class, where I don't care what they use because we're not doing anything that's really all that intricate? Maybe if you're teaching some kind of a programming class where you want them to use a particular tool, it becomes a different issue. So I think that's one of the things we need to think about: are we teaching the tool, or are we teaching the concepts? If we're at that concepts level, we can probably use anything. But if we're trying to teach them a particular tool that's dominant in whatever the field is, then we need to give more thought to exactly which one to use, and kind of lock in to a provider at that point. At a more general level, that's a reason I'm such a big fan of Poe, poe.com, because they pretty quickly come out with whatever the new models are. I may not get this exactly right, but it's about $20 a month.



Craig [00:04:03]:


And you get access to 25 or 30 different models, including all of the big ones. So that's what I recommend to my students, although in my case, they're fine with the free versions; it doesn't really matter. But for the listeners out there, if you have to pick one, I would pick Poe, just because it gives you that flexibility. But then you raise the legal issues and the institutional issues. If we're doing this as a one-off, it's not too tough to figure out what to do.



Craig [00:04:31]:


Just pick one, go with it, whatever is the right one for your circumstance. But I don't know what we're gonna do around the institutional issue. I have some hope that maybe Microsoft Copilot will become the thing, because Microsoft is already so embedded in a lot of universities. You've got Google on the other end of things. I don't know about Washington State, but at Tech and at some other schools I've been at, the students all use the Google Suite; their email address is a Gmail address. So maybe that's what ends up happening. But as individual instructors, I don't know that I'd worry too much about the institutional aspect of it just yet.



Craig [00:05:09]:


You know, you're an administrator. I'm a former administrator. We have to worry about those kinds of things. But as long as what individual instructors are doing isn't overly risky, they can probably do what they want for now.



Rob [00:05:21]:


Yeah. No. That makes sense.



Craig [00:05:23]:


But it's gonna be an issue at some point. So I think you raise a good point, and we'll just have to see. See, this is why, if you're on the faculty, just stay faculty. Trust me. Just stay faculty. It's a much better life overall.



Craig [00:05:42]:


So that aside, I think we should mention that it's really a different situation on the staff side of the house. They may have very good reasons to standardize on a particular platform, but I would advise not chasing the latest and greatest. So right now, you were talking about Claude. Was it Sonnet?



Rob [00:06:06]:


Yes. 3.5. Yep.



Craig [00:06:08]:


Which is outperforming Claude Opus, their bigger model, and also GPT-4o. But it doesn't matter. I don't think the differences are large enough to be material for most things that most of our listeners are gonna be doing. So don't get in the habit of switching back and forth over what are really fairly minor benchmark improvements. It's kinda like, should you go out and buy a new computer because Apple went from the M2 chip to the M3 chip? Probably not.



Rob [00:06:44]:


Well, that raises another point, and I think you wrote about this in one of your recent newsletters, Craig: Apple's entry into AI is gonna require people to upgrade their phones if they wanna use it.



Craig [00:06:59]:


That's a nice transition, by the way. Yeah. Let's transition into talking about Apple.



Rob [00:07:04]:


Yes. So I bought the 12 Pro, you know, two years ago. And my idea was, I'm gonna use this phone until it just stops working, because these phones cost $1,000 or more. They're computers, and they hold up. They last. But then you get a drop like this, with some pretty game-changing sorts of AI interactions that require you to get rid of a working device in order to buy a new one. And this balance between sustainability and being able to use new technologies is, to me, a really interesting conversation about what's important to us as a society.



Craig [00:07:43]:


Yeah. Right. Well, as I mentioned in the newsletter, I was at least mildly irritated, because I literally just bought an iPhone 15. And I thought about the Pro because I really like the titanium, but I had no real reason to buy the Pro. In fact, my old 13 was fine; just the charging port was getting flaky on it.



Craig [00:08:06]:


So I bought a new phone. And now I actually know what I'm gonna do. Since my wife doesn't listen to this, I'm gonna get the Pro when they get all the AI stuff actually implemented, and then she can have the 15. So it'll be fine. But you're right. And is this a one-off where, from now on, everything you buy from Apple on the phone side will be capable of the AI features, or is this something we're gonna have with every generation? It goes back to the idea of whether you should chase the latest and greatest AI models. Pause a little bit and think: do you really need whatever this new thing is? Especially when it comes to AI, because the talk is big, and I haven't seen any of this implemented yet.



Craig [00:08:54]:


And given how crappy Siri is, I'm not so sure that it's gonna be all that great. We've already caught Google a couple of times putting out these really slick demos where the product didn't really work. Apple, I think, has a better track record around that, but I would not ditch my phone right now to go out and buy one that's AI-capable. Same thing with the Copilot-enabled Windows computers. They sound really great, but I would recommend just taking a breath and seeing if it's something that you really need before moving forward. Because I'm guessing that for a lot of people, it's really not gonna be anything they need right away.



Rob [00:09:34]:


Yeah, and I would agree. But the one concern I have as an educator is, at what point does the better-off student have that newer technology, whether it's the Copilot-powered Windows machine or the 15 Pro or whatever it is, versus a student who's struggling just to be able to attend college? That creates gaps in what's available to them to help with their writing or to help with their integration of various tools. Some big discrepancies, potentially, that affect opportunity.



Craig [00:10:07]:


Well, absolutely. And I think that's an episode we should do a deep dive into later. I know we've talked about it, and I've written about it, the equity-of-access issue, because it's gonna have multiplier effects. If you don't have access to good tools, not only are you gonna be disadvantaged during your time in college, you're gonna be disadvantaged afterwards, because you won't have the same level of expertise in what appears to be a critical tool for knowledge workers in the future. So it's a huge problem. I was really happy that OpenAI, when they released GPT-4o, made it available even to free users, on a somewhat throttled basis. That's a step in the right direction, but it's not gonna fully address the problem.



Craig [00:10:57]:


And that's where institutions need to step up and do what most of us have done with Office 365, where it's just available to everybody. Right now, Copilot is not that way, but I'm hoping there will be some moves in that direction. So let's talk a little more about Apple and what I think is the really big piece of this that's not getting attention, and that's the idea of friction. Rob and I both do privacy and security research, and one of the things we've talked about in the past in that context is this idea that being private or being secure just gets in the way. It doesn't really do anything for you. It's kinda like locking your door. You lock your door because you have to lock your door.



Craig [00:11:39]:


You don't lock your door because it helps you get the groceries inside or do whatever it is that you wanna do. And we have the same situation here with AI. Right now, we kind of have to do something extra to use AI. You have to have some level of expertise to use the chatbots. It's not an insurmountable skill level, but you do have to have some skill. That's why, like we talked about last time, prompt engineering is kind of a thing. Prompt design is a thing. But that could change with Apple.



Craig [00:12:12]:


You know, Apple is good at getting rid of friction, which is why I have an iPhone, an iPad, and a Mac. It's not that they can do a lot of things that I can't do with Android and Windows; they just don't have the same level of friction. So what do you think about that? Am I off base there? Am I wrong?



Rob [00:12:29]:


I really think it's gonna be the thing that pushes many more of our students into using AI tools purposely. And the reason I say that is, right now, if I wanna use, say, ChatGPT, I copy and paste something, and I say, you know, take this paragraph and make it sound better, or whatever I might be trying to do to improve my writing. If that was plugged in automatically to my Microsoft Word, it would just be way more seamless. And at some level, that's starting to happen, right? I have Grammarly Premium, which does some of that, and it's made life a lot easier to ensure the emails I write are worded professionally, those sorts of things. But if that happened automatically to my text messages and to the email messages I write on my phone, I think all of a sudden these AI tools and the promises of how they're gonna change our lives, how they're gonna change how we use our technologies...



Rob [00:13:29]:


I think we'll see a lot more implementation of that, a lot more people using it. And not even because they know they're using it, but because it just happens.



Craig [00:13:38]:


Yeah. I mean, that's the whole point. You want it to just happen as you need it to happen.



Rob [00:13:44]:


You mentioned... that raises an interesting point. I'm gonna tie it into another one of your recent newsletter posts, Craig, about what is cheating with generative AI and how does that work. When it just becomes a tool that's automatically baked in and part of everything, you know, in many ways I would love to see what students' thinking is, beyond getting lost in grammatical problems.



Craig [00:14:18]:


Well, people are saying we need to go back to blue books for exams. And it's like, no, that's the last thing we need to do. Have you tried to read that handwriting? I mean, nobody can read my handwriting either, so I'm not denigrating anyone. But you're writing under stress and trying to write quickly, and it's like, no, I don't wanna read that stuff. But it's interesting that you bring the idea of Grammarly and writing into this whole cheating thing.



Craig [00:14:43]:


Because as you were talking a couple of minutes ago about Grammarly, I thought, well, there used to be friction in doing spell checking. A lot of people don't remember this, but you used to either have to install a spell checker, or you'd go in and click on some menu item, then Spell Check, and you'd wait while it spell checked your document and brought up a list of misspelled words. Now it does the little squiggly underline, and that's all about the reduction of friction. There's still a little bit of friction there, but it's tiny compared to what it used to be. And I think we're gonna see the same thing happen here. And Apple, to me, is uniquely positioned to pull this off because they control their ecosystem.



Craig [00:15:25]:


And people are already used to it. Whether this is good or bad, I don't know, but all of the data just goes to Apple. It's a losing battle to try to keep up with all the settings.



Rob [00:15:36]:


Well, and this goes back, though, to why it's the 15 Pro: they claim they do a lot of the processing locally on the device, so they don't share it to the cloud and they don't send it home to themselves. And the reason the lesser phones can't do this is that they don't have the processing power to run that language model, that AI engine, on the local phone. But that is gonna be a huge thing to look at: is the promise of "we're not collecting your data" a true promise, or is it something that will change, something that will be exploited as leadership changes, as revenue opportunities get created, and so forth?



Craig [00:16:18]:


Yeah. I think Apple is as trustworthy as any big giant company is, but you're right. Things change, and what we think they mean in that policy may not be what they really mean. But I think the big message here is that we're gonna see a huge increase in the use of AI. Right now, a lot of students don't wanna take the time to learn it, or they're not gonna wanna put the effort into it. When it just happens, it's gonna be good in a lot of ways. Like you were saying, it automatically checks your emails and your text messages for wording and appropriateness or whatever you want it to check for. But it's gonna cause some challenges for us in higher ed as well, because it's gonna be effectively undetectable in a lot of ways. It's mostly undetectable now in any automated way, but it's just gonna be more so.



Rob [00:17:11]:


Well, and I think what that does, Craig, and this goes back to a theme we've talked about a lot of times, is that at some level it changes what our job is as educators, away from being the person who tests whether you know something, or tests whether you know how to do something. Right? Knowledge versus process. And the process is gonna be way more important than just regurgitation of facts.



Craig [00:17:35]:


Well, yeah, I think you're absolutely right there. And it's another thing I've been thinking a lot about but don't really have any good answers for: do we need to separate learning from assessment? We tend to couple those pretty tightly. Our learning outcomes are evaluated through assessments. But an assessment is just some sort of external indicator of whether or not the students have learned. It really doesn't tell us whether they've learned; I mean, it does to some extent, but not with anywhere near perfect accuracy. So if we take away the assessment part, do we then take away the incentive to cheat? When I was playing sports, I didn't cheat on my drills. You did your drills because you wanted to be a better player.



Rob [00:18:18]:


Mhmm.



Craig [00:18:18]:


And it wasn't, you know, "you can rebound like nobody's business and you play really great defense, but your drills suck, so we're not gonna put you in the game." No. If you perform, you perform. I know you've got a son who's a fantastic musician. He plays his scales not because he's gonna get graded on playing his scales, but because it makes him a better guitarist.



Rob [00:18:39]:


Mhmm.



Craig [00:18:39]:


So I don't know. I think that's something we need to think about for the long term.



Rob [00:18:43]:


Yeah. I'll put in a plug for our textbook, Craig, and the importance of taking an active learning approach. Because when you're active in the learning process, you're working on "how do I do this thing?", and in the process, you're learning about it. It's really getting your hands dirty with the learning process that causes you to see how this information leads to your decision making, or what the implications of doing this thing are. And that's gonna become even more important as we move into this world of generative AI-based technologies.



Craig [00:19:16]:


Yep. Absolutely. And by the way, that's Information Systems for Business: An Experiential Approach, from Prospect Press. Rob and I write that book along with France Belanger of Virginia Tech, and the 5th edition is coming out. It's actually out; you can buy it now. And it's designed for the required... there you go. That's it.



Craig [00:19:35]:


Nice orange cover. It's designed for the principles of information systems, intro to information systems level of class. If you want more information about that, feel free to email me at craig@AIGoesToCollege.com. So, we mentioned cheating. Let's move into our next topic. I wanna give a quick background on this one. I was at a system-wide conference that we have every year; my university is part of the University of Louisiana System. I was down in New Orleans, and there was a panel of people that... this is gonna sound so bad.



Craig [00:20:07]:


I don't even remember exactly what the topic of the panel was. But one of the panelists was the dean of libraries at one of the institutions, and she said, I don't remember the number, something like, there are ten different types of plagiarism. And there are? I thought there were, like, two: plain old plagiarism and self-plagiarism, which I'm still not convinced is a thing. But there are all these different types of plagiarism, and she started running through some of them. And that got me thinking about AI. My thought, and Rob, I wanna know what you think about this, is that a lot of times, students don't know if they're cheating with AI.



Craig [00:20:47]:


I mean, I think sometimes students know they're cheating, like when they're copying and pasting; they know they're doing something they shouldn't be doing. But there are a lot of edge cases where it's not entirely clear whether what they're doing is wrong. You brought up Grammarly. If Grammarly points out a grammatical error in one of our classes, I don't think that's cheating. But how are students gonna view that? So what do you think?



Rob [00:21:11]:


Yeah. I think, you know, to talk about Grammarly, a lot of that is baked into Microsoft Word now, right? That sort of grammar editing has become a tool that's baked in, and I think that's reduced the friction around whether it's something they should or should not be using. So I do think students struggle with at what level they can use AI, and I actually blame it on us. I blame it on the faculty, and I'll give you an example. We had an athlete in one of our classes whose coach gave the players a 30-minute lecture about never, ever using generative AI, because it is cheating, and you will get in trouble, and then you can't play your sport anymore. So don't use it.



Rob [00:21:54]:


Right? The coach is protecting his players from getting into academic trouble. And then the student is taking a class that we teach in the College of Business, where we're encouraging them to use generative AI and to learn how to use this tool to do various things. And she's like, my coach told me I can't use it. So she was at this point of feeling like she'd been given one set of instructions and then another. Across the academic institution, I think we're all at different places about what's allowed. And so the lowest common denominator for students to make sure they stay out of trouble is, I'm just not gonna use it, because I can't keep track of which classes I can use it for and which classes I can't. Until we as academics become somewhat consistent on the ethical use of generative AI in the classroom, I think students are gonna struggle. There'll be some who push the envelope and bend the rules, because that's kind of their attitude.



Rob [00:22:52]:


And then there are others who are 100% rule followers, who are gonna say, I'd much rather work harder and do all these things myself than take advantage of some of these really great ways to enhance my ideas and the things that I come up with.



Craig [00:23:07]:


Yeah, I think that's spot on. If they don't understand plagiarism fully, how can we expect them to understand the ethical use of generative AI, especially when we don't understand it ourselves?



Rob [00:23:19]:


Yeah. And I really like talking about this as the ethical use of AI, as opposed to academic integrity. Because in life, a lot of the decisions you make are about the ethical appropriateness of what you're doing, and I think it translates well to the real world. It's about getting to the point where we can have conversations around, in this context, what is ethical. And one of the key points I think is important is the idea of transparency.



Rob [00:23:49]:


And this is what I tell students: if you're gonna use generative AI, tell me how you used it. Maybe even tell me a little bit about your thought process on whether you trusted what you got, and let's have that discussion about how it's being used, as opposed to trying to sweep it under the rug and hide it.



Craig [00:24:05]:


No, no, that's right. One of the things that I tell our students is that transparency will save you. Because if it's inappropriate use that you disclose, it's a matter of you not understanding where the line was. That's an entirely different thing than setting out to cheat. It's a matter of a lack of understanding, not a lack of honesty or integrity.



Rob [00:24:28]:


Absolutely.



Craig [00:24:29]:


So, yeah, I wanna give two big messages here. One is that we need to set expectations for the students. We need to let them know what our rules are. At Louisiana Tech, in the College of Business, we've just adopted a policy where you have to have a policy around AI, and we published a framework that has to address these five or six elements (I don't remember exactly what the number was). I think that's a good approach, because in a freshman English class, using Grammarly to help improve your writing might be cheating; it might be inappropriate. In our classes, it's like, god, please do that. Please.



Craig [00:25:12]:


Some of the papers that I have to review, I wish the authors would do a little rewriting with Grammarly too, but that's a tangential point.



Rob [00:25:19]:


Just wait until Grammarly or whatever it is intercepts the words you speak and makes them grammatically correct before anyone hears them.



Craig [00:25:26]:


That's what we need. Yeah, that would help me quite a bit. So I think that's one thing: we need to set expectations. But I think we also need to help students think about this at a higher level. And I try to do this in a couple of ways, like getting them to think about whether they're crossing the line from using AI as a tool to help their understanding and their learning, to using it as a substitute for the work of learning. You can't learn without work. Any kind of explicit knowledge, explicit learning, you can't get without work.



Craig [00:25:59]:


And really, a lot of implicit learning as well. But the other big thing, and there's gonna be a newsletter article coming out on this sometime in the next couple of weeks, is that we need to start helping students take a long-term view of all of this. Short-term expediency is fine for a lot of things, but it is not fine for learning. If you don't do the work now, you're gonna have trouble later. Your future self is gonna be irritated with your past self. And so we really need to think about how to communicate that to students in a better way, because I don't think we broadly do a great job of that. We get caught up in the short term.



Craig [00:26:37]:


You know, what do I need to do this week? And they do the same thing. Sometimes they take what might be expedient in the short term but is harmful in the long term, and we need to change their thinking around that.



Rob [00:26:47]:


Yeah. And I think that's true. Maybe it's how we should have been educating all along, right? Learning how to learn, at least in the world of information systems, though I'd imagine it's true for most every discipline, is the most important thing you're going to do in college, because what you learned how to do in 2024 is maybe not gonna be how you're doing it, or what you're doing, in 2026 or 2028. But if you've learned how to take these new technologies and look for their value: how does this make my job better? How do I begin using this new thing that just came out? What does that process look like? And how can I do it faster than I could if I hadn't gone to college? I think that's setting you up for a successful career.



Craig [00:27:32]:


Yeah. Absolutely. But, you know, it's really hard to take the long-term view when you're young. It's so far away. I mean, when I think back, the age I am now? You can't even picture that when you're 20. It's unfathomable that you'd ever get to that point. Okay.



Craig [00:27:51]:


So that can transition us into our last topic for today, and that's that if you're getting bad results from AI, the problem might be you. I hear people say AI output is garbage, and a lot of it is. A lot of it's terrible. And, therefore, AI is no good. But I don't think that's quite right. If you give me a saw and a piece of wood, you are not gonna get good outcomes. The problem is not the saw. The problem is me.



Craig [00:28:18]:


And I think there's a lot of that going on with AI as well. So I don't know. What do you think?



Rob [00:28:23]:


Yeah, I think that's true. It goes back to the idea of prompt design and how you write these prompts. And to use your example of cutting wood with saws, I would imagine that if we paired you with someone who knew what they were doing, and you spent a week doing nothing but cutting wood with a saw, you would become remarkably better at it. I think it's the same way when it comes to generative AI. If you just grab it, do one prompt, look at it, and say, that's terrible, well, you've not gone through a process of learning.



Craig [00:28:55]:


Yeah. Better is a low bar with my woodworking skills, by the way.



Rob [00:28:58]:


You know, we gotta start somewhere.



Craig [00:29:00]:


That's right.



Rob [00:29:00]:


But I think the output does get better as you get better at prompting, as you get better at refining, as you learn how to speak the generative AI language. And I think there is some hallucination, or some BS, as I've heard it referred to in some places, that a generative AI gives you. Part of the "me" in that is recognizing when the output I get is incorrect, wrong, or overstated, and using my own editing skills, not just on the prompt but on the results: maybe I need to take the output and remove the parts that are wrong.



Craig [00:29:36]:


Yep. By the way, there's an article that just came out titled something like "ChatGPT is bullshit." And their article is wrong, by the way, so I'm seriously considering writing a response. I mean, their larger point is fine, but "bullshit" is the wrong word.



Craig [00:29:55]:


And, like I said, I'm not gonna reveal too much more about that right now, but I think that's an article that's waiting to be written. The big point here really ties into the same thing with learning: AI supplements you. It doesn't replace you, especially for anything important. If AI can go through and prioritize my emails reliably, that's great. But I'm talking here about things that you're creating that are gonna go out and be a representation of who you are. Just like you can't hand that to an assistant, you can't hire some intern and say, hey, write this article for me. You've got to do it yourself, or you've gotta guide your assistant.



Craig [00:30:35]:


It's the same kind of thing here. You're the creator, not AI. You're using AI as a tool to create; the AI is not the creator, you are. But that doesn't mean you shouldn't use AI. It means you should invest the time in getting good with it, so you can produce better content with AI than you can without it.



Rob [00:30:55]:


And more efficiently. So all of a sudden, you're doing more work in a shorter amount of time, which amplifies what you can do in a day.



Craig [00:31:04]:


Or more effectively.



Rob [00:31:05]:


Mhmm.



Craig [00:31:05]:


I mean, I think efficiency is certainly one part of the equation. I may be less efficient in terms of how many words I write with AI, but the output is a whole lot better.



Rob [00:31:16]:


Mhmm.



Craig [00:31:17]:


I mean, I'm sure a lot of what I write I could write much more quickly without AI, but I'm not sure people would wanna read it. So it really is a matter of using the tool in the right way. And there's a little bit, this may not apply too much, so it might end up getting cut, but I think there's kind of an out-there reason that you really wanna make sure you're using AI to help you, not to do the work for you. We all have unique voices. Rob, you come at these things from a little bit different angle than I do. If we got a third person on here, they'd have a little bit different take.



Craig [00:31:52]:


And if you just let AI do your creating for you, you rob the world of that voice.



Rob [00:31:57]:


Mhmm.



Craig [00:31:58]:


And I think that's something we can subtly teach to our students and to our coworkers who wanna use AI: don't deprive the world of your voice.



Rob [00:32:10]:


Yeah, I absolutely agree. One of the big issues I've had, and I'm trying to figure out how to do this, is getting ChatGPT to write in my voice, from the perspective of how I communicate and how I write, which is one angle of things. But I don't want my view on the world driven by what ChatGPT says my view on a particular topic should be. That should be me, what I can piece together and what I'm about. Part of what makes society unique is that everybody has those differing views, and we come together and hash them out. We don't just rely on the robot, if you will, to tell us all how to think. That, to me, becomes quite a boring world to live in.
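
One way to chip away at the write-in-my-voice problem Rob describes is to give the model a few samples of your own writing and ask it to match the style without touching the substance. A rough sketch, where the samples, the instructions, and the model name are all placeholder assumptions:

```python
# A rough sketch of style matching via writing samples, assuming the openai
# Python package and an OPENAI_API_KEY environment variable. The samples
# and instructions are placeholders, not anything from the episode.
from openai import OpenAI

client = OpenAI()

MY_SAMPLES = [
    "A paragraph I actually wrote, pasted here verbatim...",
    "Another paragraph I actually wrote...",
]

system_prompt = (
    "You are a copy editor. Match the tone, rhythm, and word choice of the "
    "writing samples below. Do not add new ideas, change positions, or "
    "introduce claims the author didn't make.\n\nWRITING SAMPLES:\n"
    + "\n\n".join(MY_SAMPLES)
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Rewrite this draft in my voice: <draft>"},
    ],
)
print(response.choices[0].message.content)
```

Note the split Rob is after: the style comes from the samples, but the views in the draft stay yours.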



Craig [00:32:52]:


It's kinda dehumanizing. The other big reason is that I want people to be very careful about how they use AI for creations. And I wanna take a second here and clarify what I'm talking about. Look, if I'm writing letters of recommendation for undergraduate students for an internship, and I can ethically take their resume, the job description, and a sample of a letter I've written, put it into ChatGPT, and have it crank out the letter for me, great. Right? Because that's not my voice.



Craig [00:33:25]:


In fact, my guess is that those letters never get read in 90% of the cases. But if I've got one of my doctoral students who's applying for a job at Washington State, and somebody that I know, I know a lot of people there, or at least a number of people, is gonna read that letter, I don't want ChatGPT doing that for me. Maybe AI can critique it for me, let me know if there are any gaps, let me know if I missed anything, but that's gotta be mine.
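
For the routine-letter workflow Craig just described, the mechanics are simple prompt assembly. A hedged sketch, where the file names and the model are assumptions, and whether this use is ethical at your institution is the judgment call Craig is making, not something the code settles:

```python
# A sketch of assembling a routine recommendation-letter prompt from the
# three inputs Craig mentions. File names and model are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

resume = Path("student_resume.txt").read_text()
posting = Path("internship_posting.txt").read_text()
sample = Path("my_past_letter.txt").read_text()

prompt = (
    "Draft an internship recommendation letter in the style of the sample "
    "letter below, using only facts from the resume and job description. "
    "Do not invent accomplishments.\n\n"
    f"SAMPLE LETTER:\n{sample}\n\n"
    f"RESUME:\n{resume}\n\n"
    f"JOB DESCRIPTION:\n{posting}"
)

completion = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[{"role": "user", "content": prompt}],
)
print(completion.choices[0].message.content)  # a draft to read, fix, and sign
```

And per the distinction Craig just drew, this is for the routine case only; the doctoral-student letter stays yours, with AI at most critiquing gaps.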



Rob [00:33:50]:


Well, I'll confirm that. In the hiring process, I've seen people whose letters of recommendation I was 95% certain were written by ChatGPT and not by the person who signed them. And we took that as a red flag: if the person couldn't take enough time to do that themselves, what is that really saying about the letter they're writing for the candidate? It didn't necessarily help the candidate, because it was like, I don't know how to take this. We see it with graduate student applications too. Oftentimes, we'll see students who obviously wrote their statement of purpose not in their own voice, but in the voice of a technology. And part of entering the academic life is that you've got to be able to express your thoughts, your ideas, your voice in your research. If the foot you're putting forward is, I'm gonna rely on a bot to do that for me, what does that say?



Craig [00:34:38]:


Well, and there's a little bit of a corollary there too: if you have AI do that kind of work for you, you've deprived yourself of the learning and growth that can come from the writing. A lot of us teach our students that writing is thinking. Until you write it down, you haven't really thought through it completely. And with those essays, look, if you go into, especially, a doctoral program, you'd better have thought through what you wanna get out of it, because it's four years of being poor and working your tail off. It's got a huge payoff at the end if you've thought through all of these things. But if you just plug something into an AI tool and get your statement of purpose out, you're probably not gonna make it through the program, because you haven't really thought through whether you should be in the program or what you wanna get out of it.



Rob [00:35:30]:


Yep.



Craig [00:35:31]:


And so I think that's the other big thing. For something important, you have to kind of triage. If it's a reply to some routine email, yeah, do that for me. Other things, man, not so much.



Rob [00:35:44]:


Well, it goes back to the advice I give undergraduate students, or doctoral students, who are struggling with writing essays, and that's the whole concept of write with your heart, and then, when you're done writing, edit with your brain. Because a lot of people get stuck in the world of "I can't write anything because it needs to be perfect." I think generative AI and those sorts of tools might be helpful there: you get your ideas and your thoughts down without worrying about perfection, and then you let the technology help you improve the writing.



Craig [00:36:16]:


I think that's precisely the way to think about it. Getting this mental model of AI as a colleague is a great way to do that. Just like we send our articles to each other for commentary, AI can help you with that kind of thing. So it's like, look, I've written this, I've edited it, help me clean it up. It's just like if I send something to you and say, hey, Rob, I'm getting ready to submit this.



Craig [00:36:42]:


Can you read through it and see if there are any big gaps, anything big? And Rob literally has done this in the past. I think you've done that for me. (I have.) Pretty sure. Yeah. It's the same kind of thing with ChatGPT. And I know we say ChatGPT, but apply that to any of these tools; it's just a convenient shorthand.



Craig [00:37:03]:


Using AI to get you over blank page syndrome, to help you get organized, all of those things are great. Just like if we're at a conference and I've got this idea floating around in my head: hey, Rob, can we grab a cup of coffee or a beer? Can you help me think through this? It's great for that. It's great for the back end, where I just need some help getting this cleaned up. But don't let it do the work for you. If Rob writes the article, it's Rob's article. It's not my article.



Craig [00:37:28]:


Same thing.



Rob [00:37:29]:


100%.



Craig [00:37:30]:


So, alright. Well, this is all available at aigoestocollege.com. I don't think I've said that enough in this episode yet. You can sign up for the newsletter there, or you can search for it on Substack if you wanna go directly to Substack. Rob, any last thoughts on any of this?



Rob [00:37:47]:


Nope. I think we hit a lot of things today, and hopefully it's helpful.



Craig [00:37:50]:


Well, it is to me. I don't know if it is to anybody else, but it's always good to think through these things. So remember, go to your favorite podcast app and hit follow, like, subscribe, whatever button your app has. Or you can always go to aigoestocollege.com/follow, where there's a whole bunch of buttons for a whole bunch of apps, and you can subscribe, like, etcetera, with one little click. Alright. We will talk to you next time. Thank you. Thanks for listening to AI Goes to College. If you found this episode useful, you'll love the AI Goes to College newsletter.



Craig [00:38:25]:


Each edition brings you useful tips, news, and insights that you can use to help you figure out what in the world is going on with generative AI and how it's affecting higher ed. Just go to aigoestocollege.com to sign up. I won't try to sell you anything, and I won't spam you or share your information with anybody else. As an incentive for subscribing, I'll send you the Getting Started with Generative AI guide. Even if you're an expert with AI, you'll find the guide useful for helping your less knowledgeable colleagues.