April 22, 2025

Of Syllabi, Spells, and Structured Prompts: AI for Fall Teaching


This podcast episode elucidates the necessity for higher education professionals to cultivate a comprehensive understanding of generative artificial intelligence (AI) and its implications within the academic sphere. We, Craig Van Slyke and Robert E. Crossler, alongside our esteemed co-author France Belanger, delve into practical anecdotes regarding the integration of AI tools, such as ChatGPT, into pedagogical practices. Through illustrative narratives, we highlight both the advantages and limitations of AI, emphasizing the importance of expertise in ensuring accurate and reliable outcomes when employing these technologies. Furthermore, we discuss actionable strategies for faculty members to prepare for the upcoming academic term, advocating for the enhancement of syllabi and the generation of active learning exercises. Ultimately, we reinforce the imperative for educators to embrace AI, not merely as a technological advancement, but as a vital component of modern educational methodologies.

Takeaways:

  • Incorporating generative AI into educational practices necessitates an understanding of its limitations and capabilities.
  • Faculty should actively engage with AI tools to enhance their teaching methodologies and improve student learning outcomes.
  • Effective use of AI can streamline the process of creating educational materials, such as syllabi and assessments, thereby saving time.
  • AI's role in generating content must be accompanied by critical evaluation to ensure accuracy and relevance in educational contexts.

In an enlightening exploration of generative AI's role in educational environments, the podcast episode scrutinizes the intricate balance between technological assistance and the necessity for human oversight. The discussion is anchored by a personal narrative involving a Dungeons and Dragons gaming session that serves as both a metaphor and a case study for the broader implications of AI in education. As the hosts recount their experiences, they navigate the myriad challenges and advantages that AI presents, particularly in terms of efficiency and creativity. The episode emphasizes the essential role of educators in critically evaluating and refining the outputs generated by AI systems, thus ensuring that the integrity of educational content is preserved. Furthermore, the hosts advocate for a proactive approach to embracing AI technologies, encouraging educators to experiment and adapt rather than remain mired in traditional methodologies. Ultimately, the conversation serves as a clarion call for educational professionals to engage with AI thoughtfully, fostering not only personal growth but also the evolution of pedagogical practices in an era defined by rapid technological advancement.

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

00:41 - Introduction to the Podcast

00:59 - The Role of AI in Gaming and Education

08:41 - Embracing AI in Education

19:06 - Exploring AI in Education: The Role of ChatGPT and Copilot

23:53 - Exploring Prompt Engineering Techniques

30:17 - Preparing for a Creative Summer

Transcript
Craig

Welcome to another episode of AI Goes to College, the podcast that helps you navigate the changes being brought on by generative AI, or something like that. That's kind of what we do. I am joined once again by my friend and co-host, Robert E. Crossler, Ph.D., from Washington State University. So, Rob, how's it going? You're back in the US?

Rob

Back in the US, and looking forward to the end of a term and seeing what summer holds before we start this process all over again.

Craig

I think everybody's looking forward to the end of the term. All right, well, let's get into it. You related a pretty interesting story about Dungeons and Dragons and AI. Why don't you tell us about that?

Rob

Yeah. Thanks, Craig. So I had this great dad idea where I suggested to my kids that we play Dungeons and Dragons together over the Internet, because they both live on opposite sides of the United States, and I'm trying to find those moments as opportunities for us to still have connection. And my oldest child got super excited about it because it was in his wheelhouse and said, Dad, do you want to do a Star Wars themed version or do you want to do the Lord of the Rings themed version? And I was like, well, Star Wars is my cup of tea. He's like, perfect. And he sends me a 312-page PDF document and says, I need you to read this and then create a character. And I was like, whoa. Dad's great idea wasn't to read a 312-page PDF document so I could play a game. So I ultimately took the instructions he gave me for the kind of character to create, along with that 312-page PDF document, and fed them to ChatGPT. And I asked it to create the character for me, and it did. It created me a character, gave it a name I would've never come up with, and gave it all the features, all the traits and so forth. I was like, sweet. It was done in about five minutes. Good to go. And so I sent it off via email to my oldest child, and he's like, Dad, this is great, but your statistics, your health numbers or whatever, aren't high enough. You should probably reroll. And I was like, I don't know what that means. I didn't roll this. ChatGPT just made them up for me. It worked for me. So let's have a Zoom session so we can talk about this and get it figured out. And as we're going through it, he's looking at the different decisions that were made, and he's like, you know, this armor class does not exist in the Star Wars PDF document that I gave you. That's something that exists in the Dungeons and Dragons rules. And so even though I had given it this PDF that had all the rules of everything we were going to do, it went and looked at other knowledge it had to create this character.
I would say about 10 to 20% of the decisions it made for me were incorrect. And so we went through a process, we fixed them, we made it better. But one of the things it did for me is it helped me realize that even when I give it this very official-looking document, for someone who's a novice and doesn't exactly know what they're doing, I couldn't 100% rely on it. It required someone with more knowledge, more expertise, to ensure that it was an accurate, acceptable document for the playing of this game. So it was super helpful to get 80% of the way there, but also a realization that if I'm going to rely on this for more of these games, I need to have that expertise in order to make sure that I'm not playing by incorrect rules.

Craig

Yeah, absolutely. And which AI did you use?

Rob

I used ChatGPT for that, 4o.

Craig

I wonder if it might have been better in Gemini because of the context window. 312 pages is pretty long. So if you ever feel like playing around with it, although you've probably invested enough time in this already. I had something kind of similar happen. I was working on something and ChatGPT said, do you want an annotated bibliography for this? I said, sure. So I took the list of references and sent it to my graduate assistant and said, see if you can track these down. Well, he comes back a couple days later and says, I just can't find these two articles. I thought, oh, I know what's going on. I was pretty sure, but I said, let me check. Well, they were complete hallucinations. And this was kind of no harm, no foul, because nobody was really relying on this. We were just seeing if those articles were out there. But you've got to be really careful, even with a guiding document, or when it suggests, I'll do this for you. So, yeah, that's a lesson learned.

Rob

Well, and it's interesting too. If knowing the right language model to use to accomplish what you're trying to accomplish is an important piece of this, that's going to be a challenge for more novice users of generative AI technologies. I had a conversation this morning with a colleague who is losing confidence in Copilot as a tool because he doesn't feel he's getting very good results. It doesn't reprocess and rethink when he gives it different prompts, and he's getting very frustrated with stuff that he doesn't feel is as usable as he would like it to be. And I'm sure if he used a different tool in the same way, he'd get better results, different results, because all these tools are different. But if you take the moment to step in and start using it like we've encouraged people to do, and then it's not giving you results, it could be discouraging, and one of those steps that makes you say, I'm going to keep doing things the way I've always done them.

Craig

Yeah, that's a good point. But he's also learning what the limitations are, so there's something valuable in that. And it's like any tool: it's going to be good at some things, not so good at other things. What makes generative AI tougher in that regard is what you said. There are all these different tools. With a spreadsheet, for most people, Excel, Numbers, Google Sheets all kind of do the same thing. So if you can do it in Excel, you can do it in Sheets or whatever. But these AI tools are very different. There's a core that they're all kind of equally competent at, but one is really good with longer documents, and one's good at images, and one's good at reasoning, and so on and so forth. I think we're going to be dealing with this for a while. But all of that being said, for the folks out there who are trying to figure out how to get started with this, just pick one. Pick one. Figure out what it's good at, what it's not so good at. Those things that maybe it's not doing so well today, come back in two months, three months, see if it's better. I mean, there was just a big flurry of new models released, what, in the last week or so. We're going to be dealing with this for the next, I don't know, probably several years. But your story brings up another issue that I think people need to be aware of, and this is the idea of statistical regularity. What these AI models do is they fall into patterns. I'll tell you a quick story about how this came to be pattern persistence bias. I was at Tennessee Tech University giving a couple of talks, one of which was to students. I needed to liven them up, so I got into dad jokes. I said, let's have it tell us a dad joke. It came up with some joke about ladders, but it was kind of generic. And I said, well, you guys get a dad joke. And so they did. And a couple more jokes came up, and they were about ladders. And then I said, well, get it to tell one about Tennessee Tech. And it came up with this one: why did the Tennessee Tech business student bring a ladder to class? Because they heard the stock market was going up. And one of the kids said, what's up with all the ladders? You know, you probably know a lot of dad jokes. I know a lot of dad jokes. Not many of them involve ladders. But it gets into this persistent cycle of tying into the same things because of the way the large language models work. And they tend to repeat words like delve and nonetheless, and bullet points. But I think the same kind of thing happened in your situation, where there was some pattern out there somewhere that led to the... not the Star Wars, the Lord of the Rings. What was it? Armor?

Rob

It was the Dungeons and Dragons. It gave me chainmail as my armor type, and that does not exist in the Star Wars universe.

Craig

Well, of course not. Of course not. Chainmail in Star Wars. Okay. Anyway, I think on that we'll leave this topic and move on to the next one. I know this term isn't over yet, but we're already starting to think about fall. So what should faculty out there be thinking about in terms of preparing for the fall term as it relates to generative AI?

Rob

Yeah, I think one thing that can be helpful, and you talked about this in the newsletter that you sent out earlier this week, is using it to update our syllabi. Right? Every semester your syllabus is 95% there but needs some updates, and it's a great tool for that. The thing I actually find more useful, and where I'm going to spend some time this summer, is taking my slide decks and having it come up with active learning exercises. I'm going to push into that and have it create active learning exercises for me that enhance the use of generative AI in the classroom, getting students exploring and playing with that, and helping me brainstorm and come up with ideas for how I can teach material in a way that is hopefully more cutting edge, more active, utilizing newer technologies than what I may have done in the past.

Craig

That's a great use because it's low cost. Let me see if I can explain what I mean. You want these active learning exercises, so you put your slides for a class into ChatGPT or whatever and say, give me 10 active learning exercises, and maybe you give it some parameters: I want them to last 15 minutes, and so much of that time should be for students to work together, and then some debriefing, however you want to bound those. And in a matter of a few minutes you've got 10 activities. Let's say eight of them stink. In a few minutes you've got two good activities. That's a pretty big win, at least from my perspective. Rob, we've co-authored a textbook together with our friend and colleague France Belanger, and we have a lot of activities in that book. They're hard to come up with if you're trying to come up with them from scratch. It takes a lot of work. And so one of the things I think AI is so good for is just churning out lots of ideas where you can pick and choose. People can get fixated on the bad ones and not pay attention to the fact that they're getting some number of good suggestions. Or what often happens with me is maybe I don't like any of the suggestions, but it gets my brain working. Or maybe I see two that I want to combine, or there's one where there's something there but it's not quite right, and then we work through it. But I think that's a great use of AI. I want to go back to the syllabus, because you and I are probably different about this. I hate the syllabus updates because I've been doing this for 30 years. I don't know that I've ever had a syllabus that didn't have at least one mistake in it. There'll be one 2024 instead of 2025, or something like that. And so if AI can pick those up for me, does that save a lot of time? I don't know. What does it take to update the syllabus? Half an hour? But if it cuts that down, that's 15 or 20 minutes I can spend doing something else.

Rob

Well, the part I like even more so than helping me get the dates right is it can critique the clarity of what you're communicating to students.

Craig

Yeah, absolutely.

Rob

Because I wrote every word in my syllabus, I know what every word means. But having another set of eyeballs say, could you give more examples of what is appropriate use of generative AI in the classroom? That will help students have less ambiguity in how they're going to use it as they go forward. And then you can even ask it to give you some examples of how you might say that, to begin to work through what would be a more meaningful guiding document, as opposed to, you know, do what I mean, not what I say.

Craig

It assumes they're going to read anything you put in there. But, you know, hey, at least give them a shot. And some of them will.

Rob

They do. And they get in trouble for not following the rules.

Craig

Yeah, that's true, that's true. But there's a bigger message here, I think, that we want to get out there in the world. You and I have talked about this, and I've thought a lot about it. I'm sure you have, too. If faculty members are resistant to AI, that's fine, but they still need to become AI literate. I think we've established well enough that AI is here to stay. Barring some apocalypse, we're not going to start using less AI in higher ed. And so I think it's really incumbent upon faculty to start to learn how to use AI, because their students are going to use it in increasing numbers. You just need to understand it. How can you regulate something? How can you leverage something if you don't understand it? So I don't know. Do you agree with me, or do you think I'm wrong?

Rob

I think you're right, 100%. It's not going away. I think understanding how it's being used is hugely important. And it also, I think, can help make faculty more productive and better at their jobs. In a world where so much can be done in ways that weren't possible two, three, four years ago, finding active learning ways of getting the same learning outcomes we were getting before, without having to sit there and spend an entire day figuring out how to create one active learning exercise, is going to make faculty much more focused on the places where they're adding a lot of value in the material, rather than recreating the methods by which they pedagogically communicate it to students.

Craig

Yeah, absolutely. It's tough for some of the faculty. I know we're busy. We're all busy. I've been working all day, and this is a Good Friday holiday for us. It's just the way it is, and it's not going to get any easier. But I think faculty should focus not so much on what I said, like you've got to do this, but more on what you've said: this can save you time and make you more effective. That's the message. I really like your message there. And for all of the folks out there in the world who are trying to get their faculty on board with AI and learning how to leverage it to make their lives better, Rob and I are available, either separately or collectively, to help you with that. We can do it on site. We can do it remotely. Just let us know. The easiest way to do that is to email me at craig@aigoestocollege.com, that's C-R-A-I-G. And Rob, what's your email address?

Rob

It's crossler@aigoestocollege.com.

Craig

Anything else on the fall term preparations? Any other ideas for faculty what they might do?

Rob

Well, what I would suggest, if you've read Craig's newsletter that he wrote about using ChatGPT to do this: I've done the exact same thing using Microsoft Copilot, and Microsoft Copilot worked well for accomplishing the exact same things that Craig outlined in that newsletter. And it goes back to the idea that when you step into this, use the tools that you have available. Copilot is made available for us at Washington State University, so it's a tool that hopefully all the faculty I work with have at their disposal, and they don't have to go and pay for a license for another piece of software.

Craig

Yeah. And I used ChatGPT because that was what was handy, but any of the models should handle that sort of thing: Claude or Gemini, I like Gemini a lot, I'm guessing Llama could do it, Copilot, whatever. I think that's a good message. Just use whatever you've got. I want to throw out another idea, something that's just a pain for me, and that's exams. I hate them, especially since I teach big classes, so they kind of have to be multiple choice. And it's hard to write a good multiple choice question that actually gets at learning, so it can take a lot of time. Despite test banks and that sort of thing, it takes quite a bit of time. And I think ChatGPT, Copilot, whatever, can really help with that a lot, again going back to the idea that it can churn out a lot of possibilities really quickly. I've also used it to critique exams, where I'll give it my exam or a bank of questions and ask if any are unclear, or if any have duplicate correct answers. I'll ask it to rate the level: is this easy, medium, or hard? So for that kind of thing, which we all have to do from time to time, AI can save you a lot of time. Now, you still need to vet everything. You can't just say, hey, make my test up, and pop it into Word and print it out. You need to make sure it's suitable and correct. But it's really great at that.

Rob

Yeah. And that reminds me of what's on my list of things I want to accomplish this summer, and we can talk about this more after I've done it. I've been reading about people who have used ChatGPT to grade oral exams, where students will speak and record answers, and then it will process through and give grades with rubrics and feedback. And I want to play with that experiment to see how well that works, because I have some skepticism that it'll work well and work properly. But it's intriguing, because if you can express orally what you've learned, in some situations that might be a better expression of your understanding and your knowledge. So I think there are some fun things to play with in this space, and I want to start pressing into that a little bit and seeing what it can do.

Craig

Yeah, I designed a little GPT prompt, I don't remember exactly how I did it, to allow my doctoral students to get feedback on their synthesis papers. So every week they were writing a synthesis paper that looked at common elements and points of divergence and that sort of thing across the group of readings for that week. And it's really harder to do that than it sounds. And so I had this prompt that they could use to get feedback before they sent the paper to me. My ulterior motive was that if the papers are really good, they're really easy to grade. The bad ones are harder to grade. But they might go through two or three iterations, and they're getting feedback all the time, and it's improving not just that paper, but the next one as well. So it's really good, because it doesn't get tired, it doesn't get frustrated, it's available 24/7. So it's pretty fantastic. All right, anything else on next quarter? Sorry, next term, next semester for the 90% of the world that's out there on semesters. All right, let's talk about prompting. Want to talk about prompting? So, Rob, have you ever tried meta prompting? Meta prompting, that's not... it has nothing to do with Facebook.

Rob

Yeah, I actually did a little bit, and it goes back to the idea I shared about what I want to do with oral exams. To start the process, I actually asked ChatGPT to write for me the prompts I would need to use if I wanted to create this structure that was just in my head. And then I took that and fed it into ChatGPT, and got something that seemed like a reasonable path forward for how I would go about the summer project of how we execute oral exams. And it was very good. It mentioned things that I wouldn't have thought of to put in that prompt, and it was super helpful.

Craig

Yeah. So the idea of meta prompting is you ask AI to write the prompt for you, instead of you focusing on what you need to include in the prompt and how you need to structure it. It's almost like vibe coding. You've heard that term, Rob?

Rob

I have, I have, yeah.

Craig

So that's just coding by... instead of writing the computer code the way we normally would, you describe what it is you're trying to accomplish to the AI tool, and it actually writes the computer code. I think it's still up in the air how good that is, and it's certainly got its limitations, but it's the same sort of idea. Who wants to be a prompt engineer? Most of us don't want to be prompt engineers. So we do this kind of vibe prompting. We just say, here's what I want, write the prompt. And I use this pretty heavily for deep research reports. The deep research reports in ChatGPT, at least, if you're not familiar with them, are where it will actually go out and create a well-referenced, in-depth report on just about any topic you want. Mine average 30 pages, so these are extensive documents. If you're on a paid plan, one of the $20 or $30 plans, you get 10 per month, which is not a lot. So you don't want to waste them. You want to make sure that you get what you want on the first try. So my routine is, I will get it in my head what I want the report on. Let's say I want to do a deep research report on how business schools in the United States are using generative AI to enhance student learning, or something. I put that in and ask for a prompt, might do a couple of iterations to refine that prompt, but then ChatGPT writes the ChatGPT prompt for me. Then I literally just copy it, paste it into the chat window, and ask for the deep research report, which is just a button you click, and I'm good to go. And that really hasn't failed me yet. It's worked very, very well. So for those of you out there listening who haven't tried that yet, you absolutely ought to, even if it's simpler than what I'm describing, because what it does in a subtle way is make you better at prompting just by seeing the examples.
It's like, oh, I never thought of that. The one I would have never thought of is: what citation format do you want? Not that that really matters much, but when I would put these meta prompts in deep research, it would ask clarifying questions, and it kept asking me, do you want academic sources or non-academic sources? Well, I always want academic sources, and I always want it to be global, or at least so far I've always wanted it to be global. So now I put that in the prompt to create the prompt, and I don't have to go through that extra step of waiting for deep research to ask me, or, if it doesn't ask me, having it go off and do something limited to the US when I wanted it global. So it's just a nice way of almost stealth learning how to improve your prompts. It's worth doing.

Rob

I think there's a lot changing in these various tools as they come out. So getting in the habit of using the tool itself to help you be better at interacting with the tool is a nice skill set to have. That way, as the world of these tools continues to change, your skills will evolve along with it. And if something that was working well six months ago stops working, you'll have that in your repertoire of, how do I build myself up to be able to accomplish what I was trying to accomplish.

Craig

Yeah, getting a tool to help you with a tool. Meta is such a great word. I love that word. It's meta. It's very meta. All right.

Rob

Listen to you, Craig.

Craig

Speaking of prompting, I want to talk about one more thing, and that's open versus structured prompts, which is a really simple idea, but I think a lot of people don't really understand it. An open prompt is kind of a vague prompt. So it's, you know, here's an email I'm sending, is the tone friendly? Or, what do you think? And just let it go. A lot of times it'll zone in on exactly what you want, but sometimes it doesn't. And we can compare that to structured prompts, which is the kind of prompt we were just talking about, where the meta prompt gives you the big prompt, and the big prompt is a highly structured prompt where you're very detailed on the context, exactly what you want, what format, whatever you can think to include. They both have their places. I think we've talked about prompt engineering before. These open prompts are the opposite of prompt engineering. Most of my prompts, I've mentioned this before on the podcast, are: what do you think? I'll put something in, I'll say, what do you think? And I do that for a couple of reasons. One is because it's usually efficient. It kind of gets at what I want to get at. The other is that sometimes it's interesting to see where AI is going to go. We get focused in on exactly what we want, and we lose a little bit of the possibility space. We constrict it a little too early. So if you're finding that you're getting kind of boring and mundane results from AI, maybe try loosening up your prompts and making them more open. Now, if it's something where you want to be precise and you know exactly what you want, especially if you're going to do it over and over again, it's worthwhile to have a highly structured prompt. Like if you're going to use AI to create all of those activities for, what do you have, 12 or 14, no, probably 20-something classes, right?

Rob

Yeah, we have 15 weeks in the semester and then the final exam, so that would be potentially 30 class sessions. But by the time you take the exams out of that, we're in the 20-ish range or so.

Craig

Call it sessions. Yeah, 20, 25. Well, you're going to run that same prompt 20 or 25 times. And so if you always have to go in and tweak it a little bit, and tweak it a little bit, and it's three or four rounds to get where you want, that's not very efficient. Rob, you would probably want to take some time that first time through, nail down that prompt pretty tightly, and then you just kind of copy and paste, upload different slides, maybe change the parameters a little bit depending upon the class, but then you can crank through them. I know some people advocate for really structured prompts. Some people like the really open prompts. In reality, what you want to do depends on what it is you're trying to accomplish. So I think if people haven't tried both, you should try both. And if you don't know how to structure the prompts, we come back to meta prompting, and then, like I said, you just copy and paste and crank through them.

Rob

Well, and what you're alluding to as well, Craig, is the idea of keeping track of the prompts that you've used, a prompt library, as I've heard it called, having some way of keeping track of these things. I think what people will find is that a lot of times they come back to a variation of a prompt they've used in the past. So you find those that are working well, and you can just tweak them. You don't have to put a lot of cognitive effort into, oh yeah, how do I write that prompt? What were the pieces of that prompt that are necessary for me to get good results? So saving those, editing those, and reusing them is a great way to create efficiencies and consistency in how you're going to get, hopefully, good results that are helpful.

Craig

And it doesn't have to be anything fancy. You can put them in a Google Doc or Notepad or Notes or whatever it is that you use. You don't have to go out and find some software for it. A little bit of a pro tip that I want to throw in, actually, two pro tips. One is, I've gotten in the habit, if I'm storing a prompt or even the results of a prompt, of linking back to the session. In ChatGPT, and I think they all have something similar to this, you can click a little up arrow, I think in the right-hand corner, and you can share the chat, and one of the options will be to copy the link. Just copy that link and put it in, I use Apple Notes, put it in Apple Notes, and you've got it. Because if you start to use AI a lot, you're going to have a boatload of chat sessions. They finally added a search function, at least in ChatGPT, and I think Claude may have one too, so it's a little bit easier to go back and find what you want. But I've spent more time than I care to admit going back and trying to find a chat session from two weeks ago. You can also give your chat sessions better names than I give them. If it's just the default name, a lot of times that's not the most descriptive thing. The second pro tip is, if you've spent some time refining something in a session, when you're done, ask AI to write a prompt that will get you there more quickly. So if you've gone through and gotten your activities and, okay, finally, this is what I want, before you get out of that chat session, say, hey, can you write me a structured prompt that I can use when I want to do this again? It'll probably get you 80, 90% of the way there, but that could really save you a lot of time.

Rob

Perfect.

Craig

You didn't seem very impressed by that, Rob.

Rob

No, it is impressive. And if you think about the fact that it can reflect on what you just accomplished and say, here would have been a better way to get there, that's a little bit telling of the power in some of these technologies.

Craig

Yep. And it won't even call you a big dope when you get there. It's not like, you're a big dope, here's what you should have done to start with.

Rob

Well, can I give it some context and tell it I want it to refer to me that way?

Craig

You could, you could. I'm going to take a deep breath, I'm going to filter, and we're just going to move on. That could have gone badly. All right, Rob, we're kind of up against our time. Any last words before we close this episode?

Rob

No, I'll just encourage people to take a step back this summer as you approach the break, whenever that occurs for you, and see if the freedom of not being in the classroom, if that's what you're lucky enough not to have to spend time doing this summer, gives you the ability to think maybe a little more creatively about how you're doing things. Press into that and see what you can do to prepare yourself for next fall.

Craig

Great, great. And remember, you don't have to do everything at once. Just pick a couple of things, try them, learn. You'll start to do more over time. That's all you have to do. This really does not have to be hard. It really isn't. All right. Well, thank you very much. For all things AI Goes to College, you can go to aigoestocollege.com, and you can follow us on whatever your favorite podcast app is. And remember, if you need help getting your faculty to be more engaged with AI, or if you need help developing a policy, whatever it might be, I would encourage you to get in touch with us. It's Rob... Robert? Or Rob?

Rob

Rob Crossler.

Craig

Rob, crossler@aigoestocollege.com, or craig@aigoestocollege.com, and that's C-R-A-I-G. You probably know how to spell Rob. All right, thanks a lot, and we will see you all next time. Thank you.