In this episode, Craig has a mini-rant about misleading click-bait headlines, discusses two recent generative AI surveys, gives the rundown on Google's rebrand from Bard to Gemini, covers Perplexity.ai, and shares a modest experiment in redesigning an assignment to prevent generative AI academic dishonesty (which is a fancy way to say cheating).
More details are available at https://www.aigoestocollege.com/p/newsletter/, where you can subscribe to the AI Goes to College newsletter.
Contact Craig at https://www.aigoestocollege.com/ or craig@EthicalAIUse.com
--- Transcript ---
Craig [00:00:10]: Welcome to episode number 2 of AI Goes to College, the podcast that helps higher ed professionals try to figure out what's going on with generative AI. I'm your host, doctor Craig Van Slyke. So this week, I give you a mini rant. It's not a full rant, but a mini rant about misleading headlines. I talk about Google's release of a new model and its big rebrand from Bard to Gemini. My favorite part is gonna be when I talk about Perplexity.ai, which is generating a lot of interest right now, and I think it's tailor made for higher ed, even though I don't think that they're restricting the audience to higher ed. And I share some promising results from a little experiment I did in redesigning an assignment. I'm gonna hit the highlights in this episode of the podcast. But if you want the full details, go to aigoestocollege.com and click on the newsletter link and subscribe to my newsletter. A lot more details, screenshots, that sort of thing there.
Craig [00:01:09]: So here's my rant. Cengage, and if you're in higher ed, you know who Cengage is, they call themselves course material publishers, just released its 2023 Digital Learning Pulse Survey. As far as I can tell, this is the 1st time the survey gathered data about AI. The results are pretty interesting. It says only 23% of faculty at 4 year schools thought that their institutions were prepared for AI related changes, and that number was only 16% for faculty at 2 year schools. 41% of faculty across the 2 different types of institutions thought that generative AI would bring considerable or massive amounts of change to their institutions. What bothers me about this survey is really not the survey itself, but how it's being reported. The headline of the article from which I learned about this survey read, "Survey reveals only 16% of faculty is ready for Gen AI in higher ed," which is not at all what the survey was about.
Craig [00:02:22]: The survey, at least the part of it I'm talking about, asked 2 generative AI related questions: Do you think your institution is prepared for AI related changes? And how much will AI tools change your institution over the next 5 years? So first of all, that really isn't specific to generative AI, although I think that's what most people would interpret AI as. The title of the article that led me to the survey said that faculty aren't ready. Well, that's not what the survey asked about. It didn't ask if the faculty were ready, although that would have been a good thing to ask. It asked if they thought their institutions were ready. So I want to caution all of you to do something you already know you should be doing.
Craig [00:03:09]: Read these click headlines, and there are a lot of them. Read the articles with a critical eye. If it's something that's important, if it's something that you're going to try to rely on To make any sort of a decision or to form your attitudes, take the time to look at the underlying data. Don't just look at how that particular author is putting the data. Look at the data yourself. All of that being said, I think we're probably not especially well prepared collectively for generative AI, And that's not a big surprise. It's still relatively new, and it's changing very rapidly. So we'll see.
Craig [00:03:48]: Speaking of changes, Google Bard is now Google Gemini, and it's not just a rebrand. Google also, as part of the rebrand, announced that they have some new models. With Gemini, formerly Bard, which you can find at gemini.google.com, there are 2 versions at the moment, Gemini and Gemini Advanced, and this is kind of the same as ChatGPT and ChatGPT Pro. The nomenclature is a little bit confusing. Gemini is a family of models. Ultra is the big dog high performance model. Pro is kind of the regular model, and Nano is a light version optimized for efficiency, which I think signals that Google is gonna make a push into AI on mobile devices.
Craig [00:04:36]: I was pretty confused about the names and what models there were and that sort of thing. So I asked Gemini to explain it to me. The details of that conversation are in the newsletter, which is available at aigoestocollege.com/newsletter. Gemini is kind of like GPT-3.5. It's fine for most things. If Gemini isn't up to the task, try Gemini Advanced, which is kind of like GPT-4. So far, I've been pretty happy with my use of Gemini Advanced. It did a good job of helping me unravel the various names and models related to Gemini, and I've played with it for some course related tasks, and it's performed pretty well.
Craig [00:05:17]: I'm not sure I'd give up ChatGPT Pro for Gemini Advanced, but it's a nice option, and I'm playing around with all of them. So there's no big surprise. You know, your experiences may vary, but I would suggest that you try it out for yourself. If you do, I'd love to hear your impressions. You can send those to me at craig, that's craig@ethicalaiuse.com. There was another generative AI education survey that was out in the news recently. A higher ed think tank, and I don't even know what that means, called HEPI released a policy note that included some results from a survey of 1250 undergraduate students in the UK. According to that survey, 53% of students used generative AI to help with their studies, but only 5% said that they were likely to use AI to cheat.
Craig [00:06:13]: I don't doubt that the statistics accurately reflected the survey responses, but I'm pretty skeptical about both of these numbers. 53% seems pretty high for students that have actually used generative AI to help them with their studies, and 5% seems pretty low for those who said that they might use AI to cheat, but I don't know. I think there are a lot more students that are either using or going to use generative AI kind of at the edges of ethical use. So I still think that the uptake of generative AI among students is lower than some of us might think, especially if we consider regular use. So they may have played around with it, but I just don't think many students are using it regularly yet. One of the reasons I like this particular little article, which is linked in the newsletter, is that it did discuss the digital divide problem. The digital divide is real, and it has real consequences for a lot of aspects of society, including higher ed. We need to keep chipping away at the digital divide if we truly want a just society.
Craig [00:07:27]: And generative AI is just going to widen The digital divide. More details in the newsletter, which you can can access at AI goes to college.com/newsletter. It feels like it ought to be a drinking game. How many times will they say AI Goes to College .com/newsletter? So let's get to the resource of the week. There's been a lot of online chatter about Perplexity dot AI. The gist of all of this talk is that Perplexity is becoming kind of a go to generative AI tool when you wanna uncover sources. There's a lot of hype that says this is going to Be the new Google, and I'm not so sure about that, but it is a very useful tool. Exactly what it is is a little unclear at first.
Craig [00:08:16]: 1st, I'm gonna read you verbatim what the about page says. Perplexity was founded on the belief that searching for information should be a straightforward, efficient Variance free from the influence of advertising driven models. We exist because there's a clear demand for a platform that cuts through the noise of information overload delivering precise, user focused answers in an era where time is a premium. I couldn't argue with any of that, but I don't know what it means. Their overview page is a little bit better, And and it talks about some of the use cases for Perplexity Point AI, answering questions, exploring topics in-depth, Organizing your library and interacting with your data. I can personally attest that perplexity is Pretty good with the first 3, but I haven't really tried it with my own files yet. Here's a problem that we have with search. If you go into Google or some other search engine, you want information, but a search engine doesn't really give you the information you want.
Craig [00:09:21]: It gives you a list of websites that may or may not include somewhere the information that you want. Perplexity is much more about giving you the actual information that you're trying to to get and telling you where it got that information from. And so that's a fundamental difference, and I think it could ultimately reshape how we search for information on the web, and I think that's a good thing. There are a number of things that set perplexity apart. It's got a copilot mode that gives you a guided search experience. That can be really helpful. So what it does is it will ask you first, do you wanna focus on particular types of Resources. So right now, it's got 6 different categories.
Craig [00:10:09]: Well, 5 and then an all. So it's got all where you search across the entire Internet, Academic, and this is a big one, where it searches only in published academic papers. Writing doesn't really search the web. It just helps you. It'll generate text or chat, without searching the web. Wolfram Alpha is a computational knowledge engine. It'll search YouTube, and it'll also search Reddit, which I think is pretty interesting. So you can go broad or you can go really narrow.
Craig [00:10:41]: Another thing that it does is it will ask clarifying questions when it feels like it's necessary. Feels like. That's weird. When, it somehow in its large language model brain thinks that it's necessary. I give an example in the newsletter where I'd say I want to explain generative AI to a nonexpert audience. The audience will be intelligent but won't have any Background in AI or computer science, what topics would you cover? And instead of just giving me an answer, now I'm using, Perplexity's Copilot here. It says, what is the main purpose of your explanation? Basic understanding, application to risks and benefits. And so you can specify basic understanding or applications of generative AI or what are the risks or benefits or you can choose all of those or You can provide some other sort of clarifying information.
Craig [00:11:35]: That's really useful, so it doesn't take you down as many kind of unproductive paths. Perplexity's Copilot will even kinda show you the steps it took in generating your response or its response rather. So you can take a look at that in the, newsletter. I know I'm saying that a lot, but there's a nice little screenshot that'll give you a better idea of what I'm talking about there. You can also look at the underlying sources that perplexity used to generate its responses. So for example, the little, explain generative AI prompt that I gave it. It came up with 24 different sources along with an answer. So I can dig into any one of those 24 sources to see exactly what it was talking about.
Craig [00:12:25]: And when perplexity gives me its answer, it Gives footnotes, little numbers that refer back to those sources so you can dig in. You can also do the normal Chat thing like asking follow-up questions. So it it's really quite good. Here's one of my favorite, favorite, favorite Features of perplexity, it allows you to create what it calls collections. So collections allows you to group together different conversations, perplexity calls, conversations, threads. And so one one of my biggest frustrations with Chat gbt with the interface is I'll have some conversation with it and then wanna go back to that topic a couple of weeks later, and I Can't find that conversation because I've had 200 other conversations in the meantime. A little pro tip, you can search, your conversations in the mobile app, or as far as I know, you can't do it on the web interface yet. But you can always go in, search whatever keyword you're gonna search on, find the conversation, and then Add to it, and it'll pop up on the top of your list on the website.
Craig [00:13:37]: So I I know that was kind of a muddled, description. But, if it's unclear, just email me, craig@ethicalaiuse.com. So these collections can be really, really useful. I was working on something for my dean recently. I was using perplexity. I put all of those conversations in a collection. So to me, Perplexity dot AI is one of the more interesting tools that I've seen come out recently. If you haven't checked it out, you should.
Craig [00:14:07]: They have a free tier that you can play around with. And I And I'd used it and then almost immediately paid for an annual pro subscription. So, really, I encourage you to check it out. Alright. So here's my little experiment. So I'm teaching principles of information systems this term, And I include some pre class online assignments. These are simple little things that all I'm trying to do is get them to engage with the material little bit before class. They're easy, and I'm very lenient in the grading.
Craig [00:14:41]: Basically, if you put any effort into it at all, you get full credit as long as it's not late. But, unsurprisingly, I noticed that some submissions looked suspiciously like they were generated with generative AI. 1st time, I let it slide and just, commented on it in class. The 2nd time, I required students to resubmit. I just Said I'm giving you a 0 for now. Put this in your own words, and I'll give you credit. I'm teaching the same class in the upcoming spring term. We're on the quarter system, by the way.
Craig [00:15:14]: And so I started thinking about how to modify these assignments to keep students from just copying and pasting in a generative AI response. I know this is gonna sound incredibly lazy of me, but I don't wanna be spending my entire quarter dealing with academic honesty reports. I'd rather just prevent the problems in the 1st place, and we're gonna have to do this kind of thing. We're gonna have to rethink how we We do evaluations, how we assess learning, how we create our class activities. So I decided, I'm gonna try to modify an upcoming assignment. So this is a little activity I've used for years, And it's very simple. The original assignment was compare and contrast supply chain management systems and customer relationship management systems. Give 3 ways they're similar and 3 ways they're different.
Craig [00:16:05]: Like I said, these are kinda little lame o activities, but but you you can see where I'm going with this. I want students to kind of have to look at what the 2 2 different types of enterprise systems that we're talking about are. You know, they'll start to get some understanding of kind of what they're all about, so I decided to change the assignment. And and I'm gonna give you an overview of what I did, but all the details are in the newsletter. So I basically said, Hey. I'm gonna give you a task, and then I'm gonna give you the answer that was given by generative AI. You're then going to compare that answer to the information in the textbook and briefly describe how the information from generative AI And the textbook are similar and how they're different. And then I go through and say, alright.
Craig [00:16:56]: Here's what I put into, generative AI, here's what it spit back. And then the students had to to kind of do a little bit of work. I I was Pretty happy with the result. Some students absolutely missed what I was asking them to do, but that's okay because I'm not sure it was entirely clear. But the ones that got what I wanted them to do did a pretty good job of going back into the textbook and kind of seeing what the textbook said and then seeing what The answer, comparing that to the answer generative AI gave. Some students even went so far as to say, like, on page 247 of the textbook, it said this, And generative AI said that. And so I was pretty happy with the results considering I had about 15 minutes into revising the assignment. So I'm gonna do more of these.
Craig [00:17:47]: As I said, I'm teaching the class again in the spring, so I'm gonna spend part of the break Redoing some of my assignments, my online activities to make it to where they have they can't just copy and paste the question into generative AI. And I'll report on those experiments, as I go through them. Alright. That's all I have for you today. I'm out of breath, and you're probably tired of listening. So I will talk to you next time. Thanks for listening to AI Goes to College. If you found this episode useful, you'll love the AI Goes to College newsletter.
Craig [00:18:28]: Each edition brings you useful tips, news, and insights that you can use to help you figure out what in the world is going on with generative AI and how it's affecting higher ed. Just go to AI goes to college.com to sign up. I won't try to sell you anything, and I won't spam you or share your information with anybody else. As an incentive for subscribing, I'll send you the getting started with generative AI guide. Even if you're an expert with AI, You'll find the guide useful for helping your less knowledgeable colleagues.