![AI Goes to College](https://artwork.captivate.fm/1190fc69-7089-4a75-bb44-41162673cc56/aigtc-square-cover-from-getcovers.jpg)
In this wide-ranging discussion, Craig Van Slyke and Robert E. Crossler explore recent AI developments and tackle the fundamental challenges facing higher education in an AI-enhanced world. They begin by examining GPT Tasks, highlighting practical applications like automated news summaries and scheduled tasks, while sharing personal experiments that demonstrate the importance of playful exploration with new AI tools.
The conversation then turns to Gemini's new fact-checking features, with important cautions about source verification and the need to balance convenience with critical evaluation of AI-generated content.
The hosts have an engaging discussion about the challenge of "transactional education" - where learning has become a points-for-grades exchange - and explore alternative approaches like mastery-based learning and European assessment models. They discuss concrete strategies for moving beyond traditional grading schemes, including reducing assignment volume and focusing on process over outcomes.
The episode concludes with an announcement of an upcoming repository for AI-enhanced teaching activities and a call for educators across disciplines to share their innovative approaches.
Outline
GPT Tasks and Functionalities
Exploration of New AI Tools
Comparison of Search Tools
Privacy and Availability of AI Tools
DeepSeek: A New AI Model
Open Source and Computational Needs
Privacy and Intellectual Property Concerns
Writing with AI Tools
Transactional Education Model
Proposed Repository for Active Learning Activities
Conclusion and Call for Interaction
[00:00] Introduction and GPT Tasks discussion
[15:45] Gemini's new features and source verification
[25:20] Writing process and AI tools
[35:10] Transactional education challenges
[45:00] Announcement of teaching activity repository
Mentioned in this episode:
AI Goes to College Newsletter
Welcome to AI Goes to College, the podcast that helps you figure out what in the world is going on with generative AI in higher ed. We've got several things we want to talk to you about today, starting with some functionalities that have been added to some of our favorite tools.
So the first one is GPT Tasks. I've been saying for a long time that a big part of the future of AI is going to be in AI agents that are just out doing things for us.
Rob, you're in Switzerland right now.
I'm sure when you were putting that trip together, it would have been awesome if you had some AI travel planner that would go through and plan all of your adventures, or at least help you with that and booking flights and that sort of thing. But there are thousands of tasks like that that we do every day.
An email agent that can sort through your email and tell you which ones to answer now, which ones to pass on, maybe it can even answer some for you. And we're slowly getting there. There was a pretty major development with GPT tasks.
And basically what GPT Tasks lets you do is set up a prompt that will run on a schedule. So I'll give you two examples. One is a task that runs, I think it's three times a week: Monday, Wednesday, and Friday.
I get an email that gives me the latest news on AI and higher ed. And it's kind of like a Google alert, but you can make it much more tailored than you can with a Google alert. I was pretty excited about that.
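To make that scheduled-digest idea concrete, here's a rough sketch of how you might replicate something similar outside of ChatGPT, using a cron job and the OpenAI Python SDK. The cron schedule string, the model name, and the prompt wording are all illustrative assumptions; GPT Tasks itself handles the scheduling and delivery inside ChatGPT.

```python
# A sketch of a "GPT Task"-style recurring digest. A cron entry such as
# "0 7 * * 1,3,5" (7 a.m. Mon/Wed/Fri) could run this script on a schedule.

def build_digest_prompt(topics: list[str], audience: str) -> str:
    """Compose the recurring prompt the scheduled job would send each run."""
    topic_list = "; ".join(topics)
    return (
        f"Summarize this week's most important news on: {topic_list}. "
        f"For each story, give a two-sentence summary and explain why it "
        f"matters to {audience}. Name the outlet for each item."
    )

prompt = build_digest_prompt(["generative AI", "higher education"], "university faculty")
print(prompt)

# Hypothetical API call (requires an API key, so it is sketched, not run):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",  # assumed model name
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)  # then email this text to yourself
```

The point of the sketch is just that a "task" is nothing exotic: a fixed prompt plus a schedule plus a delivery mechanism.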
But you know what my favorite task I set up was? Any guesses?
What was that?
Rob, you'll appreciate this. My daily dad joke. So every morning I get an email that gives me not exactly the dad joke, but a little teaser about the dad joke.
I go into ChatGPT and there's a new dad joke in that conversation. So the one from a couple of days ago is one of my favorites. Are you ready?
Yeah. Yeah, go for it.
Why don't seagulls ever fly over the bay?
I don't know, Craig. Why?
Then they'd be bay gulls. Yeah, it's a silly little thing, but it's kind of fun. It gives me a laugh, or at least a groan, every morning.
But it shows a small part of the potential of what we can do with agents.
Keep an eye on this area, because it's going to explode once it gets going, once we have the standards in place so agents from different vendors can interact, that sort of thing.
So, Craig, I want to ask you a question, but before I do, I want to make a statement. What's great about your dad joke illustration is that a new tool came out, and you found a fun way to use it, to incorporate it into something that brings you joy in your life. It didn't have any real work purpose. It didn't have any work meaning.
It didn't have anything where you're going to say, oh, wow, this is going to make me way more efficient in life. You just thought, here's this new tool; I'm curious whether it would give me a different dad joke every day. And you went out and just tried it.
And I think as we see these various different tools come out, I go back to, that's something that everyone should be doing.
Whether you're a student looking at some of these tools for the first time, whether you're in a staff role or a faculty role, just do something and see what you get out of it. And as you start using it for a little bit, I'm very confident you'll find ways to use it for more and more things.
I don't know if you know this or not, but our mutual friend and colleague France Bélanger at Virginia Tech and I wrote a couple of papers on something called application play back in the early 2000s, where basically we said that one of the best ways to learn a new technology tool is to play around with it. And there's actually a psychological basis for it, a cognitive basis: you're trying to learn something new.
You've only got so much brain capacity to do it.
If you try to learn the tool and a really complicated task at the same time, you get overloaded and frustrated. If you pick something silly, like "send me a daily dad joke," that's pretty simple and straightforward, and you don't have to put much brain power into it. You've got a lot of capacity left to figure out how to actually do the thing you want to do. So, yes, I think you're right on.
Don't make this stuff harder than it has to be, especially when you're first trying to get started.
Yeah.
And so I wanted to follow up on your first example, where it's sending you a summary of generative AI news three times per week and you're able to narrow it down, get it focused.
Have you tried searching in a different medium, using Google tools or other sorts of tools, to compare what ChatGPT is getting for you with what the manual process would be?
You know, I haven't, but that's worth trying because I do have a Google Alert set up to do kind of the same thing.
But what's so nice about ChatGPT, and I neglected to mention this earlier, is that it doesn't just give me the article. It gives me summaries and explains why each item might matter to higher ed.
But I haven't gone through and vetted it, partially because there's a lot of convergence. You know, you'll see the same story in six different outlets. And so I didn't see the need to do it.
But if you were going to use it for something where an omission would be really bad, you really should test it thoroughly. You know, if I miss something, eh, you know, who cares?
Well, but at some level, what I hear you saying is that you pay enough attention to these sorts of things, and have enough expertise, to have confidence that what you're seeing is accurate. And I think that's important too: you have to assess where you are, what your level of expertise in the information is.
So you can make some assessment of how believable, how reliable, and how biased that information is before letting it influence what you might actually do with it.
Right, right. And I think it's really important to add that I don't take that text that ChatGPT generates and put it out anywhere.
It's just for me to look and say, okay, oh, I want to look at that article. This is interesting, that sort of thing. So it's just directing me.
It's not something I would take and copy and paste and put into a blog post or something like that. So you're absolutely right. And I'm not sure if everybody has access to this yet or if it's just ChatGPT plus users and Pro users.
You don't have Pro, do you? $200 a month.
No, I don't.
I was reading about this just before we came on, and it is only available to Plus, Pro, and Teams users, and it's in beta right now, which means it might not even be available to everybody in those categories.
So if you don't have it, you probably will eventually, especially if you're paying for ChatGPT, but there is a chance it's not quite available for you yet.
And, at least for me, it was available on different platforms at different times.
I don't remember the sequence, because it wasn't really important enough to remember, but I had it on my phone and then didn't have it on the macOS app or on the web. And the next time I checked, it was on the macOS app on one of my computers but not on the other.
So if you don't get it right away, chill. OpenAI does this all the time; they do these gradual rollouts. If you don't have it, you probably will at some point. But keep an eye on agents.
That's really the big story here. Agents are going to be the way a lot of this plays out. All right, anything else on that one?
No.
Nope. Okay, this next one is very interesting. Gemini. And Gemini is rapidly becoming one of my favorite tools.
I think I've said on the podcast before, I use it more frequently than any other AI tool. It's just good for all the quick things that I want to do. They've added a couple of features recently, one of which is pretty under the radar.
It will now double check its responses. And so if you get Gemini to do something.
Down at the very bottom of your output there are the thumbs up, thumbs down, regenerate, and share buttons, and then a three-dot "more" menu. Hopefully you can picture that in your heads, because I know we're on audio.
One of the choices under "more" is to double-check the response. What it does is double check the response and then give you little down arrows and highlights on different sections.
Like if it's something that's factual, it seems to check it.
And if you click on the little down arrow, it will show you the source for whatever it is, or a source for whatever it is that statement is trying to express. And I think that's pretty useful. They're slowly reducing hallucinations and giving you ways to more easily check for hallucinations.
Hallucinations are just when AI just makes stuff up, basically. I think that's going to be really good for those situations where it's important that you get things right.
You can click on the source and go see what the source actually said. So it's kind of like what Perplexity does, on a smaller scale, but you can use it within Gemini, which is really nice.
What I like about that, Craig, is it gives the user, and in this case, I'm going to say the student, because I encourage students to do this whenever they utilize generative AI technologies to help with anything they've done in the classroom, the ability to validate and authenticate that the results are accurate because of the hallucinations that we've talked about. Because sometimes these generative AI tools will just make things up.
Well, if I can very quickly and easily as a student, go and see what that source material is, read what it had to say, and say, okay, I can see where this came from. Now, you can do a couple of things. One, you can say, is that summary a match to what I read? So is it believable?
But you can also look at that source document and say, where did that come from? Right. If it was, you know, a National Public Radio, an NPR News article, okay, it was news written by a journalist; I feel pretty good about it.
If it was some, you know, let's say, out there blog post of somebody who's just very radical in one direction or another, you might look at that and discount what the generative AI technology gave you as something that's pushing out perhaps a belief or a leaning that you may not be confident in saying, yeah, that's actually the way things happened, or that's what's going on.
So it very quickly allows someone who may not be an expert in that space to go in and say, yeah, it's believable, and I believe where this data is coming from, and I have confidence in what I'm putting out there.
Yeah, that is so important to keep in mind. It will show you its source, but that source could be complete garbage. So you also need to vet the source, especially if it's something important.
I mean, I really believe that you've got to balance the amount of effort you put into checking with the importance of checking. Who starred in the third James Bond movie? If I get that wrong, who cares? But other things are important.
And I'm just looking over at a recent Gemini chat where I asked it to check its work. One of the sources was a website called Verywell Mind, which I think is a legit website, but I have no idea who's behind it.
Another one was from the psychology department at Penn State, and another one was from PubMed. Okay. If it's coming from the psych department at Penn State, you know, that's probably right.
And I might even want to look there to see what more they have to say, which I think is a hidden benefit of this approach because you can dig deeper. You know, it might send you on a tangent that you hadn't thought about before, make you think about something in a different way.
So I think there's a lot of benefit to that for any kind of deep, deep work that you want to do. Have you tried it yet?
Yeah, I've tried it. I tried it before I came to Europe and it worked. I was loving it as an example for students.
And interestingly, when I was trying it earlier today sitting here in Switzerland, it wouldn't work. And some of the other generative AI tools I was trying to play with say you're not allowed to do that in your location.
I do wonder if some of these features might be US-centric. Because of European Union rules, other country-based regulations, or even just decisions by these companies not to deploy all the features everywhere at the exact same time, we may not see exactly this functionality wherever listeners are located.
That's going to be an issue moving forward, I think, but it always is with technology. China's been blocking Google for a long time. But that's a good point. I'm glad you tried it over there.
And now we can kind of give listeners a heads up. Speaking of China, the big news this week was DeepSeek. That's D-E-E-P-S-E-E-K.
I can't remember the name of the company that came out with it, but it's a Chinese company that created a large language model that rivals the best that Anthropic and OpenAI have to offer on these various benchmarks. But the cost to train the model was a small fraction of what it cost to train those Anthropic and OpenAI models.
And as I understand it, you can run it locally because it requires so much less in the way of resources. And it's open source, so anybody can use it. And Nvidia stock tanked and then came back up. So there's all kinds of stuff going on with it.
It's interesting. Have you had a chance to try it yet?
I haven't tried it and I'm not sure if I will.
Which gets into the privacy side of things, which is where I think we're going. As I look at the privacy side of this: every keystroke, every keystroke rhythm, all the data you put into it is sent back to China. And I know that with many Chinese companies, the government there has access to the companies' information and data.
And so I'm not comfortable using it. I've read a lot about it, and I think there's a lot to be excited about in what they've accomplished.
Some of it I think is overhyped, and I can speak to some of what I've seen on that. But I do think it's a step forward, and we're going to see results in other technologies because of what DeepSeek was able to do.
I played around with it through Poe. Poe.com gives you access to, I quit counting, but it's somewhere north of 30 different models now, in one interface.
And we've talked about Poe before, and I've written about it before. Poe has the DeepSeek models now. So I thought, well, I'm going to give it a try.
And the response was fine, but it wasn't, oh, I've got to quit using these others and start using DeepSeek. I do keep easing toward getting something that runs locally, though, and that might be a reason to run DeepSeek.
I don't know if you can stop it from sending data back. I don't know enough about it yet.
But one of the things I've read, Craig, is that the locally run versions of DeepSeek don't hit the metrics of the web-based version everyone's getting so excited about; it doesn't run as effectively and efficiently as a local model. So there is something about the resources that the servers at headquarters provide, and there's something interesting there.
I think what's really interesting though is the fact that it is open source.
And because of that, I believe we'll see this: when something is open source, somebody else can come along, see how it was done, make some incremental improvements, and do some different things.
So I really expect that within the next three to six months, if not sooner, we're going to see another open-source model come out from another company that continues to push the envelope on what's possible. And I think the running-it-locally part is only going to get better.
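As a concrete, and hedged, illustration of the running-it-locally idea: one common way people host open-source models like DeepSeek's distilled releases on their own machines is Ollama, which exposes a local HTTP API. The model tag `deepseek-r1:7b` and the default port below are assumptions for illustration; the appeal is that the prompt and response never leave your machine.

```python
# Minimal sketch of talking to a locally hosted model through Ollama's
# /api/generate endpoint. Only the payload construction runs here; the
# network call is commented out because it needs a running Ollama server.
import json

def build_ollama_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_ollama_request(
    "deepseek-r1:7b",  # assumed model tag; pick whatever you've pulled locally
    "Summarize the main IRB rules for storing survey data, in one paragraph.",
)
print(json.dumps(payload))

# Hypothetical local call (sketched only):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",  # Ollama's default local port
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Smaller distilled models run this way won't match the hosted versions, as noted above, but for many classroom and privacy-sensitive uses that trade-off may be worth it.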
Yeah, I think you're right. And Llama, Meta's model, is already open source, and I know there are a bunch of other ones.
Interestingly, DeepSeek built on top of Llama, one of the open-source models that catapulted it forward. So I can imagine that Facebook is looking at what DeepSeek did right now, and we're going to see some pretty drastic improvements in what Llama can do.
Everybody is. And I'm a little skeptical; we've already seen some shenanigans around benchmarks with OpenAI.
So I kind of want to wait and see on all of this, but I'm hoping it will push other AI companies to make some of their models open source. I mean, I'd love to see something where maybe the last-generation models get put out in open-source form.
I have no idea if that's feasible or not, but this is going to be important, because if you can run these models locally, you may be able to address some of the intellectual property and privacy issues that are critical to higher ed. So we'll have to see on that.
You know what I like about this whole open-source development as well? A lot of the big models we're seeing, from OpenAI, from Anthropic's Claude, and these different ones, all have the backing of people with very, very deep pockets.
But what DeepSeek did in China, because of their inability to access GPUs and those sorts of things, is come at this from a perspective that was a little bit more creative: how do we accomplish this with fewer computing resources?
And so when I bring this back to higher education, I think about students in the classroom and what could be possible if that compute power gets to the point where things can be done locally on the lab computers we have in a computer lab. I can imagine some incredible training opportunities for our students.
There's some very creative things that our students are going to be able to do because we've pushed down the amount of money basically that it takes to compute and operate and do these various different things. So I am really excited about what the future holds for opportunities for students to begin playing with the actual creation of this themselves.
Yeah, that's a great point. And the companies have already been developing these lighter models: Claude has Haiku, and there's o1-mini from OpenAI.
And so I think we're moving in that direction. But one of the things that I try to keep in mind is I was blown away by some of these earlier models.
In fact, a lot of what I do day to day really doesn't require these leading edge models to do what I'm doing. And I think that's true for a big chunk of us.
And so if these lighter models can come up to the capabilities of models from a year ago, that may be fine for a lot of what we do, if that makes any sense. But we'll see. All right, so we talked about privacy. I know that you're one of the world's leading privacy researchers.
Anything else you want to say about that?
I think the next thing related to that is intellectual property. Privacy is about me; intellectual property is the other side of things.
And this is where, when new companies come out and release new models, you have to ask yourself: to what level am I going to share intellectual property with them? And how do I know that I can trust them with that intellectual property?
I think this is a big concern for higher ed because we have a lot of intellectual property, whether it's the research that we're doing that is private, potentially we could be getting patents or different things on it that way, or access to student data, which is part of the intellectual property.
Is there something that sets us apart with how we're teaching our classes, understanding what's being done with that data, what's being done with that information, in a way to ensure that you're not giving away something that you should not be giving away?
This is a really fascinating topic to me because I think I'm a little bit divergent on what a lot of people are thinking. The kind of research that I do and the kind of research that a lot of social scientists do, nobody's ever going to pay for it.
You know, it doesn't have that kind of commercial value. I'm not developing a new way to build nanofibers or something like that.
The likelihood of somebody being able to put together the right prompt to get my novel research idea is pretty low. The things I've already written are out there in the world anyway, and I want people to use them.
I mean, part of the metrics we judge our success by is citation counts. So I kind of want people to use my stuff.
I understand why some people are really concerned about intellectual property and the use of AI, and I think it's something that needs to be addressed. I'm just not sure that people like me need to be as worried about it.
So there's a couple of places where I would disagree with that.
One is, once a paper is written, once it's published, you've given away the copyright, you've functionally given away the intellectual property to the journal. And then at that point, it's their decision on whether they're going to make that available to train.
So before that, if you want to make it available before it's published, there might be some interesting ways to get ahead of how long it takes to get journal articles published. But the other side, data, I put more on the data privacy side of things than intellectual property.
But if we're collecting data from people, we have to jump through IRB hoops, the Institutional Review Board, and there are rules about where we're going to store the data, what we're going to do with it, and how we're going to use it. It's our job as researchers to comply with the rules we agreed upon in the process of getting approval to collect, store, and use that data.
And if loading the data into these models, or doing these various things, isn't consistent with the rules I agreed to follow in conducting my research, I potentially might not be in compliance.
So again, I would caution people to make sure they understand the rules they have to play by. We might look at a survey and think, a person answered these 15 questions, it's not that sensitive, who really cares?
At some level, the one who cares is the IRB, because you've agreed to how you're going to use that data, how you're going to store it, and where it's going to be stored.
Yeah, and I want to be clear, I was not talking about feeding our data into AI.
I'm talking about AI grabbing our ideas, or, you know, refining some concept or approach with AI. That just doesn't worry me much.
I would not take, and nobody should really take data, certainly anything that's personally identifiable, and put it into an AI tool unless you're very sure it's going to comply with whatever regulations or agreements you have in place. So I was thinking of it more of an intellectual property thing.
I mean, you do need to understand your institution's rules, because whether it's student data or institutional data of some other sort, you need to be really careful using it with AI, just as you would be careful exposing that data to any other external source. I think the danger with AI is that it's so alluring that maybe we get carried away.
I think this shows up a lot in the writing process as well. So many of the tools plugged into browsers these days have AI features.
They want to help you write better, to do these various things. And one of the things I think is important as people think about writing is that writing and editing are two different things.
Where I think we bring value as human actors in the writing process is that we piece together thoughts in ways the machine isn't going to.
In that creative flow of writing, we express ideas that build upon each other, that put together things maybe nobody has thought of putting together before.
When you insert the machine in the middle of every piece of that process, what I think happens is an interruption of the flow of the human creation of ideas.
If the machine is creating sentence after sentence from prompts, I really wonder if that constant interruption keeps us from ever reaching what I think the literature calls flow, where you just start connecting things together in a very creative, human process.
Let's break this down, because you've covered a lot there, and there's a lot of nuance that isn't being discussed.
Let's imagine a continuum: you've got Hemingway, or Herbert Simon, the Nobel Prize winner and one of my favorite academics, on one end, and a routine email on the other end. You don't treat those the same, do you?
No.
Okay.
No. I think those are great, extreme examples.
So executives have had assistants writing their letters forever, for a long, long time. The assistant comes in and puts a bunch of letters on the desk for the boss to sign; the boss signs the letters and they go out.
Half the time they don't even read them, because it's just routine, especially if you trust your assistant. You know, getting Copilot to write routine emails for you, I know you do that.
Don't you do that quite a bit?
Yep, yep, absolutely.
And so when I was talking about writing there, it was more from the perspective of the deep thinking sort of writing, where it's about coming up with novel, new creative research ideas and putting arguments together around those. But yes, the trivial and the mundane. The tools are great for helping with that.
So you're kind of talking about writing as thinking. Okay. But I want us to dig into this a little bit, because I think people are too binary: never write with AI, or write with abandon with AI. And there's a lot in between.
If I'm talking to students, my big message is, if you're writing an essay or whatever, do not ever have AI write your first draft. That's where you go wrong. That's where it's AI's work that maybe you're tweaking instead of your work that AI is helping you refine.
And I think that's absolutely a critical difference. Even for those of us who write as part of our jobs. Write this deeper stuff. I want it to be me writing that now.
I absolutely want ChatGPT or Claude or Gemini, or my favorite, Lex (lex.page), to help me better communicate those ideas and maybe help me craft them. Because Rob doesn't want me calling him at three in the morning.
Although I guess that would be okay now, but not when you're on the West Coast, to figure these things out. I think AI can be great for that. But it's still my idea, it's my work; AI is just helping me. And I wrote a little article about this.
I think it came out last week in the AI Goes to College newsletter, which is available on Substack or at aigoestocollege.com. Basically my message was: use AI as kind of your copy editor. Do you remember Lou Grant? Lou Grant was this old, probably '70s TV show, and I think he was a news editor.
You know, the cigar-chomping, gruff type. Or maybe the editor on Spider-Man; that's more current. Is that still a thing? I don't know.
But, you know, use it to tear your work apart, to be that editor that makes your work tighter and better and clearer. Use it for that. Don't use it to do your first draft. It's got to be you that starts it, right?
And that goes back to that phrase I shared, Craig, which is to write with your heart and to edit with your head. So that writing with your heart, that first draft is coming from you, from what you truly believe, from what your heart is telling you to say.
And then when you have that written right, you've got your thoughts that you believe in on paper.
Then when you start editing with your head, there are some great tools that can help you be a better editor: make those ideas sound better, get tighter, bounce them off of somebody, make sure they make sense and that you've thought them through.
And that's really where AI shines in terms of writing: helping you tighten everything up and pointing out the gaps, the inconsistencies, the redundancies, that sort of thing.
And I have to say, I don't know if you've noticed this or not, but when I'm a reviewer or an editor for a journal paper now, the writing is so much better.
It used to be a real struggle for non-native English speakers to write, because the vast majority of journals are published in English, and I can't even imagine trying to write in a different language. So the writing was often a mess and could make things very hard to understand.
And I'm sure there were good papers that got rejected because a reviewer or an editor got frustrated because of the language problems. That has just dropped way off in my editorial and review work, which I think is great.
Yep. I see a lot fewer of the "please find a copy editor for this work" sort of comments in the review process that, five or ten years ago, showed up quite often.
Yeah. And to be clear, these people are not using AI to write. Look, I can tell, I can absolutely tell when somebody's using AI to write.
And that's not what's going on. They're just using Grammarly or something to help them get rid of the little errors. Which I think is a wonderful thing from my perspective.
Let's tackle one more big issue, and that's transactional education. We've talked a little bit about this before, but let me give you kind of the high-level view, if you like.
A lot of us have been thinking really hard about what we do about AI in higher ed, because there's no effective way to really block students from using it. The clever ones are going to use it and you're never going to know it.
What do we do? And I've come to the conclusion that the real problem is that education has become transactional.
You do these things and in exchange you get this grade. I think about my syllabi and mine's not unusual. You know, I've got this fairly detailed grading scheme.
You know, the tests are going to be this many points and the homework's going to be this many points and this is going to be this many points. And it almost looks like you're looking at a bill, you know, an invoice.
You add everything up and there's your price. And then we have rubrics. You know, rubrics can be useful things. I'm not really anti-rubric, but what could be more transactional? You know, you get the rubric in advance.
And that's what most of us do, we make our rubrics public. And okay, if I do this, you know, I get here on the rubric. And the whole thing is you do this thing and you get this other thing in return.
And that's just a transaction. The whole idea of, no, you should learn this because it's useful stuff to know. Forget about grades, forget about all that.
Don't think about this as a grade on a paper. Think about this as what you learn. And I wrote about this in AI Goes to College, the newsletter.
One of my brother's best friends in high school was a typical high school guy. Played football, fixed cars, chased girls, no interest in class.
And he gets to his senior year and it's like, oh, crap, I don't know what I'm going to do with my life. So he goes to a guidance counselor.
Guidance counselor looks at his academic record and pretty reasonably goes, well, your best bet is to learn how to fix air conditioners. And if you can't cut that, you can always hang drywall.
This guy is now a PhD chemist who owns his own research company that does chemical engineering for cancer drugs. But what he did is he started at junior college, that's what they called it back then.
He would take a class that he got an A in over again if he thought it was something he needed to really understand for down the road. And I remember him telling me this: "So I'm taking that class again." "What happened? Did you flunk it?" He said, "No, I got an A." "What?"
He said, "Yeah, I just didn't feel like I really understood this stuff well enough. And I'm going to need to know organic chemistry before I go on to this other chemistry," or whatever it was.
He was not taking a transactional view of education. And he's a brilliant guy now. I mean, he was then. He just hadn't brought it out.
So I don't know how we do anything about this, but I want to get your take on it. What do you think?
I struggle with this because I know exactly what you're saying. And I think this is the way our education system has been set up since kindergarten. From the very beginning of the school system, it really is.
You get this score on these tests, this score on these homeworks, and turn them in, and that's how you're going to earn the grade that you get. You know, I've done some things where I ask students at the beginning of class, you know, who wants to get an A this semester?
And on the first day of class, everybody raises their hand. And then you lay out how they're going to earn their points to, you know, get that grade that they want.
And I've been wrapping my head around how you do that differently. And I've seen some things like contract grading, where students sign a contract of what it's going to take to get an A.
And it moves it away from a little bit of the transaction, but still very transactional with the contract being involved. And these are the things I'm going to do and what I'm committing to doing.
I struggle with how you would change it in such a way to where it would be consistently changed with every faculty member, because faculty members have academic freedom to do what they want. And getting everybody on the same page with not being transactional is a challenge. But when I think about what can I do?
It's still transactional, but I begin looking at the process as opposed to the outcome.
And so if I see critical thinking and critical engaging with the ideas and the process of creating an outcome and I can begin to tap into that, then I'm more okay with that being transactional because I'm able to see into the inside of the thinking of the process and what they're creating.
And in some ways, I think these generative AI tools give a much easier way to peek into that critical thinking process, because there's prompting involved, there's reflections on what the prompts say.
And so you've got a lot more of those pieces you can begin to look at that give you the ability to see how students are using the tools and doing things to generate outcomes, because the outcomes they generate when they go off into the real world are going to be different than the outcomes they're creating in our classroom. You know, two years from now, it's going to be way different. Things just change so much.
But that process of critically evaluating, critically thinking and looking at how do I provide added value to what the machine's creating through that critical thinking process. I don't think that's going to change.
And so I think in many ways, it's changing our focus to what it is that we are evaluating gets us to a better place of seeing how well are students actually doing with these new technologies.
There's a lot to this, and I wish I had good answers. I think your focus on the process is a really good approach. I've been thinking quite a bit about mastery.
So one of my favorite classes as an undergrad was a linear programming/management science class.
I don't remember what the exact title of it was, but this professor had this giant box of index cards where for every topic, say a transportation problem or whatever it was, he would have eight or ten tests, which were a question or two, a problem or two, to work through.
He'd do a lecture, and then we could practice. We'd go through some practice problems. And then you take a test. Not everybody, but if you wanted to take the test, you took the test.
If you didn't want to take the test, you didn't take the test. I think he had a limit on like three tests a day. Well, I was done by midterm because I'd had some exposure to this stuff in another class.
So I kind of knew what was going on. But I thought, I know exactly what I've learned in this class and I've demonstrated that I've learned it because I did it.
Now, you know, retention, all those other things, who knows? But I thought that was just a fantastic class because it wasn't about whether you turned in the homework or whether you were in class.
All these things that we give points for. It was like you did it or you didn't do it. And I'm making up numbers here, but let's say he had 15 tests.
If you did 13, 14, or 15 of them, you got an A. If you did 10 to 12, you got a B, or whatever it was. But he had some grading scheme like that. And I thought, this is just great.
If I just wanted a C, if I didn't care about this class, I would have done the 10 tests or whatever and then just quit. I think that might be another approach. But this is going to be tough to address. But I think we can start chipping away at it.
And the first thing I'm going to do is cut back on the number of assignments. Students sometimes feel like this stuff is busy work, and I'm not so sure they're wrong.
Right. I've pivoted to homework being about the preparation of an activity before they come to class.
That then is the jumping-off point where we can do something, so it's graded binarily: did you do it before you came to class or not?
Kind of with the idea that you're going to think about this a little bit before we engage in a learning process in the classroom.
And so you know, if they cheat, if they do something where they didn't actually put cognitive thought into it, then it shows up in class participation and their critical thinking abilities in the classroom because they didn't spend that time to be prepared to engage in some active learning inside of a classroom environment.
I don't know. I really want to find ways to move away from this transaction idea, and I don't quite know how to do it, but we'd love to hear your thoughts.
So let me throw this idea out there. And this is something I've learned being in Europe.
And I've heard students, especially my students from WSU who are here, say that they're taught by the European model. And I say, well, what's that? They said, well, the only thing we're going to have graded all semester long is the final exam. Yeah.
Is that kind of what we're getting to is there's going to be one big opportunity to demonstrate your learning. And maybe it's not written, maybe it's oral, maybe it's presentation based in a way that you get up in front of the world and share your expertise.
Is that a different mindset than what we do in the United States?
I think it's absolutely a different mindset. There was a pretty big push over the last 10 or 15 years to have a large number of low stakes activities.
And the idea was you lower the risk, you get the students to relax a little bit, they learn in smaller chunks, and everything turns out better. That made sense to me at the time, but I'm not so sure that's the way we can go, especially at scale. You know, all of this is different.
If you've got 20 students, you know, it's not nearly as much of a problem. But generally, in my principles class, I have between 75 and 95. You know, I can't do oral exams for those students.
You know, they can't all give presentations. And so it's going to be a struggle, but we might be able to help. So, Rob, I'm going to throw a little bit of a curveball here.
Rob and I are in the early process of creating a repository of learning activities that either make it so students can't really use generative AI inappropriately or that leverage generative AI. And I know, Rob, you've done some things, I've done some things. We both have colleagues that have done some interesting things.
But we were talking offline and realized that there's nothing that's putting all of this together where you can just kind of go see what other people are doing. And so we're hopeful that by late spring we might be able to put something out there that might be useful for people, but that's going to require y'all's cooperation. If we don't get content, we're not going to reach critical mass and it'll just die.
So, listeners, be thinking about that over the next couple of months. We'll put out a little survey where you can sign up to participate. It's going to be a free resource.
It's not, you know, we're not trying to make any money off of this.
It's just going to be this, this repository that I think can really be beneficial in helping people think about the way they evaluate learning a little bit differently.
Yeah, no, I'm excited about this, Craig.
I think this is taking what I've advocated for at my local university, and that is we need faculty in the hallways talking about things, sharing their ideas, sharing what they're doing, so we all don't feel like we're reinventing the wheel, that we're all building this wheel together. And together we're going to build a better wheel than any of us would build individually.
So we need your help and we need to come together as a community of educators. And I think what we create is not going to be information systems focused.
Craig and I are both from the information systems domain, but we need people in marketing and accounting, and even outside of the business domain, to jump in and to help us create a place where hopefully we can push a conversation of how education can stay in front of this and do a great job of preparing students and bringing that added value that students expect when they write that tuition check every semester.
Yeah. And it's going to be critical to get people from across disciplines.
I know the English faculty have some really interesting challenges that some of the rest of us don't have. The sciences, it just goes straight across the board. We're going to want to have participation from a broad spectrum of different disciplines.
If you have anything you want to share in that regard, it's easy to email me: craig@aigoestocollege.com. And I'd love to see what you're doing. Maybe we can even get you on to talk about what you're doing.
All right, Rob, any last thoughts?
I'm going to follow up on what you just said and encourage people that no idea is a bad idea. It may not be a perfect idea, and that's fine. We aren't looking for perfection. Even incremental steps forward in doing this
are going to help the community grow. So if you're doing something that you're unsure of, whether or not it's worth sharing, please share it.
And Craig and I may very well look at it and be able to help make it better. Right.
So this is a community approach, and I'm hopeful that through this we can build a better opportunity for engaging our students with generative AI in the classroom.
Make it better or borrow it or both. Yeah. And we've already had some listener interaction doing amazing things that I never would have thought about.
And so, yeah, we'd love to hear from you. Again, it's craig@aigostocollege... I'm sorry, craig@aigoestocollege.com. Don't forget the dot com. Speaking of aigoestocollege.com,
see if I can say that correctly: aigoestocollege.com. That's where you can find everything. You can sign up for the newsletter.
You can go to aigoestocollege.com/follow and it'll have links for Apple Podcasts and all the major podcast players. Makes it really easy. Share it with your friends, share it with your colleagues.
Let us know what you're thinking, and I think with that, we will wrap it up. All right. Thanks, Rob.
Yep. Thanks, Craig.