March 4, 2025

AI's Impact on Critical Thinking, the Talent Pipeline, and Academic Research: Implications for Higher Education

Craig Van Slyke and Robert E. Crossler discuss the latest advancements in generative artificial intelligence, with a particular focus on the release of Claude Sonnet 3.7. This development has prompted a wave of excitement and speculation about its implications for the future of programming. The hosts share their observations on how this model could change the way coding is approached, potentially rendering traditional entry-level programming roles obsolete while making seasoned professionals more efficient. This raises critical questions about the evolving job market and the skills required in the face of such technological advancements.

As the dialogue unfolds, the hosts transition to a discussion on the ethical and educational ramifications of integrating AI into academic environments. They express concerns regarding the diminishing emphasis on critical thinking skills, particularly among students who may rely heavily on AI-generated outputs. Van Slyke and Crossler emphasize the necessity for educators to not only familiarize themselves with these technologies but also to instill a sense of skepticism and analytical rigor in their students. This approach is vital for ensuring that future professionals are equipped to discern and evaluate the information generated by AI, fostering a culture of informed decision-making and innovation. Van Slyke and Crossler offer some interesting ways in which AI can be used to help students improve their critical thinking skills.

The hosts also discuss how new AI tools, such as OpenAI's ChatGPT Deep Research, may reshape the way academic research is done by both faculty and students. Higher ed professionals may need to rethink the very purpose of learning activities such as research papers.

The episode concludes with a call to action for higher education institutions, urging them to rethink their pedagogical strategies in light of the rapid proliferation of AI technologies. By fostering a collaborative and adaptive educational environment, educators can empower students to harness the capabilities of generative AI responsibly, paving the way for a future where technology enhances, rather than erodes, critical thinking skills.

Takeaways:

  • The recent advancements in generative AI, particularly Claude Sonnet 3.7, have significant implications for coding practices across various disciplines.
  • There exists a growing concern amongst educators regarding the potential displacement of entry-level programming jobs due to the capabilities of generative AI technologies.
  • It is essential for higher education institutions to adapt their pedagogical approaches to effectively integrate generative AI into the curriculum for enhanced critical thinking.
  • Generative AI tools can serve as valuable resources for academic research, but they must be used carefully to avoid over-reliance and ensure the integrity of scholarly work.
  • The conversation around generative AI's impact on critical thinking skills reveals a dual potential for either degradation or enhancement based on how these tools are utilized.
  • Educators need to cultivate a deeper understanding of generative AI technologies to guide students in their effective and ethical use in academic contexts.

Companies mentioned in this episode:

  • Anthropic
  • OpenAI
  • Microsoft
  • Peapod
  • Doordash
  • Uber Eats
  • Walmart
  • Chewy

Mentioned in this episode:

AI Goes to College Newsletter

Chapters

00:48 - Introducing Dr. Robert E. Crossler

04:51 - The Future of AI and Its Impact on Entry-Level Jobs

13:41 - The Ethics of AI in Academic Research

26:16 - The Impact of Generative AI on Critical Thinking

34:53 - The Role of AI in Critical Thinking

Transcript
Craig

Welcome to another episode of AI Goes to College. And once again I'm joined by my friend and colleague, Dr. Robert E. Crossler of Washington State University. Rob, how is Europe?


Rob

Europe's great, Craig. It's a slower pace of life, which has been kind of nice. So I've enjoyed the time to reflect on things.

I find myself having more time to think about things, which I don't know if that's a good thing or bad.


Craig

That could go either way. All right, well, I'm glad you're having a good time over there. Well, let's get to it.

So it's been kind of a big week, or week and a half, for generative AI, and it seems like a lot of them are really big. But there were two huge developments in the last week or so, one of which was, finally, a new model from Anthropic: Claude Sonnet 3.7.

There's a whole thing about why they jumped from 3.5 to 3.7, so it's weird that there never was a real 3.6. But this is the next model after 3.5. It caused a lot of excitement. I'm a big Claude fan. Generally, what it's good at, it's really good at.

Have you had a chance to try it yet?


Rob

Yeah, I played with it a little bit. I've been reading a lot about how it will just do programming for you and you plug that stuff in.

And before this call, I went on and had it make the Minesweeper game for me and it worked incredibly well and I had a game in 30 seconds. I mean, it was wonderful.

And that wasn't really taking advantage of 3.7, but seeing some of the stuff that I've read about, with what it's doing with its agents, bringing everything together into one place, it's going to be an interesting space to really push what they're doing.
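(For anyone who wants to reproduce what Rob describes outside the chat interface, here is a minimal sketch using Anthropic's Python SDK. The model identifier and prompt are illustrative assumptions, not details from the episode, and, in the spirit of the conversation, the generated code should be reviewed before you run it.)

    # Minimal sketch: asking Claude to generate a small game, the way Rob
    # describes doing in the chat interface. Assumes the official `anthropic`
    # Python SDK is installed and ANTHROPIC_API_KEY is set in the environment.
    import anthropic

    client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # illustrative model identifier
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": "Write a complete, playable Minesweeper game in Python "
                       "using only the standard library (tkinter for the UI).",
        }],
    )

    # The reply is plain text containing the generated program. Save it to a
    # file and review it before running it, rather than trusting it blindly.
    print(response.content[0].text)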


Craig

Yeah.

You know, you and I are both in IS, information systems, and so we really need to think about this a lot, because I don't know what's going to happen with coding. We've talked about that a little bit. And what do we need to shift to?

But even if you're in some other discipline, I think it's worth paying attention to what's going on with coding in Claude Sonnet 3.7, because other professions are going to follow. You know, it's not that big of a stretch to tie this into accounting, math tutoring.

You know, you can just go down the list, and I think you're going to see one thing after another get ticked off.

Where before now, it's been nice as an aid to programming, but everything I've seen online is like, yeah, it can really help make a good programmer more efficient, and you can maybe do some little stuff if you're a novice, but it's not going to turn somebody who knows nothing into a real developer. Claude with 3.7 sounds like it may be a little bit different. Is that what you're reading?


Rob

Yeah, I think it's going to help move the bar further.

But from what I've seen, it still struggles with the complicated nature of big, large hairy problems and being able to figure out where the bugs are and to integrate those sorts of things. And that's where I think the human expertise is still really going to be necessary.

Because a lot of our code, at least existing code, is probably not the most cleanly written code. It's not the easiest to understand.

And I think that the human developer that can understand that and then lean into these tools still becomes way more efficient.

Where I see the struggle is how is this going to affect the entry level programmer and the development of that entry level programmer to a point where they can be that intermediate level programmer that's needed for where these tools aren't quite capable yet.

And the better and better tools like Claude get, the bigger the challenge, I think, for the development of people at the beginning of their careers.


Craig

Yeah, I think you're right. It's going to change the nature of those entry level jobs in a big way.

I'm a little worried that in the short run it may take away a lot of entry-level jobs, but that creates a whole lot of downstream problems for employers. So, you know, we need to have some big conversations about that.

I have some ideas and maybe that's something we should talk about on a future episode. So how did your Minesweeper game turn out? Is it like the old school Minesweeper?


Rob

It reminded me of 1995 and my Windows 3.1 device. So it worked well and had some nice flashbacks to my teenage years when I was first learning how to use a mouse and how to use Windows.

And to be able to create it that fast was pretty interesting.


Craig

Yeah. I just don't know where this is all going to go and I don't know that anybody else does either.

But I do think that anybody who is in higher ed needs to be thinking more and more about what it's going to mean for the talent pipeline into organizations, and maybe more importantly, organizations need to be thinking about that. I had a pretty interesting conversation with some people from the Society for Information Management.

And SIM is a big group of thousands of CIOs, chief information officers, and other high-level IT people. And this was their Work 4.0 special interest group.

And we had a pretty interesting conversation about entry level jobs and the talent gap, the skills gap that may pop up if we cut off these entry level jobs. Because then how do you develop the mid level people?

If you shut off the entry point into the pipeline, the pipeline's cut off; you're not going to get people further in. So I don't know. Big questions. I wish I had more answers, because we'd be making a lot of money on the consulting circuit.


Rob

But, Craig, I want to speak to that just a little bit, because I've read counterarguments to this, and that is that a lot of organizations have yet to see profit from investment in these AI technologies. And so on one hand we do hear that it's going to cost us these entry-level jobs and do all these sorts of things.

But much like with many technologies before, organizations that don't have a clear plan and a clear way of implementing are not yet seeing profits and are turning away, abandoning some AI adoptions, because they aren't seeing the promised delivery of these sorts of things.

So while I do see the potential for exactly what we're talking about, if this is not done properly by organizations, I think we're going to see a lot of wasted investments that aren't going to result in the promised payoff of a much more efficient workforce.

So I think there's something to be said for properly and appropriately doing this sort of thing with a plan as opposed to thinking, oh, I just have to adopt AI and it's going to work magically to deliver all of these performance efficiencies.


Craig

It's the same thing we've seen over and over again. But is it Amara's Law?


Rob

We tend to overestimate the effect of technology in the short run and underestimate the effect in the long run.


Craig

Yep. And I think that's going to be true for generative AI. And we're already seeing that.

I mean, look at all of the early e-commerce efforts that were spectacular failures. Peapod. Remember Peapod? Pets.com? Yeah, but now you have Walmart delivery and all those kinds of services, and you have Chewy.


Rob

But I'll point out that it took us 20 years to get there, right? Amazon was not profitable for how long? We're going to see a lot of these AI companies that look like they're very promising.

A year, two years from now, they aren't going to be around anymore because they didn't find their way towards profit. But there will be others that will survive.

And they'll be like the Amazon that operated in deficit for years and years and years, but then finally became the behemoth that we know Amazon as today. There's going to be some ups and there's going to be downs and there's going to be pains and there's going to be successes.

But it will survive and it will change the way we do things.


Craig

And a lot of these early failures will just be a step towards ultimate success.

You know, somebody will sit back and watch whatever the generative AI equivalent of Peapod or Pets.com is, and they'll think, okay, here's where they messed up, or here's the piece of the puzzle they didn't have. And you get, you know, Chewy. Or, I don't know, the only grocery delivery we can get is Walmart, but I'm sure there are lots of others. DoorDash.

That's the one, right?


Rob

DoorDash, DoorDash, Uber Eats. I mean, there's all sorts of delivery services now. The COVID-19 pandemic really accelerated some of those sorts of things to profitability.


Craig

You big city folk have those. You know, we don't have those, Coleman.


Rob

What are you talking about?


Craig

Those of us in Eros, Louisiana don't have those, but Walmart will bring our groceries, and that's a wonderful thing. The other big announcement kind of flew under the radar a little bit, which surprised me.

I think it was just yesterday, OpenAI released ChatGPT Deep Research to Plus and Teams members. So previously it was only available to Pro members. I think Plus is 20 bucks a month. Teams is either 25 or 30, depending upon how you pay for it.

And Pro is 200 bucks a month. And guess who got a Pro subscription literally two days before they released Deep Research to everybody.


Rob

They were waiting for you to buy that, Craig, before they decided to release it.


Craig

That was the trigger. All right, Van Slyke bought it. Boom. Go. Have you used it? Have you played with it?


Rob

I have not played with it. I've read a lot about it and I've got some ideas of things I'm going to do, but I do have some concerns with it.

But it's definitely exciting on the surface.


Craig

My two trials were pretty, well, three trials were pretty amazing, although two of them had some obvious deficiencies. So I did three things. I did kind of a pre-review for a big paper a colleague and I are working on.

So this is one of those papers where we've been working on it forever. It's a really good idea. Even a mutual friend of ours who is not easy to impress thought it was a pretty good idea and a good paper.

So we've been working on it slowly and we're in that last push before submission.

And normally I would send it to you and send it to some other people and get some comments, but you know, everybody's busy and I hate to impose and you know, you don't want to have to bug people. So I was a little reluctant to do that.

I put it into Deep Research and said, here's the journal we're targeting, here's the paper, you know, here's what I want to accomplish. It came back and asked me a bunch of pretty pointed questions about exactly what I wanted.

And then it gave me, I think it was a 16-page review of the article. I don't write 16-page reviews, and I write really detailed reviews, but mine are not 16 pages.

I haven't gone through it in great detail yet, but everything that I saw on a scan was either something that I would have anticipated. You and I both do a lot of reviewing and editorial work and so you kind of know where the problems are going to be.

But it pointed out a lot of things that I hadn't quite thought of yet. And so that second set of eyes may push us from a rejection to a revise and resubmit.

You know, maybe not, we'll find out, but it's going to be a better paper because of it. And this whole thing, not reading through it in detail, but producing that 16-page developmental review, took 15 or 20 minutes.

How long do you spend on a review for a top journal?


Rob

Sometimes the whole day.


Craig

Yeah, I mean, yeah, at least it ain't 15 minutes, I can tell you that. But I think this has huge, huge implications for academic research.


Rob

And what I would love to do, Craig, and maybe this is something we can do in the future, is talk to an editor of one of our top journals.

Because I've been spending some time gathering the AI policies of various different journals, from the Academy of Management, from the Journal of Business Venturing, and other places where faculty in my department publish.


Craig

And they are all over the board.


Rob

They're all over the board, but many of them limit your use of AI technologies to fixing grammatical and spelling-type errors. And if you're going to use it for some sort of data analysis, you need to discuss that in your research methods as one of the research analysis tools.

And so as you talk about creative uses like this that are becoming available through the advancement of these technologies, I wonder how this sits with the gatekeepers. And how are people even going to acknowledge that they do it, which has some ethical implications of its own.

Is it a better use of reviewer resources to expect that people use these tools to review it and begin addressing things from a perspective such as this?

I don't know the answers, but it would be great to talk to one of the editors in chief of these journals to see what their views are on the use of technologies like this.


Craig

That'd make an interesting panel discussion. Maybe we can put that together.


Rob

Yeah, that'd be awesome.


Craig

I'm not even sure I know what the ethics of disclosure are there. I mean, I didn't use generative AI to write anything, to create anything. You know, it's just giving feedback.

How is that different from me sending it to you?

Now, if I said, okay, here, develop, you know, a research question or something like that, maybe I'm starting to cross some lines. But here's the one that gets me.

So my doctoral students are turning in their seminar papers this weekend, and yesterday, or, I'm sorry, day before yesterday, when we were meeting, I told them, for the love of all that's holy, run your paper through Grammarly, because I don't want to read bad writing. And they're all actually pretty good writers, despite the fact that a couple of them are not native English speakers.

But why would we not want people to use tools like that?

I was looking, I can't remember what journal it was, but just the other day they had very detailed policies that were pretty restrictive around generative AI. But they specifically said you should run this through a copy-editing AI tool. It wasn't just "it's okay to do this"; it was "you should do this."

You know, we've talked about that before. But Deep Research changes things a lot. And Google's Gemini has had deep research, too. It's been actually pretty solid.

But the other test that I did is I asked for some deep research on generative AI and critical thinking. So my Pro account produced a 34-page report. I'll bet I could go in, edit it a little bit, and sell it as a white paper. It was good enough for that.

I tried it again on Plus, and it produced a 20-page report, which doesn't mean it was worse. Maybe it's better because it's more concise. And there were some flaws due to my lack of skill at this, like the one I sent you.

You pointed out that it kept citing the same three sources over and over again. And that's partially my fault for not giving it some example sources and doing some other things there that would have improved the result.

But, you know, I just don't know where this leads us in terms of higher ed. I mean, the research report in its old form may be dead.


Rob

Well, here's where I think that there's an interesting challenge in the midst of this, and you alluded to it in what you said and how you've looked over this.

The feedback I gave you is: to know that what you got is reliable and good requires a level of expertise, to be able to look at it and make that judgment.

And so I think in some ways the burden of what is education changes to how do we help our students to know how to know that what they've created is reliable and good?

Because from what I'm seeing is sometimes the deep research turns out amazing things that are really good, and other times it has some pretty phenomenal flaws in it.

But it is written really, really well, so that if you don't have that level of expertise, you may not pick up on those flaws that are baked into it.

And so I think the nature of what it is we need to be looking for and ensuring in that critical thinking component is almost next level of what we expect from students today.

And so therein lies the challenge: how do we get to that next level of critical thinking as these are being evaluated, given where people's abilities are in their current state?


Craig

Yeah, I think you're spot on. I cannot disagree with anything you've said there, although it's a funny thing.

So this has tremendous potential for bringing somebody up to speed really quickly in an area. As long as you don't rely on it to be absolutely 100% true and totally comprehensive, I don't see much harm in that. It's kind of a jump-starter. But then on the other end, you've got people that have that deep expertise.

I've got a great little story about Perplexity's deep research. I'm a big Perplexity fan, although it's stumbled a little bit lately. So I asked it about this thing in security, something we're working on.

I want to find out about these adjacent areas, and it cited a paper by one of our mutual friends, Karen Renaud, out of Strathclyde in Scotland. Karen is just an insanely prolific publisher.

At first I thought, well, you know, I kind of know what she does and I don't remember seeing this, but she publishes a lot of stuff, so maybe I just didn't see it.

So I went into Google Scholar and looked through her profile, which was quite humbling to compare with mine, but I couldn't find it. Perplexity gave me a link, but a link to a completely different paper on a completely different topic.

I mean, nothing to do with this, and this was a cybersecurity topic. It was, you know, like, I don't know what it was about, but it had just nothing to do with it.

So finally I emailed her, said, hey, Karen, you know, I found this thing that's attributed to you. The thing didn't exist.

She sent me a paper that was very slightly related to this topic, so you could maybe start to see how the hallucination got put together. But it was 100% totally made up.

And if I hadn't been up enough on the security literature and had the skills to go in and check everything, I could have looked like a bigger idiot than I normally look like.


Rob

So that's where I think some of the challenges are. Because I think about a 35-page document; let's assume that something like that was on page 25.

And I'm going through the first 20 pages with a critical eye and it's looking really, really good to me. At what point do I feel confident that everything else is going to be good as well?

And so when you're not creating it yourself and putting your own thought into each and every section, at what point do we get in a hurry, if you will, or if our cognitive abilities start to wear out, do we put the rubber stamp onto things that ultimately turn out to be fabricated? How do we ensure we don't do that? Right.

And I think it's a check we have to put on ourselves, to make sure that we don't just say, okay, good enough, but we truly say: yes, if I had written this, this is basically, you know, what I would have come up with.


Craig

Yeah, it's the old human in the loop idea that we keep talking about. People are going to take these deep research reports, put their names on them and put them out into the world.

And they're running some huge risks if they do that.

But for academics, and for really anybody in higher ed, I can see the potential of having this as kind of background, and then anything important, you know, I'm going to try to dig into a little bit more. So on a 34-page report, I would probably use zero of it in terms of putting it into my own document.

But in terms of, okay, this is a decent way to organize it, here's maybe a perspective I hadn't thought about, here's some links I can go check real quickly, it might save me five or 10 hours, which is not an unreasonable estimate.

It's the same thing when we have research assistants. We don't take what they do and publish it.


Rob

But here's where I think we might have some generational differences, Craig.

I wonder if our 20-year-old students would take that same approach, or if they would say, oh, it created something for me, it's good, I'm turning it in.

What is it about your life experiences and your training and those sorts of things that have gotten you to the point where you're maybe a little bit more skeptical and want to ensure that it truly communicates what you want? As opposed to: yeah, if it gets me at least a B, life is good. Let's move on to the other thing I'd rather be doing.


Craig

Six decades of mistakes got me here. But that's our job as educators.


Rob

That's the conversation we should be having: what does education look like where we know these tools are available, yet we want our students to be critical, to be skeptical, to use the tools, but also, at the end of the day, to have a level of confidence that what they put forward is accurate and correct and consistent with what they want to be saying.


Craig

And this is the big message that I think we want to get out there. Educators need to figure out the capabilities of these tools like deep research, so we can help guide students to use them appropriately.

If we don't know how they work, what they're good at, what they're bad at, how to use them effectively, then it's going to be really hard for us to teach our students how to do it. Does that make sense?


Rob

Makes perfect sense.

And I think the place where I'd push that even a little bit further is, in the past, I've seen solutions that were, I'm going to pick on Microsoft Excel: let's create a Microsoft Excel class, and then assume the students know how to use Microsoft Excel, and then it can show up everywhere else and they'll know how to use it. And then people complain students don't know how to use Microsoft Excel because, oh, it's been two years since they've done it. You just assumed that they'd remember.

I think a lot of what we're talking about needs to be context specific to the material that people are covering. And it's something that gets beat into people's heads over and over and over again.

And not just, well, as freshmen we taught them how to use it in their intro class. Therefore everything else should be good.

It's repeated learning because so much changes and it's going to be so different depending on the context in which you're using it.


Craig

Yeah, we need coordination. Yes, Excel across the curriculum. There's AI now across the curriculum. Ethics across the curriculum. All right, well that takes us into another topic.

Do you have anything else to say on deep research?


Rob

I would probably talk forever about deep research, but I think this is a good pivot point.


Craig

I do want to give one big caution, because this was not highly publicized: if you're a Plus or a Teams user, you get 10 Deep Research queries a month. So use them wisely.


Rob

I think it's because they want you to spend the $200. Once you feel like you can use it meaningfully, you want to use it more.


Craig

Yes, I'm going to use mine. I get a hundred with Pro. Amazing how that works out, right? 10 times more. 10 times more.

Pro also does have access to a different reasoning model that's much stronger than the o3-mini that the others have. It's not worth 200 bucks a month for most people. I want to be really clear about that.

Okay, so we've kind of talked around the edges of a really important topic in terms of generative AI, and that's critical thinking. And there's a lot of talk about generative AI and critical thinking. And you know, is it the end of critical thinking?

Is it going to degrade critical thinking? There was a paper that came out fairly recently.

It basically found that people felt like they did not give the same level of effort in their critical thinking when they used generative AI. The paper is Lee et al. (2025). It's a bunch of people from Microsoft and one person, Lee, from Carnegie Mellon.

So some fairly big time folks.

What they did is a study where they asked people how they use generative AI and then basically asked them whether or not they thought they put more or less cognitive effort into it.

And I'm way oversimplifying the paper, but people reported that they put less cognitive effort into critical thinking when they used generative AI, which in my mind proves not very much. Part of the point is to put less cognitive effort into critical thinking. I mean, that's part of what I want to use it for.

I know I'm kind of going at this in a pretty roundabout way, but the bottom line is generative AI can degrade critical thinking skills or it can enhance them. It's all about how it's used and as educators, how we teach students to use generative AI.

I think that lines up with a lot of what you've been saying. Rob, do you agree?


Rob

Yeah, I would. And here's where I was thinking about this last night.

I almost wonder, when it comes to the creation of documents, information, whatever you want to do with generative AI, is there a scale from zero, where what's created means nothing, to a hundred, where it's super important? To where then I can look at something and say, if it's toward zero, it doesn't really matter; I can just take it for what it is and use it, without much critical thought toward what it creates. Versus a hundred, right?

It's going to be world-changing, life-changing, career-changing.

One of those sorts of documents where I really need to put on that critical thinking hat and look through it very, very intentionally. And kind of the first step of the process in what I'm creating is asking how important the outcome of the creation of this material is.

And if it's, yeah, pretty important, then to say: I'm going to take that cognitive effort I saved in writing it, and I'm going to invest that into evaluating. Versus, you know, I wrote a funny poem that's going to make me and my friends laugh; well, maybe that doesn't require much cognitive effort in looking at it and seeing what it does.


Craig

The cynic in me thinks back on all the reports that I put together over my career that I'm sure literally nobody has ever read. That's not a knock on universities. Any large organization has that same kind of thing going on.

So, yeah, there's that piece of it, but that's part of critical thinking: how much effort should I put into whatever it is that I'm trying to create or analyze? I want us to send a couple of messages. One is, AI is a neutral tool, like a lot of technologies.

So if it's used poorly, it's going to degrade critical thinking. If it's used well, it's going to enhance critical thinking skills. But we've got to help push students toward that enhancing use.

And I want to give you just a few ways that we might do this. One is, AI is a great devil's advocate. You can tell it: I want you to literally act as the devil's advocate.

And the devil's advocate was the one in the Catholic Church who would argue against whatever position was being promoted. AI would be great at that.


Rob

Well, and here's where I've seen that useful too, Craig.

And this is where the existence of multiple AI tools is handy: I've seen some people who will create documents using, say, ChatGPT in the creation process, and then turn over to Claude and ask Claude to be the devil's advocate.

So that way they're getting some of that different reasoning, some of the different training, the different approaches criticizing what they've been doing.

And that way it's not a sole reliance on whatever bias might be involved in one of the tools; it brings together multiple tools to help do some of those sorts of things.
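(A minimal sketch of the cross-model workflow Rob describes: draft with one vendor's model, then ask a different vendor's model to play devil's advocate. It assumes the official openai and anthropic Python SDKs with API keys set in the environment; the model names and prompts are illustrative assumptions, not details from the episode.)

    # Step 1: draft a document with one model (ChatGPT via the OpenAI SDK).
    # Step 2: hand the draft to a different vendor's model (Claude) and ask
    # it to argue against the draft's strongest points.
    import anthropic
    from openai import OpenAI

    openai_client = OpenAI()               # reads OPENAI_API_KEY
    claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

    draft = openai_client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Draft a one-page argument for requiring an "
                       "AI-literacy module in every general-education course.",
        }],
    ).choices[0].message.content

    critique = claude_client.messages.create(
        model="claude-3-7-sonnet-20250219",  # illustrative model choice
        max_tokens=1500,
        messages=[{
            "role": "user",
            "content": "Act as a devil's advocate. Argue against the "
                       "strongest points in this draft:\n\n" + draft,
        }],
    ).content[0].text

    print(critique)

The point of crossing vendors, as Rob notes, is that the critic carries different training, and different biases, than the model that wrote the draft.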


Craig

That's a great idea. Although I'm picturing in my mind, you know, telling Claude that here's something ChatGPT produced and then having Claude talk smack.


Rob

Well, so, you know, Poe sent an email to me. I woke up to it this morning.

I haven't played with it yet, but they've got a tool now, I think they call it the Poe agent, that will let you bounce between multiple different things with the same content and share it across the various different GPTs in a way that is seamless.


Craig

Yeah, that's a great idea. I actually did this with a little experiment with Grok. Have you tried Grok? Grok's pretty good.


Rob

My one concern with Grok, and this is a little bit of a tangent, is if it's trained straight up on Twitter data, which I think to some level it is.

There's some things on Twitter that I'm not sure I want influencing the conversations I'm having. They're taking some of the guardrails off, and I get why they do it, but at the same time, I'm not sure that's what I want in one of these tools.


Craig

Now, my conclusion was, as much as I liked Grok, you know, it wasn't better enough for me to want to switch.

I did the same prompt through Claude, ChatGPT, and Grok, in their best models, and then actually had ChatGPT evaluate the thing, and it said that Grok's output was the best. It wasn't a big margin. There were differences. Grok struck a better balance between comprehensiveness and conciseness.

But, you know, a different prompt might have come out differently, and it was not anything that the Twitter nonsense could have really bled into.


Rob

Yeah, I read that and find it interesting that it picked a winner amongst all of those. It's almost like it was forced to.


Craig

Pick a winner because it was forced. I told it, you have to pick a winner.


Rob

Yeah. Because I could see it coming out any of the different ways.


Craig

Yeah, they were all good. I mean, they were all very solid. You're absolutely right. I did force it to pick a winner and to explain its reasoning, which it did.

So another critical thinking idea is uncovering hidden assumptions. So you've written something.

There are always layers of assumptions under pretty much anything you do, and they're really hard to see because we hold some of our assumptions so deeply that we're just not aware of them at all. But that's a huge aspect of critical thinking.

If you get to where you can identify your own assumptions or the assumptions that are underlying somebody else's arguments, you're going to be a much better critical thinker because of that one thing. And AI can do that repeatedly to where you get to the point where you can identify them on your own more effectively.


Rob

Yeah, that's an interesting challenge.

I've asked students even to identify what their own bias is that they're bringing into the prompting that they're doing, and some of those sorts of things and just asking them what bias they're bringing in almost blows their mind that they have to consider that.

So I can imagine how as you then take that to the next level and say what bias or what assumptions are in the model and in the results because of that. And then the next step further is, how do we then change prompting? How do we change how we approach these sorts of things to try to eliminate bias?

Because I don't think you can ever perfectly eliminate all bias. That there is always some level of bias present in everything. It's just, are we comfortable with what that bias is and are we aware of it?


Craig

Absolutely. And this is kind of a bigger point about generative AI and higher ed: look, you or I could play devil's advocate, or we could help students identify implicit assumptions, probably better than AI can. We've both done this kind of thing for a long time. That's a big part of our professions. But how much time do you have to spend on that?

But ChatGPT, Claude, whatever, you know, it's just there. It'll work with you until you run out of quota.

But that's where I think we need to change our mindset a little bit, away from "is this AI stuff as good as what a professor can be?" to "is it available enough and good enough to make a difference in the education of our students?" That seems like a good place to end. What do you think? Works for me.

This has been another episode of AI Goes to College, the podcast that helps higher ed professionals navigate the changing world of generative AI. We'll talk to you next time. Bye.