AI hallucinations, or confabulations, can actually foster scientific innovation by generating a wealth of ideas, even if many of them are incorrect. Craig Van Slyke and Robert E. Crossler explore how AI's ability to rapidly process information allows researchers to brainstorm and ideate more effectively, ultimately leading to significant breakthroughs in various fields. They discuss the need for a shift in how we train scientists, emphasizing critical thinking and the ability to assess AI-generated content. The conversation also touches on the potential risks of AI in education, including the challenge of maintaining student engagement and the fear of students using AI to cheat. As they dive into the latest tools like Google's Gemini and NotebookLM, the hosts highlight the importance of adapting teaching methods to leverage AI's capabilities while ensuring students develop essential skills to thrive in an AI-augmented world.
The latest podcast episode features an engaging discussion between Craig Van Slyke and Robert E. Crossler about the impact of AI on innovation and education. They dive into the concept of AI hallucinations and confabulations, noting that while these outputs may be inaccurate, they can spark creative thinking and lead to valuable scientific breakthroughs. Crossler emphasizes that trained scientists can sift through these AI-generated ideas, helping to separate the wheat from the chaff. This perspective reframes the way we view AI's role in generating new knowledge and highlights the importance of human expertise in guiding this process.
As the dialogue progresses, the hosts address the implications of AI on educational practices. They express concern about the reliance on self-directed learning, noting that many students struggle to engage deeply without structured support. Van Slyke and Crossler advocate for a reimagined educational framework that incorporates AI tools, encouraging educators to foster critical thinking and analytical skills. By challenging students to interact with AI outputs actively, such as critiquing AI-generated reports or creating quizzes based on their work, instructors can ensure that learning is meaningful and substantive.
The episode also explores practical applications of AI tools like Google’s Gemini and NotebookLM for enhancing educational experiences. They discuss how these tools can facilitate research and content creation, making it easier for students to engage with complex topics. However, they also acknowledge the potential for misuse, such as cheating. The hosts argue that by redesigning assignments to focus on critical engagement with AI-generated content, educators can mitigate these risks while enriching the learning process. In summary, the episode provides a thought-provoking examination of how AI can both challenge and enhance the educational landscape, urging educators to adapt their approaches to prepare students for a future where AI is an integral part of knowledge acquisition.
Links
1. New York Times article: https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html
2. Poe.com voice generators: https://aigoestocollege.substack.com/p/an-experiment-with-poecoms-new-speech?r=2eqpnj
3. Gemini Deep Research: https://aigoestocollege.substack.com/p/gemini-deep-research-a-true-game?r=2eqpnj
4. Notebook LM and audio overviews: https://open.substack.com/pub/aigoestocollege/p/notebook-lm-joining-the-audio-interview
Mentioned in this episode:
AI Goes to College Newsletter
00:41 - Introduction to AI Goes to College
04:41 - The Future of Scientific Training and AI
11:48 - The Cost of AI Tools in Education
19:07 - The Impact of AI on Education
27:34 - Exploring Gemini Deep Research
30:59 - The Impact of AI on Student Learning
36:39 - Exploring Professional Happiness and Leadership Styles
Welcome to a new episode of AI Goes to College.
As always, I'm joined by my friend, colleague, and co-host, Robert E. Crossler, Ph.D., from Washington State University.
And I'm Craig Van Slyke from Louisiana Tech University.
Although this podcast is affiliated with neither of our employers.
So with that out of the way, Rob, I sent you a New York Times article which I'll link to in the show notes that basically said AI hallucinations are good for scientific innovation.
Did you have a chance to scan the article?
Yeah, I read it and it wasn't surprising to me.
It really seems to double down on the idea of ideation: when you're coming up with ideas, no idea is a bad idea. If AI causes you to think about things differently and to brainstorm differently, even if it's not 100% correct, then given the expertise that scientists have in whittling down ideas and getting to what truly is the truth, if you will, getting more ideas on the table seems to be a good kind of hallucination.
There's an interesting bit of nuance here.
I think part of it comes from the use of the word hallucination.
I prefer the word confabulation because it's actually more fun to say and it better gets at the idea that these false responses from AI actually are based in a kernel of truth.
So a confabulation is where your mind puts together different true memories to create something that isn't true.
And so we all have these.
One of the reasons that this idea of hallucinations, confabulations, and innovation is so interesting is that what the AI is doing is probabilistically looking at these different things and connecting them, even though they shouldn't be connected.
It's all based on probabilities.
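To make that concrete, here's a toy sketch, not anything from the episode, of how sampling "temperature" changes which connections a model picks. The idea names and scores are invented purely for illustration:

```python
# Toy illustration of probabilistic connection-making.
# A model assigns scores to candidate next ideas; sampling at a higher
# "temperature" flattens the distribution, so low-probability (possibly
# wrong, possibly novel) connections surface more often.
import numpy as np

ideas = ["known protein fold", "known drug target", "unusual cross-domain link"]
scores = np.array([3.0, 2.5, 0.5])  # hypothetical raw preferences

def sample_rates(scores, temperature, n=10000, seed=0):
    rng = np.random.default_rng(seed)
    p = np.exp(scores / temperature)
    p /= p.sum()  # softmax over the candidate ideas
    picks = rng.choice(len(scores), size=n, p=p)
    return np.bincount(picks, minlength=len(scores)) / n

print(dict(zip(ideas, sample_rates(scores, temperature=0.5))))  # conservative
print(dict(zip(ideas, sample_rates(scores, temperature=2.0))))  # more "creative"
```

At low temperature the unusual link almost never appears; at high temperature it shows up regularly, which is roughly the trade-off the article describes.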
Yeah.
And what I love about the way this works is AI can consume information way faster than we can as human beings and then begin making those connections.
Whereas, if I had read the same number of articles, maybe I would begin making those connections.
Or if I get a room full of people where we've all read some subset of those different things and start talking and brainstorming and bouncing ideas off each other, we might get to some similar places.
But AI is allowing science to do that much, much faster and get to a point where we are potentially solving some very real problems, whether it's in health or in physics or in some of those places.
Yeah, yeah.
To your point, AI can just crank out these models or these connections or whatever they are so quickly.
And then our job becomes going through and figuring out what's the wheat and what's the chaff.
Yeah.
And this is where I get excited by this.
But the question that came up in my mind was: right now we have scientists who have gone through decades and decades of becoming experts in their field, where they're knowledgeable enough to look at some of this output and to begin doing something with it.
The scary thing for me is if we look at progression over 10 years from now, where AI has been part of the training process and it's been doing some things for young scientists, how do they go through a process of developing the expertise where they can then critically assess and evaluate these confabulations, if you will, that are provided by the machine?
I think they do it by going through these confabulations and figuring out what's useful and real and what's not.
You have a really important point.
I think we're going to need to change the way we train scientists, where it's not so much digging through the journal articles and reading everything and finding the connections.
It's more, maybe, I don't know, I'm speculating here pretty wildly, but it may be more like using different AI tools to scan those articles and then going through what the AI comes up with and saying: not interesting, not interesting, not real, not real, not real.
Oh, look at this.
This is pretty interesting.
Which is kind of what our brains do to some extent.
Yeah.
And I think what this says, and it's something I've been wrestling with in my own thinking, is: how do you change the education process to bring AI in as a tool and a helper? Maybe we're able to get through more topics, more deep-thinking material, because we have better tools and don't have to do the deep dive the same way we did before.
How do we trial-and-error that over the course of a 15- or 16-week semester?
Sometimes it seems like things change so much that it's hard to know, you know, did the world change while I was testing a certain idea, a certain approach, and is that still valid?
So I think it's just important for us as educators to be sharing our trials and our ideas, in ways that can help everybody grow and do a good job of educating our students, on podcasts like AI Goes to College.
So I'm going to put two plugs in real quick.
One is for all you listeners: if you're doing interesting things with AI, whether you're on the administrative side of things, the teaching side, or the scholarship side.
We would love to hear from you.
Email craig@aigoestocollege.com, because one of the changes we're going to make in the coming year is to have guests on occasionally.
We want to bring in more ideas and more people.
So if you're interested, let us know.
We've already got a couple of possibilities lined up that are doing some really fascinating things with AI, and we'd love to hear from you.
The second thing is we're going to mention two tools.
Why is that so hard to say?
Two tools, in the latter segments of this episode, that get at how we might start to change the way we educate our students and teach them how to think.
Last thing I want to say on this, and then, Rob, we'll see what else you have to say on it: there's an old quote that I have not been able to track down the origin of, something along the lines of "innovation lies at the intersection of previously unconnected ideas."
And that little quote has been kind of a driving force in my life.
And I think this gets at part of what's going on with these hallucinations or confabulations.
AI is connecting these things that we wouldn't connect and coming up with ideas.
And like you mentioned, ideation, one of the things about ideation is a lot of the ideas are complete crap.
But that's okay.
99 bad ideas and one good idea is still really, really good, especially if you can go through those 99 ideas pretty quickly.
Anything else on this, Rob?
Any other thoughts?
No.
I think you hit the nail on the head.
You know, the whole idea of 100 bad ideas and you have the one good one in there.
And how can we get to good ideas faster?
And how do we as educators, as we train up students, how do we help the students to be able to separate the good from the bad and use these tools powerfully and productively?
Absolutely.
All right.
And now I want to mention another tool that we talked about before, and that's going to lead us into a little bit deeper conversation.
So I just wanted to mention that Poe.com recently added two voice generators to its set of large language models.
And for those of you who aren't familiar with Poe, Poe is kind of an aggregator.
It gives you a single interface into, it's got to be, 35 models or so now, including all the big ones, GPT and Claude and Gemini, and now ElevenLabs, which is a voice generator.
And if you haven't played with that.
It's worth playing around with.
And there's also another one called Cartesia that I was not familiar with, but actually produces some pretty good results.
I'll provide a link in the show notes to a little example of how this can be used.
And so just real briefly, what Poe will let you do with voice, which you could also do with ElevenLabs or other tools, is give it a script, and it will read that script back to you.
If you get good with it, you can do things like build in pauses and tone and lots of other things, but it's pretty natural sounding.
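For listeners who want to try this outside of Poe, here's a minimal sketch of the script-to-speech step against ElevenLabs' REST API. Treat the voice ID, the model name, and the exact endpoint shape as assumptions to verify against ElevenLabs' current documentation:

```python
# Minimal sketch: render a script to audio with ElevenLabs' text-to-speech API.
# Assumptions: the v1 endpoint shape, an API key in ELEVENLABS_API_KEY, and a
# voice ID copied from your ElevenLabs dashboard (placeholder below).
import os
import requests

VOICE_ID = "YOUR_VOICE_ID"  # hypothetical placeholder
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

script = (
    "Welcome to this week's lecture. "
    "Today we'll walk through the main ideas from the readings."
)

response = requests.post(
    url,
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": script, "model_id": "eleven_multilingual_v2"},
)
response.raise_for_status()

# The response body is the rendered audio (MP3 by default).
with open("lecture_intro.mp3", "wb") as f:
    f.write(response.content)
```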
I don't know.
Rob, did you get a chance to listen to that little example?
Yeah, I listened to it.
I thought it was awesome.
It starts to blow my mind a little bit to think about how we can put together, you know, online courses.
Right.
I've developed some online classes and developed lectures where I basically record over and over and over again until I feel I got it right, and it sounded good, and it was what I wanted to live out on the course space forever.
And now I think you can begin putting those tools in the hands of everyone, even someone who may not be the best orator or the best speaker, but they're brilliant and they can put together thoughts that can be helpful.
In some ways we've made their knowledge more accessible to students, to others in the world.
Yeah, absolutely.
I don't know if Poe has this, but in ElevenLabs you can clone your own voice too, if you wanted to make it in your voice.
You can also have fun voices.
They've got different accents, and they've got an old-time radio guy.
And so it's kind of interesting to play around with.
But the reason I wanted to bring it up today is not because it's any new capability of AI overall; it's that now it's part of Poe.
And one of the great things about Poe is, oh, they've got a new pricing plan.
I think for $10 a month you can get access to all these different models.
Now there's some limitations on how much you can use it and the pricing is very opaque.
It's kind of hard to understand.
I have a $20 a month subscription and I've never run into any problems with my usage on it.
And then they go up from there.
You can pay quite a bit of money for it, but that's versus paying $20 to OpenAI, $20 to Anthropic for Claude, $20 for Gemini, $5 or $10 a month for ElevenLabs, and on and on and on.
Which brings us to the bigger point, the cost question.
Rob, you have some pretty strong thoughts on this.
Why don't you share them?
Yeah.
So it's been interesting watching all of these different tools come out, where people get excited about using ChatGPT or Poe or whichever tool they're using.
And as the new models come out, as things advance, one becomes better than the other, and you find yourself needing to switch between them, or to use both of them.
And so as workers, as academics, we are becoming confident in using these new tools.
It's helping to make us more productive.
The other side of the conversation that I see is a lot of these AI companies aren't making money yet, and they see the releasing of these tools, the adding of more features in these tools as a way to get more and more money.
And universities are often seen as a place where these companies can become the tech choice of the institution and ultimately begin making profits, making money, from those universities.
And with universities potentially, you know, replacing the work they do with these generative AI tools, yet paying all this money to do it, it's an interesting conundrum: where is it helping us be more productive, and where is it a good investment?
And where is it an investment that could ultimately change the nature of people's jobs, or cost them their jobs?
Yeah, well, it's... are we going to provide the seed money to, you know, kind of push ourselves out of a profession?
And the other thing too is, and I was thinking about this parallel this morning, and that is there was a time when universities all hosted their own mail servers.
So you'd have, you know, Washington State University with its own mail server, and then cloud mail servers became good.
Right?
We have Google, we have Microsoft, we've got, you know, different companies that provide mail.
And eventually universities got out of the we-provide-mail-service business and outsourced it to these third-party vendors, and started paying the Microsofts of the world to provide email accounts to everybody, and then cloud storage and some of those sorts of things.
And those same players are creating these AI tools, whether it's Microsoft's AI tools in Copilot or what Google's doing with Gemini.
And I have yet to hear anyone make a compelling argument that a university should just double down on any individual one of these AI tools, because nobody knows which one's the best one, which one's pros and cons.
They're all out there, and, you know, there's the process to ultimately decide which one is going to be paid for as a university subscription.
You know, the parallel to the mail server is: is that the right direction?
And how does a school pick who the winner is, if you will, and are we at that place?
So I think there's a lot to be critical of and to think about from an institutional perspective: what tools to lean into, and where to provide these tools, which are great, but ultimately at what cost.
Yeah, and maybe it's best to just kind of relax a little bit and see how these tools converge because I think over time their capabilities are going to converge around the 75 or 80% of the capabilities that we really care about.
I've become a lot less excited about some of these new announcements, like, whatever are they calling it, o3 from OpenAI?
I mean, it's kind of like, you know, 4o works really well for me.
So I think we're going to see these announcements, these developments, maybe push the edges.
They call them edge models, but normal folks like us are going to...
Well, that's probably a stretch.
People even less normal than us.
I don't know, I'm just going to stop.
Other people, the people that aren't really, really at that cutting edge, may not care.
Well, I think what you're getting at is kind of like when Apple releases a new iPhone or a new iOS.
Oftentimes it's a minimal step from what it did before.
And some people like to have the latest and the greatest and they're going to go and use it.
But for most people, the one that they're on is probably just fine, unless their phone needs to be replaced and then you, you make the leap.
So it seems like a similar sort of place where we'll get to eventually.
But right now it seems there's so much hype that every time something new drops, it's potentially world changing.
Yeah, well, and we just experienced 12 days of hype from OpenAI, which I found pretty underwhelming actually.
But that's a conversation for another day.
So yeah, it might be time just to relax a little bit on this.
I think the good news is, as the edge models get better and better, the AI companies are going to provide better access to the non-edge models, which could be good for our students overall.
What I would encourage, and this is kind of where I've gotten to as I think about bringing students along, is for decades in information systems, we've been doing research about technology adoption and the use of new technologies.
And it comes down to three or four factors that lead to the successful adoption and use of technologies.
One of those is self-efficacy: people's confidence in their ability to use the tools.
So how do we build confidence in being able to do jobs using these particular tools? Then ease of use is another key component, and so is usefulness.
So I think if we step back and we utilize a good tool and we ensure that students are finding ways to build confidence and recognize how easy it is to use these tools, then whatever direction things go, that skill set, those traits that people develop will actually pull them forward to wherever the technology takes us.
Well, I'm going to draw a kind of strange parallel or analogy here.
It's kind of like going out and investing in better pots and pans and knives when you don't know how to cook.
I mean, you can give me the best cooking implements in the world and my grilled cheese is still going to be my grilled cheese because I don't know what I'm doing.
I think we're kind of at that stage with AI for most people: we are not pushing its capabilities.
And so we really should just kind of chill out a little bit on these constant announcements and just kind of learn how to use the basic tools first.
Take the average user: what percentage of the capabilities of Excel do you think they use?
Probably about five percent.
Five?
Yeah.
You know, but Excel is amazingly useful even if all you know is 5 or 10% of it.
So I think maybe that's the mindset we need to develop.
All right, let's talk about, and this might end up in the title: is AI the death of education?
So Rob and I had a little email exchange about this.
So Rob, why don't you lead our discussion here.
So I've seen a number of articles people are writing that say AI can replace education, that there are tools that let people really go down the path of self-learning, where the AI tools adapt.
I think I read an article about a high school where students were spending two hours a day with AI tools and learning as much as they would in a complete lecture-based system.
It made me think, well, holy cow, is it the death of education?
Right?
That was the nature of what they were trying to get at.
But then I put my critical thinking hat on and I think about the students that I've had come through my classroom and I have some where this approach would be awesome.
Right.
They are going to succeed in whatever that they do.
And let's say that's the top 20%, making up numbers.
But the other 80%, oftentimes you have to drag them, you have to pull them through the process.
And active learning is a great way to do that, to get them engaged in the learning process and to ensure that they truly learn how to apply and to do.
And I don't see most students, anyway, thriving in a place where they don't have a person, an instructor who's well equipped to help them through the learning process, to bring them along on that journey, to get them to the finish line of what they hope to learn, and to apply what they learn.
So I see instructors being supported by AI and using AI as a tool in the classroom.
But is it going to be the death of education?
I don't think so.
I think if we don't change how we teach, there are certain people who might not be valuable teachers anymore.
We all have to adjust and adapt to the times.
But I don't see it being the death of education.
Though you do see those articles out there.
Yeah.
I mean it's a lot of nonsense.
I'm going to put it a lot more bluntly, Rob, than you did.
Most people stink at self directed learning.
They just do.
Now if you are a really self motivated, self directed learner, you can use AI tools in amazing ways.
But look, I don't know about you, but when I was 18, 19 years old, I would have put myself pretty solidly in that stinks-at-self-directed-learning category.
Especially when you don't even know what you should be learning.
There's a big assumption there: if I'm going to learn all this with AI, I've got to know what it is I'm supposed to be learning.
And that's where faculty and formal curricula come in.
So I tend to agree with you.
I don't think it's going to be the death of education, but education is going to be transformed.
It's not going to be the same old thing.
We're going to have to come up with new ways to engage students.
And I know you're doing this and I'm trying to do this.
We're trying to get them to use AI in their active learning, to learn course content.
And I think that's the way to go.
Embrace it as a learning tool.
Yeah.
And what gets me excited about it is one of the things I've heard since I've been an instructor, so going on 20 years now: that students need to be better critical thinkers, and students need to be better at the softer skills of presenting.
And in many ways, AI and what AI can do for you lets you push students on their critical thinking skills.
Because what's going to separate a student who enters the workplace with AI at their fingertips is how they critically apply it, think about it, and make themselves a better employee because they have access to these tools.
And a lot of that's going to come down to if AI can put a super perfectly written document together, the skills of being able to present that information, to sell that information in face to face conversations becomes even more important.
And if the focus then pivots from "how do I write a document that is compelling and good?" to "how do I actually talk about this in a way that captivates people's attention and gets them excited about the new opportunities we've been able to put together?"
I think it's just going to prepare our students even better to step out into the marketplace and succeed.
Well, you've touched on a lot there.
So I want to see if I can kind of expand on a couple of things you've said.
One is about critical thinking.
You know, students definitely need, we all need, to be better critical thinkers.
That's a key skill, and has been for a long time, for anybody that's going to engage in any kind of knowledge work or try to have a good life.
AI can definitely degrade students' critical thinking skills.
Absolutely.
But it can also greatly enhance their critical thinking skills.
It depends on how it's used.
I think that's up to the instructor.
Right.
If we think about the role of the instructor in the process, it's: how do we push their critical thinking skills when they can have a beautifully written something in 30 seconds?
Yeah, yeah.
Or, and this is the second point.
Nice lead-in to my second point, by the way.
I still am a big fan of this kind of 80% mindset for anything complex at all.
AI is not going to produce a 100% great document, and it's not going to be able to do that for a long time because it lacks the context, but it can get it 80% of the way there for a lot of things.
And then our job becomes how do we use our critical thinking skills to contextualize this to the particular problem, to the particular individuals we're trying to reach?
And whatever the communication is, the particular question we're trying to answer, how do we put that polish around it?
And so I think that's the way we want to really try to direct students is to get that 80% mindset.
The other thing, and you wrote this in the email, I'm going to... you mind if I quote you?
Quote me away, Craig.
I drag many students to the finish line of learning.
The finish line of learning.
I feel like I'm going to read this differently.
I drag many students to the finish line of learning.
If left to their own devices, they would do the minimal possible get the B and move on.
And I think that's right.
And I am not denigrating current students because I did the same thing in more undergrad classes than I'd like to admit.
If "get the B and move on" is what you want to do, you're not going to use AI very effectively.
And our job as faculty members is to make sure that they can't use AI to just get the B and move on without learning anything.
But I think the bottom line here, if I can kind of wrap it up, is AI is not the death of education, but education is going to change because of AI.
Couldn't have said it better myself, Craig.
All right, then let's move on to some interesting new tools, both of which are from Google.
Folks, if you are not keeping an eye on what Google is doing with AI, you need to be.
I'm getting the sense that Google is catching up to Claude and ChatGPT pretty quickly.
Oh, one other real quick news thing.
Twitter just released Grok.
Is that what they call it? Grok?
Yeah, to everybody.
So if you have an X account, you can now use Grok.
I played around with it.
It's kind of interesting.
I don't know what it does that I'm not doing with other tools, which is why I never paid for it.
But if you're on X, you know, check it out or not.
I don't care.
Do whatever you want.
You're adults.
Okay, so let's go to Google and Gemini.
So first of all, Gemini.
Google released what they call Gemini Deep Research.
And this is really interesting.
I wrote a little newsletter article about this and I'll put a link in the show notes.
But basically what you do is when you go into Gemini, you have a little dropdown box that shows all of the different models that are available.
They have 1.5 Pro, 1.5 Flash, 1.5 Pro with deep research, and 2.0 Flash experimental.
At least that's what I get.
And I do pay for Gemini Advanced.
So if you're not, you may not see the same things.
But if you choose 1.5 Pro with Deep Research, before it starts doing anything, it puts together a research plan and tells you: I've put together a research plan; if you need to update it in any way, let me know.
And it goes through and says, I'm going to research websites using these goals.
So the example that I've got here: I'm asking what evidence-based strategies exist for adapting assignments to minimize inappropriate generative AI use, or to leverage generative AI, at the college level.
And it basically goes through and says, I'm going to find research papers and articles about this and then about this, about this, about this.
And in this particular case it gives me five or I'm sorry, six different things it's going to do.
Then I'm going to analyze the results and then I'm going to create the report.
Then it goes through and it starts doing its thing and it's really interesting.
So it shows you on the right-hand side of the screen what it's doing, and then gives you a little progress circle over on the left-hand side.
But it basically says: go do other stuff if you want to, because it takes three to five minutes to do its thing.
And what it does is it's continuously trying to refine its analysis, going out and browsing the web, doing different things.
And then when it's done, it creates a report.
The reports tend to be pretty good.
You'll have to check it out for yourself.
They're quite good.
I think it's nicely formatted.
You can check it out, revise it if you need to, but it's got a little button that says Open in Google Docs and unsurprisingly, when you click on that, it creates a Google Doc and opens it up for you.
So remember we were talking about the 80%?
So now you've got a good chunk of your work done.
You can go in, change the formatting, change up the language a little bit, you know, delete some stuff, add some stuff, whatever it is you need to do.
And then you've got a report.
In the newsletter article, I've got all of this laid out with a lot of screenshots and a link to the doc that it created.
And it's really amazing.
Good news, bad news.
I mean, I can see this being insanely useful.
Students are going to be using it to cheat.
So what do you think?
I'm going to comment on what you said at the beginning and then say what I think about cheating.
One is, I think this is awesome what Google's doing, because they're leveraging their power of web search with what they've done for years with document editing, and now bringing AI into the equation. I was actually surprised Google was as far behind some of these other companies as it was in what they're doing.
But I think what Google wanted to do is they wanted to get it right because they already had goodwill in the marketplace, and if they made a mistake, it would have hurt a lot of their other products.
So seeing Google in this space with some really great products, I think, is going to be a good thing.
As far as cheating is concerned, I think that's where we as instructors have to change how we do things.
So the creation of a document is not the final output of a class.
It used to be that writing that report was exactly what a student needed to learn how to do to culminate a learning experience.
Well, now, if I can, with the right prompts, get Google to create that document for me in a matter of five minutes, that's 80% of the way there.
What does learning look like at that point?
And if we know students are going to do that, if that's going to be part of what students are going to do, because why wouldn't you, then what can we do then?
Right.
So now we have taken something that used to be hard to get students to be able to do, and we put our focus on getting them there.
Now students can do that easily.
Where can we push them?
Where can we take them?
And what does that learning experience begin to look like?
Right.
This is where we can use AI to really help them enhance their critical thinking abilities.
So I'm just kind of brainstorming a little bit here, but all right, we give them an assignment, tell them to use AI, even tell them to use Gemini Deep Research.
They have to show us the report that Gemini came up with.
Then they have to go in and critique that report, edit it, make comments on it, do whatever might make sense in the context of that class, and also show us that. What that does is make them an intelligent consumer of what AI puts out, instead of just a passive receptacle for it.
And I think that's the direction we're going to have to go.
Yeah.
So another thing that I've heard from a colleague who's done this, and I thought it was brilliant: they took what students wrote, used an AI tool to make a quiz from the document the students produced, and then had the students answer the quiz.
And students that didn't do well on the quiz didn't get a grade on that assignment, because, at least the presumption was, they didn't do the work, didn't critically assess it, and didn't know what was in it.
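If you want to experiment with that quiz idea, here's a rough sketch of the mechanics. The OpenAI client, the model name, and the prompt wording are stand-ins; any chat-capable LLM would work the same way:

```python
# Sketch: generate a short quiz from a student's submitted document, then have
# the student answer it to verify they actually know their own work.
# Assumptions: OpenAI's official Python client; "gpt-4o" stands in for
# whichever model you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def quiz_from_submission(document_text: str, num_questions: int = 5) -> str:
    prompt = (
        f"Read the report below and write {num_questions} short-answer "
        "questions that only someone who understood the report could answer. "
        "Do not include the answers.\n\n--- REPORT ---\n" + document_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage: feed in the student's submission and hand them the questions.
with open("student_report.txt") as f:
    print(quiz_from_submission(f.read()))
```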
To quote Scooby-Doo: ruh-roh.
Yeah.
Whoops. That's brilliant.
We need to have them on.
Now, that's really a cool idea.
But I think we need to be willing to try experiments like that.
Some are going to work, some aren't going to work, but we'll learn along the way.
And what I'll say, Craig, and this goes back to "don't be a silo": as you are trying things, share them with your colleagues.
Because getting these conversations going, you know, you might have tried something and it failed, and they might have tried something and it failed.
But if you take what worked from both of your things and put this third new thing together, that might be the magic sauce that gets students where you want them at the finish line.
Yep, it's a great point.
Great point.
So, speaking of Gemini, we have talked about Google's Notebook LM before, and if you haven't tried it, listeners, you should.
It's pretty amazing.
So very quickly, NotebookLM allows you to upload a bunch of documents and links to websites, and then ask questions or create things based on those documents.
And so this is a form of retrieval-augmented generation, RAG, where basically the model has a lot of stuff from its underlying training data, but it also takes and processes data from the resources you give it in order to answer whatever you ask it.
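For the curious, here's a toy sketch of the retrieval half of RAG. TF-IDF similarity stands in for the learned embeddings a system like NotebookLM presumably uses, and the final LLM call is left abstract:

```python
# Toy sketch of retrieval-augmented generation (RAG):
# retrieve the most relevant uploaded source, then stuff it into the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Confabulation describes AI outputs built from real fragments recombined.",
    "Retrieval-augmented generation grounds answers in uploaded sources.",
    "Audio overviews turn source documents into a two-host podcast.",
]
question = "How does NotebookLM ground its answers?"

# 1. Retrieve: rank the uploaded sources by similarity to the question.
vectorizer = TfidfVectorizer().fit(documents + [question])
scores = cosine_similarity(
    vectorizer.transform([question]), vectorizer.transform(documents)
).ravel()
top_doc = documents[scores.argmax()]

# 2. Augment: put the retrieved text into the prompt.
prompt = f"Answer using only this context:\n{top_doc}\n\nQuestion: {question}"

# 3. Generate: hand the augmented prompt to whatever LLM you're using.
print(prompt)
```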
One of the coolest things about NotebookLM is they have what they called audio overviews.
And basically they are NPR style podcasts with a host and a co host.
And they're pretty amazing.
They're almost indistinguishable from two humans.
So that's always been really cool.
Just recently, they rolled out a pretty substantial update.
The website looks different now.
It's laid out differently.
None of that's all that important.
But what they've done is they now let you interact with the audio overview.
I'll put a link to a little video I did in the show notes.
When you generate the audio overview and then you start to play it in the window, there's a little thing that's a little button that says join.
You click on that button and you join the conversation.
One of the hosts will say, oh, it looks like a caller wants to join our conversation or something like that.
And then you ask questions and I'll play a little clip from this.
I'll edit that in as promised.
Here's a little clip from NotebookLM's audio overview where I interrupted the conversation.
I was recording off of my computer speaker, so the sound isn't great, but I think it's good enough to give you the idea.
Welcome to the deep dive.
Today we're diving into IT professional happiness.
Happiness, huh?
Yeah.
What motivates them?
What makes them stay in a job?
What makes them tick?
Sounds like we're searching for the secret sauce for happy IT folks.
Kind of.
Oh, hey, our listener wants to join in.
Hey, you mentioned happiness, but I'm really curious about a number of outcomes, especially how leadership styles, different leadership styles affect those outcomes.
Not just happiness, but turnover, job satisfaction, that sort of thing.
That's a great point.
And it's definitely something we'll dig into.
Yeah, it's not just about overall happiness, but how things like leadership impact specific outcomes.
We've got a study that looks at leadership styles specifically.
Absolutely.
It's a key part of the puzzle.
So we'll unpack that and relate it to turnover and job satisfaction for sure.
And we can tie it all back to what makes IT professionals tick in general.
Okay, so we've got three.
All right, that's the end of the clip, but it's really phenomenal.
So basically, right there in midstream, you can join this conversation.
So it's very cool.
Have you played with it yet?
I've seen your video, and it really got my thought process going, seeing what it can do.
And one of the frustrating things when I've played with the audio overviews in the past is that you never quite know what it's going to pick up on from the source material that you give it.
And you're always going, well, why didn't they talk about this?
Or I wonder what they'd have to say about that.
And now it lets you drive that conversation.
So say I wanted to put together all the great writings of Craig Van Slyke and have it, you know, create an audio overview of them to put me to sleep at night.
But I knew it didn't get into one particular paper, one particular aspect of something that you had done.
I could ask it, and it would go, oh, yeah, we were going to get there.
Here it is.
And it could really drive to some of those places that the original algorithms didn't pick up on of what it should put in there.
So I find this change potentially to be a lot more helpful in making sure that the audio overview focuses on what you want it to and not just what an algorithm decided was important.
Yep, absolutely.
Yeah.
And I think the other thing I want to add here is that Google is really pushing the envelope on this, and they do it kind of behind the scenes.
I didn't realize this update had come out until I went and used NotebookLM to get some ideas for my doctoral seminar.
I was loading the readings in for another week, and it was just different.
So it's very cool.
Check it out.
Like I said, there'll be a link to a very low tech loom video.
It's not anything polished, but it'll give you the idea.
And if you haven't checked out NotebookLM, all the cool kids are.
You should too.
All right, we've covered a lot of ground today, Rob.
Anything else?
No, I think we've touched on a lot.
I'll wrap up with wishing everyone a phenomenal 2025.
I hope this new year is great for everyone, and I look forward to everything that's going to happen in the world of generative AI this year and look forward to talking with Craig about some of the cool things that are possible because of it.
That's right.
And we don't know what it's going to be, but it's going to be something.
So we would like to ask for your help.
If you would, please share this episode and this podcast with your friends and colleagues.
The easiest way to do that is to send them to aigoestocollege.com/follow, and there'll be links for all kinds of podcast players there.
And I would like to close with echoing Rob's wishes for a fantastic 2025.
All right, talk to you all next time.
Thanks.