Transcript
1
00:00:14,035 --> 00:00:17,870
Welcome to AI Goes to College, the podcast that helps higher education
2
00:00:17,930 --> 00:00:21,550
professionals navigate the changes brought on by generative AI.
3
00:00:22,010 --> 00:00:24,725
I'm your host, doctor Craig Van Slyke.
4
00:00:25,664 --> 00:00:29,285
The podcast is a companion to the AI Goes to College newsletter.
5
00:00:29,824 --> 00:00:33,510
You can sign up for the newsletter at ai goes to college dot com
6
00:00:33,969 --> 00:00:37,649
slash newsletter. Welcome to episode 4 of
7
00:00:37,649 --> 00:00:41,465
AI Goes to College. In this episode, I
8
00:00:41,465 --> 00:00:44,845
talk about generative AI's paywall problem,
9
00:00:45,305 --> 00:00:49,020
Anthropic's release of some excellent new Claude models that
10
00:00:49,260 --> 00:00:52,940
actually beat GPT, Google's
11
00:00:52,940 --> 00:00:56,620
bad week, and why generative AI doesn't
12
00:00:56,620 --> 00:00:57,520
follow length instructions, chain of thought
13
00:01:06,040 --> 00:01:09,640
versus few shot prompting and the best $40 you can
14
00:01:09,640 --> 00:01:12,940
spend on generative AI. It is not what you expect.
15
00:01:13,755 --> 00:01:17,595
I also wanted to give a shout out to Rob Crossler. If you haven't checked
16
00:01:17,595 --> 00:01:21,115
out his interview on AI Goes to College, you ought to. It's very
17
00:01:21,115 --> 00:01:24,780
interesting. Rob is a smart guy. Before we
18
00:01:24,780 --> 00:01:28,460
get into the main content though, I wanna thank Grammar Nut for catching
19
00:01:28,460 --> 00:01:31,920
a small typo on the AI Goes to College website.
20
00:01:32,185 --> 00:01:35,885
It's really great that some of the readers and listeners are so sharp-eyed.
21
00:01:36,025 --> 00:01:39,625
I very much appreciate it. So here's my
22
00:01:39,625 --> 00:01:43,440
rant of the week. Large language models, as some of
23
00:01:43,440 --> 00:01:47,280
you probably know, are trained on data, and data
24
00:01:47,280 --> 00:01:50,785
that aren't included in the training data aren't reflected in the output.
25
00:01:51,585 --> 00:01:55,425
This means that biased data on the input side is reflected in biased
26
00:01:55,425 --> 00:01:59,125
output. So none of this is groundbreaking.
27
00:02:00,065 --> 00:02:03,490
We've talked about this before. You've probably talked about this before with
28
00:02:03,490 --> 00:02:07,250
others. Demographic based bias
29
00:02:07,250 --> 00:02:10,965
is well known and it's a serious ethical issue related to generative
30
00:02:11,025 --> 00:02:14,865
AI. But when I was playing around over the last couple of
31
00:02:14,865 --> 00:02:18,620
weeks, it occurred to me that biased training data results in a different
32
00:02:18,620 --> 00:02:22,459
problem for academic researchers. I'm speculating here
33
00:02:22,459 --> 00:02:26,135
because I really don't know exactly what data the various large language
34
00:02:26,135 --> 00:02:29,975
models are trained on, but it seems to me that the training
35
00:02:29,975 --> 00:02:33,655
data may underrepresent articles from top academic journals,
36
00:02:33,655 --> 00:02:37,420
which is a huge problem. And here's why.
37
00:02:38,120 --> 00:02:41,740
A lot of top journals are behind paywalls of various sorts.
38
00:02:42,255 --> 00:02:45,555
For example, if I want to access a recent article
39
00:02:45,855 --> 00:02:49,235
from MIS Quarterly, I either have to get it through
40
00:02:49,700 --> 00:02:53,300
the Association for Information Systems or maybe through my
41
00:02:53,300 --> 00:02:57,060
library. And a lot of top journals are like
42
00:02:57,060 --> 00:03:00,834
that. I'm not sure about other fields, but that's certainly the case in
43
00:03:00,834 --> 00:03:04,594
my field and I suspect it's the case in most fields. You know,
44
00:03:04,594 --> 00:03:08,280
sometimes we can get lucky and an article's available through Google Scholar
45
00:03:08,340 --> 00:03:12,019
or some other non paywalled repository like
46
00:03:12,019 --> 00:03:15,845
ResearchGate or something like that. And eventually many
47
00:03:15,845 --> 00:03:19,364
of these articles make their way around the paywalls to become more freely
48
00:03:19,364 --> 00:03:23,180
available. But if those articles weren't available as part
49
00:03:23,180 --> 00:03:26,700
of the training data, then their
50
00:03:26,700 --> 00:03:30,305
findings may not be reflected when you interact with that
51
00:03:30,305 --> 00:03:32,965
large language model for your research.
52
00:03:34,225 --> 00:03:37,365
Now the training data may include abstracts of or citations to
53
00:03:37,665 --> 00:03:41,159
those paywalled articles, but
54
00:03:41,379 --> 00:03:45,219
those are limited. The abstract only contains so much
55
00:03:45,219 --> 00:03:49,065
information and so you're really not going to get the full
56
00:03:49,065 --> 00:03:52,745
representation of the article. And since many of our top
57
00:03:52,745 --> 00:03:56,440
journals are behind paywalls, you may miss out
58
00:03:56,440 --> 00:04:00,280
on articles that are in top journals. That's kind of been
59
00:04:00,280 --> 00:04:03,965
my experience so far with Perplexity. I think this
60
00:04:03,965 --> 00:04:07,645
problem is even bigger for disciplines that rely more on books than journal
61
00:04:07,645 --> 00:04:11,485
articles. Now, I'm not sure about the extent of the problem, but
62
00:04:11,485 --> 00:04:15,030
it's something to be aware of. For now,
63
00:04:15,650 --> 00:04:19,410
I advise caution. Look, generative AI
64
00:04:19,410 --> 00:04:23,235
is great. I'm a big believer in it, but it really
65
00:04:23,235 --> 00:04:26,294
isn't a shortcut for the hard work of scholarship.
66
00:04:27,074 --> 00:04:29,495
You still have to put in human thought and effort.
67
00:04:31,860 --> 00:04:35,699
So the big news item over the last couple of weeks is Anthropic releasing
68
00:04:35,699 --> 00:04:39,435
some new Claude models. This is a little bit
69
00:04:39,435 --> 00:04:43,035
complicated, so I'll refer you to the newsletter which is available
70
00:04:43,035 --> 00:04:44,575
at ai goes to college.com.
71
00:04:46,650 --> 00:04:50,270
But here's the lowdown. Anthropic,
72
00:04:51,370 --> 00:04:54,685
who's a competitor to OpenAI and produces
73
00:04:54,824 --> 00:04:58,185
competitors to ChatGPT, released 3 new
74
00:04:58,185 --> 00:05:02,025
models: Claude 3 Haiku, Claude 3 Sonnet,
75
00:05:02,025 --> 00:05:05,610
and Claude 3 Opus. I don't think Haiku is available
76
00:05:05,610 --> 00:05:08,670
yet, but Sonnet and Opus are.
77
00:05:09,450 --> 00:05:11,755
I kind of like the names, since they're loosely
78
00:05:20,439 --> 00:05:24,120
related to each model's relative size and capability, with Opus being the one that's the most capable. Both Sonnet and
79
00:05:24,120 --> 00:05:27,800
Opus have 200,000 token context
80
00:05:27,800 --> 00:05:31,505
windows. So the context window is how much data the model
81
00:05:31,505 --> 00:05:35,345
can consider when it's creating its output. You can think of it as
82
00:05:35,345 --> 00:05:39,070
the model's memory capacity. Roughly, this is very
83
00:05:39,070 --> 00:05:42,910
roughly, a 200 k context window should be
84
00:05:42,910 --> 00:05:46,705
able to handle 300 to 400 pages of text. Now there are a lot of
85
00:05:46,705 --> 00:05:50,545
variables there. Opus can even
86
00:05:50,545 --> 00:05:54,370
go further for some uses. According to Anthropic, it
87
00:05:54,370 --> 00:05:58,210
can have a context window of up to 1,000,000 tokens,
88
00:05:58,210 --> 00:06:01,845
which is huge, but right now that's reserved for special
89
00:06:01,845 --> 00:06:05,525
cases. You can get the 200,000 context
90
00:06:05,525 --> 00:06:08,664
window for Claude 3 directly from Anthropic,
91
00:06:09,530 --> 00:06:12,570
or you can get it through Poe, and you know I'm a big fan of
92
00:06:12,570 --> 00:06:16,270
poe.com. But in Poe at least, Claude
93
00:06:16,705 --> 00:06:20,544
3 Opus and Claude 3 Opus 200 k are different
94
00:06:20,544 --> 00:06:24,220
models. Claude 3 Opus is a smaller
95
00:06:24,220 --> 00:06:27,740
model than Claude 3 Opus 200 k. Yeah. It gets
96
00:06:27,740 --> 00:06:31,099
confusing, but if you're trying to deal with large
97
00:06:31,099 --> 00:06:34,485
documents, either just default to the 200 k version
98
00:06:35,105 --> 00:06:38,545
or try the non 200 k version and see how things go. If they don't
99
00:06:38,545 --> 00:06:42,180
go well, then try the 200 k version. There are some
100
00:06:42,180 --> 00:06:45,860
limits through Poe on how many interactions you can have with Claude in a
101
00:06:45,860 --> 00:06:49,620
24 hour period, but they're pretty liberal, so I don't
102
00:06:49,620 --> 00:06:53,415
think it's going to be a big deal. I don't think you're
103
00:06:53,415 --> 00:06:57,195
going to run into the usage limits for most use cases.
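By the way, if you want to see where that rough 300 to 400 page estimate from earlier comes from, here's a quick back of the envelope sketch in Python. The words-per-token and words-per-page figures are common rules of thumb, not numbers from Anthropic, so treat the result as a ballpark only.

```python
# Rough arithmetic behind the "300 to 400 pages" estimate for a 200k
# context window. Both conversion factors below are rules of thumb,
# not official numbers, so the answer is only a ballpark.
context_window_tokens = 200_000
words_per_token = 0.75   # a token is roughly three-quarters of an English word
words_per_page = 400     # a typical double-spaced manuscript page

words = context_window_tokens * words_per_token   # about 150,000 words
pages = words / words_per_page                    # about 375 pages

print(f"A {context_window_tokens:,}-token window holds roughly {pages:,.0f} pages")
```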
104
00:06:58,215 --> 00:07:01,760
What is a big deal is this ever enlarging context
105
00:07:01,820 --> 00:07:05,280
window. That just opens up a lot of interesting
106
00:07:05,420 --> 00:07:09,120
possibilities. For example, Claude 3
107
00:07:09,945 --> 00:07:13,625
should be able to summarize and synthesize across multiple
108
00:07:13,625 --> 00:07:17,385
journal articles in a single session. I haven't tested this out
109
00:07:17,385 --> 00:07:20,360
yet, but I'm going to soon and I will certainly let you know how it
110
00:07:20,360 --> 00:07:24,199
goes. The other big thing about Claude 3 is that
111
00:07:24,199 --> 00:07:27,715
according to Anthropic, Claude 3 outperforms
112
00:07:27,935 --> 00:07:31,775
GPT 4 across the board. Now if you go to the
113
00:07:31,775 --> 00:07:35,410
newsletter there's a nice little table, it's
114
00:07:35,410 --> 00:07:39,190
from Anthropic though, that shows a bunch of popular
115
00:07:39,330 --> 00:07:43,030
benchmarks and how well Claude
116
00:07:43,515 --> 00:07:47,275
3 and the various models and GPT 4 and
117
00:07:47,275 --> 00:07:51,020
Gemini Ultra and Pro did. And
118
00:07:51,020 --> 00:07:54,700
in every single instance, Claude 3 was
119
00:07:54,700 --> 00:07:58,425
better. Now who knows? You know, is Anthropic
120
00:07:58,585 --> 00:08:02,345
cherry picking here? Maybe, but even if they are, the
121
00:08:02,345 --> 00:08:06,160
performance is quite intriguing. And I think
122
00:08:06,160 --> 00:08:09,699
it bodes well for the future of these models.
123
00:08:10,240 --> 00:08:13,860
Competition is gonna push everybody. Google and Anthropic and
124
00:08:13,974 --> 00:08:17,495
OpenAI, they're all going to push each other and whenever Apple gets into the
125
00:08:17,495 --> 00:08:20,854
game and Meta and on and on and on. So I think the
126
00:08:20,854 --> 00:08:24,389
competition is good. It's gonna help us push the boundaries of what's
127
00:08:24,389 --> 00:08:28,229
possible with these models. Okay. Let's switch
128
00:08:28,229 --> 00:08:31,210
to another little bit of kind of amusing but
129
00:08:31,935 --> 00:08:35,315
insightful news. Google had a bad week.
130
00:08:35,615 --> 00:08:38,755
This is a couple of weeks ago by the time you listen to this.
131
00:08:39,520 --> 00:08:43,280
So the basic problem was that Gemini was creating some, shall we
132
00:08:43,280 --> 00:08:46,800
say, interesting images. America's founding fathers as
133
00:08:46,800 --> 00:08:50,395
African Americans, the pope as a woman, and there were some
134
00:08:50,395 --> 00:08:54,015
others. Of course, the world being what the world is,
135
00:08:54,075 --> 00:08:57,839
there was a bunch of outrage over this. Although I
136
00:08:57,839 --> 00:09:01,680
kind of chuckled and I think a lot of other people did. So
137
00:09:01,680 --> 00:09:05,495
according to Google, the problem came when
138
00:09:05,495 --> 00:09:09,175
they were fine tuning Gemini's image generation model, which is
139
00:09:09,175 --> 00:09:12,720
called Imagen 2. So the fine tuning was
140
00:09:12,720 --> 00:09:15,540
intended to prevent the tool from, and I'm quoting here,
141
00:09:16,079 --> 00:09:19,920
creating violent or sexually explicit images or depictions
142
00:09:19,920 --> 00:09:23,745
of real people. Engineers were also trying to
143
00:09:23,745 --> 00:09:27,505
ensure gender and ethnic diversity, but Google
144
00:09:27,505 --> 00:09:31,260
seems to have overcorrected, which resulted in some of these curious
145
00:09:31,320 --> 00:09:35,080
and historically inaccurate images. I think we need to get
146
00:09:35,080 --> 00:09:38,605
used to this. Widespread use of generative AI is
147
00:09:38,605 --> 00:09:42,045
still very new and we're still trying to figure out how to
148
00:09:42,045 --> 00:09:45,800
implement appropriate guardrails. In fact, there's not even
149
00:09:45,800 --> 00:09:49,480
widespread agreement on what those guardrails ought to be. So
150
00:09:49,480 --> 00:09:53,260
we're gonna continue to see these sorts of problems. There'll be a problem,
151
00:09:53,825 --> 00:09:57,585
there'll be an overcorrection. These are going to go back and forth, swinging
152
00:09:57,585 --> 00:10:01,345
like a pendulum until we eventually find the equilibrium and the right
153
00:10:01,345 --> 00:10:04,570
balance between freedom of use and freedom from harm.
154
00:10:05,190 --> 00:10:08,710
I'm pretty confident that we'll get there eventually, but it's
155
00:10:08,710 --> 00:10:12,404
gonna take a while. So when you see these kinds of problems, it's
156
00:10:12,404 --> 00:10:16,005
good to be aware of them, but don't get unduly upset thinking that
157
00:10:16,005 --> 00:10:19,820
there's some right wing or left wing conspiracy going on. I
158
00:10:19,820 --> 00:10:23,520
think most of it is just honest engineers trying to find the right balance
159
00:10:23,580 --> 00:10:26,960
between freedom of use and freedom from harm.
160
00:10:27,340 --> 00:10:30,965
So in the meantime, I think one of the big
161
00:10:30,965 --> 00:10:34,265
messages that I wanna take away and I want you to take away from this
162
00:10:34,805 --> 00:10:38,165
is be careful of relying on generative AI for anything important or
163
00:10:38,165 --> 00:10:41,720
anything that might be seen by the public
164
00:10:42,339 --> 00:10:46,100
unless there's human review. The human in the loop is
165
00:10:46,100 --> 00:10:49,635
critical especially at this stage of generative AI.
166
00:10:50,175 --> 00:10:53,875
So make a human check part of the process whenever you use generative
167
00:10:54,015 --> 00:10:57,660
AI for anything important. Maybe it wasn't practical
168
00:10:57,660 --> 00:11:01,500
for Google, but for most of us, it will be. If you
169
00:11:01,500 --> 00:11:05,120
want some more details, there are a couple of links to articles about this whole
170
00:11:05,355 --> 00:11:09,115
brouhaha in the newsletter. Again, the newsletter's
171
00:11:09,115 --> 00:11:12,415
available at aigoestocollege.com. You really should subscribe.
172
00:11:14,000 --> 00:11:17,839
Okay. So here's my first tip of the week. And this
173
00:11:17,839 --> 00:11:21,360
comes from listener Ralph Estep, who's also a friend of
174
00:11:21,360 --> 00:11:25,074
mine, who sent me an email asking me why generative AI
175
00:11:25,074 --> 00:11:28,295
is so bad at following instructions about length.
176
00:11:28,915 --> 00:11:32,580
By the way, Ralph has a really good daily podcast that focuses
177
00:11:32,580 --> 00:11:35,700
on financial health. You ought to check it out. It's
178
00:11:36,100 --> 00:11:39,940
available at askralphpodcast.com. It really
179
00:11:39,940 --> 00:11:43,365
is very good, and we all need to keep an eye out on our financial
180
00:11:43,365 --> 00:11:47,045
health, especially given inflation and some of the uncertainties in the
181
00:11:47,045 --> 00:11:50,600
world. So you have probably experienced the length problem of
182
00:11:50,600 --> 00:11:54,280
generative AI. You tell ChatGPT or Gemini or Claude or whomever to
183
00:11:54,280 --> 00:11:57,895
produce an output of 500 words, and there's no telling how
184
00:11:57,895 --> 00:12:01,595
long the output will be, but I'll bet it's not 500 words.
185
00:12:02,455 --> 00:12:06,280
When I try this, I might get 200 words, I might get 750 words. And this
186
00:12:06,280 --> 00:12:09,800
can be kind of frustrating. So I wanted to understand what
187
00:12:09,800 --> 00:12:13,595
this is all about. So I asked Gemini, why is this a
188
00:12:13,595 --> 00:12:17,355
persistent problem with generative AI tools? I
189
00:12:17,355 --> 00:12:20,795
actually kind of liked Gemini's response, so I put it
190
00:12:20,795 --> 00:12:24,590
verbatim in the newsletter. I even put a link to the conversation in
191
00:12:24,590 --> 00:12:28,430
the newsletter, so you ought to check it out. But here's the bottom
192
00:12:28,430 --> 00:12:32,154
line. What you need to do is give it a range, not a
193
00:12:32,154 --> 00:12:35,675
target. So don't say 500 words, say between 350 and
194
00:12:35,675 --> 00:12:39,420
600 words, or something like that. You can provide
195
00:12:39,420 --> 00:12:43,100
examples of writing that fits the length that you want. These can
196
00:12:43,100 --> 00:12:46,845
be templates that the AI can follow. Another good
197
00:12:46,845 --> 00:12:50,685
approach is to start small and then build up. Ask for a
198
00:12:50,685 --> 00:12:54,490
short summary first and then ask for more detail on specific
199
00:12:54,630 --> 00:12:57,990
points. This gives you more control. And
200
00:12:57,990 --> 00:13:01,670
so, how you phrase your request might also make a
201
00:13:01,670 --> 00:13:05,355
difference. And this is according to Gemini. If you
202
00:13:05,355 --> 00:13:09,035
say 'summarize this topic in about 400 words,' it might
203
00:13:09,035 --> 00:13:12,860
work better than 'write 400 words on this topic.' It's
204
00:13:12,860 --> 00:13:16,080
gonna take practice, so I
205
00:13:16,460 --> 00:13:20,220
just wouldn't rely on it ever to give me a specific number of words. But
206
00:13:20,220 --> 00:13:23,975
as you practice, you can find a way to get it closer
207
00:13:23,975 --> 00:13:27,735
and closer to the length that you want. This is kind of a good
208
00:13:27,735 --> 00:13:31,170
thing for those of us who are instructors because it's gonna
209
00:13:31,170 --> 00:13:34,770
make students actually work a little bit instead of just spitting
210
00:13:34,770 --> 00:13:38,290
out 500 word papers. Okay. Here's
211
00:13:38,290 --> 00:13:42,115
a very useful tool. There's a little bit
212
00:13:42,115 --> 00:13:45,555
of a pun there. More Useful
213
00:13:45,555 --> 00:13:49,400
Things, which comes from the creators of
214
00:13:49,400 --> 00:13:52,460
One Useful Thing, which is a newsletter I really like,
215
00:13:53,240 --> 00:13:56,665
Ethan Mollick and Lilach. I think it's
216
00:13:56,665 --> 00:14:00,504
Lilach Mollick. They've got a new website called More Useful
217
00:14:00,504 --> 00:14:04,310
Things. You can go to more useful things dot com and find it
218
00:14:04,310 --> 00:14:07,910
there. It includes an AI resources page
219
00:14:07,910 --> 00:14:11,589
that, no surprise, includes some pretty useful AI
220
00:14:11,589 --> 00:14:15,415
resources. There are 3 sections. One is a
221
00:14:15,415 --> 00:14:19,255
pre-order section for Ethan's book, Co-Intelligence: Living and
222
00:14:19,255 --> 00:14:22,880
Working with AI. I pre-ordered it, and I think it's probably gonna be pretty
223
00:14:22,880 --> 00:14:26,720
good. There's an other resources section that has some stuff
224
00:14:26,720 --> 00:14:30,535
like an AI video, not an AI video, a video
225
00:14:30,535 --> 00:14:34,154
on AI and links to some of their research
226
00:14:34,375 --> 00:14:37,894
on AI. But what I really want to talk to you about is their
227
00:14:37,894 --> 00:14:41,460
prompt library. Their prompt library
228
00:14:41,520 --> 00:14:45,200
includes instructor aids, student exercises, and some
229
00:14:45,200 --> 00:14:48,995
other stuff. The instructor aid prompts are really pretty
230
00:14:48,995 --> 00:14:52,755
good, but they are long and they are
231
00:14:52,755 --> 00:14:56,120
complex. For example, they've got one that will create a
232
00:14:56,120 --> 00:14:59,720
simulation and that prompt is over 600 words
233
00:14:59,720 --> 00:15:02,885
long. Look. There's nothing wrong with this. In fact,
234
00:15:03,365 --> 00:15:07,205
complicated prompts are often very effective, especially for
235
00:15:07,205 --> 00:15:10,805
more complicated tasks. But I want to be careful
236
00:15:10,805 --> 00:15:14,550
here. I don't want you to look at the complexity of these prompts and
237
00:15:14,550 --> 00:15:17,850
go, oh good Lord. I'm never going to be able to learn this.
238
00:15:18,445 --> 00:15:21,825
You don't have to know how to write those prompts, especially not at first.
239
00:15:22,285 --> 00:15:25,905
You can accomplish a lot with pretty simple
240
00:15:26,069 --> 00:15:29,430
prompts. So both simple and complex prompts have their
241
00:15:29,430 --> 00:15:33,129
places. You can start off simple, and everything will be fine.
242
00:15:33,415 --> 00:15:37,255
But you really ought to check out More Useful Things. Even if it just
243
00:15:37,255 --> 00:15:41,015
gives you some ideas about how generative AI can be used, it's
244
00:15:41,015 --> 00:15:44,720
worthwhile checking out if for no other reason than that.
245
00:15:45,180 --> 00:15:48,400
So check it out. Okay. On to the next topic.
246
00:15:48,940 --> 00:15:52,685
Recently I was listening to an episode of Dan Shipper's How Do You
247
00:15:52,765 --> 00:15:56,365
Use ChatGPT. It's really good. I think it's on YouTube. I
248
00:15:56,365 --> 00:16:00,205
listen to it as a podcast. Dan was interviewing Nathan
249
00:16:00,205 --> 00:16:04,050
Labenz on how he uses ChatGPT as a copilot for
250
00:16:04,050 --> 00:16:07,670
learning. The episode was very interesting. Check it out.
251
00:16:07,889 --> 00:16:11,675
But what caught my attention was a discussion of something called chain
252
00:16:11,675 --> 00:16:15,355
of thought versus few shot prompting. This is a
253
00:16:15,355 --> 00:16:19,010
little advanced, so I want you to stay with me
254
00:16:19,010 --> 00:16:22,290
here. But if it gets to be too much, just move on to the next
255
00:16:22,290 --> 00:16:26,070
segment. Few shot prompting is pretty easy to understand.
256
00:16:26,745 --> 00:16:29,885
You just follow your task description with a few examples.
257
00:16:30,745 --> 00:16:34,505
So let's say that you wanna create some open
258
00:16:34,505 --> 00:16:38,270
ended exam questions. So you give ChatGPT
259
00:16:38,330 --> 00:16:42,010
or Gemini or whomever, whomever, whatever? Is it
260
00:16:42,010 --> 00:16:45,635
whomever or whatever? Oh, that's scary. You give the
261
00:16:45,635 --> 00:16:49,154
tool of choice your prompt, say, create some open
262
00:16:49,154 --> 00:16:52,915
ended exam questions on this topic
263
00:16:52,915 --> 00:16:56,540
and give it some parameters, and then you give it 2 or 3 examples.
264
00:16:57,560 --> 00:17:01,399
Now what the AI tool will do, I'm gonna say ChatGPT here
265
00:17:01,399 --> 00:17:05,175
to just make it easy. What ChatGPT will do is it will look at
266
00:17:05,175 --> 00:17:09,015
and analyze your examples and try to create questions
267
00:17:09,015 --> 00:17:12,839
that are similar to your examples. Sometimes just giving
268
00:17:12,839 --> 00:17:16,539
a single example is really useful. They call that one shot prompting.
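If you ever use these models through code rather than a chat window, a few shot prompt is exactly the same idea, just pasted into an API call. Here's a minimal sketch assuming the OpenAI Python client; the model name and the example questions are my own illustrations, not anything from the episode.

```python
# Minimal few shot prompting sketch, assuming the OpenAI Python client
# (pip install openai) and an API key in the OPENAI_API_KEY environment
# variable. The task and examples are made up for illustration.
from openai import OpenAI

client = OpenAI()

prompt = """Create three open-ended exam questions on supply and demand
for an introductory economics course. Match the style and depth of these examples:

Example 1: Explain how a price ceiling set below the equilibrium price
affects the quantity supplied and the quantity demanded.

Example 2: A city doubles its downtown parking fees. Describe the likely
short-run and long-run effects on the demand for public transit.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model would work here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```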
269
00:17:17,240 --> 00:17:20,140
Chain of thought prompts are a lot more complicated.
270
00:17:20,954 --> 00:17:24,414
The main idea is that you ask ChatGPT
271
00:17:25,275 --> 00:17:29,115
to think aloud. So I'm gonna give you an example of a chain of
272
00:17:29,115 --> 00:17:32,140
thought prompt, and this one was produced by ChatGPT.
273
00:17:33,720 --> 00:17:37,480
Explain the concept of chain of thought prompting using the chain of thought
274
00:17:37,480 --> 00:17:41,205
approach. I thought that was clever. Begin by defining what chain
275
00:17:41,205 --> 00:17:44,885
of thought prompting is. Next, break down the process into its
276
00:17:44,885 --> 00:17:47,785
key components explaining each one step by step.
277
00:17:48,660 --> 00:17:52,500
Then illustrate how these components work together to guide an AI in
278
00:17:52,500 --> 00:17:55,560
processing and responding to complex tasks.
279
00:17:56,875 --> 00:18:00,555
Finally, conclude by summarizing the advantages of using chain of thought
280
00:18:00,555 --> 00:18:04,335
prompting in AI interactions, especially in educational
281
00:18:04,715 --> 00:18:08,510
contexts. And then the result was pretty long. I'm
282
00:18:08,510 --> 00:18:12,190
gonna have to send you to the newsletter to check that
283
00:18:12,190 --> 00:18:15,565
out, but this can be
284
00:18:15,945 --> 00:18:19,705
a good way to really get ChatGPT
285
00:18:19,705 --> 00:18:22,960
to do more complicated
286
00:18:23,179 --> 00:18:26,539
things. I don't use chain of thought prompting very
287
00:18:26,539 --> 00:18:30,059
much. I think few shot prompting works really
288
00:18:30,059 --> 00:18:33,345
well, but few shot prompts
289
00:18:33,485 --> 00:18:36,625
require knowing what good output will look like.
290
00:18:37,325 --> 00:18:40,840
If you're not sure what you want, you might consider chain of thought
291
00:18:40,840 --> 00:18:44,380
prompting. But if you're a beginner, stick with
292
00:18:44,440 --> 00:18:47,420
few shot prompting. Even one shot prompts
293
00:18:48,414 --> 00:18:51,554
really are quite useful. I use that quite a bit actually.
294
00:18:52,095 --> 00:18:55,820
Okay. So there are a couple of messages here. First, keep
295
00:18:55,820 --> 00:18:59,260
things simple when you can. Simple is often very
296
00:18:59,260 --> 00:19:02,940
effective. Second, don't be afraid to experiment with different
297
00:19:02,940 --> 00:19:06,735
approaches and even to blend approaches. So
298
00:19:06,735 --> 00:19:10,275
you can decide. Your prompts can be simple or they can be complicated.
299
00:19:11,530 --> 00:19:14,910
All right. Here's my favorite little part of this episode.
300
00:19:15,370 --> 00:19:19,070
This is, I think, the best $40 you can spend, and it's $40
301
00:19:19,210 --> 00:19:22,785
a year, on your generative AI use and productivity.
302
00:19:24,045 --> 00:19:27,760
Look. I know monthly software subscriptions are totally out of
303
00:19:27,760 --> 00:19:31,360
hand. I don't even wanna know how much I'm spending at 5,
304
00:19:31,360 --> 00:19:35,015
10, 15, $20 a pop every month. But
305
00:19:35,015 --> 00:19:38,795
despite this, I recently added a $40 per year subscription,
306
00:19:39,655 --> 00:19:43,280
and it's already proven to be one of my best software investments ever.
307
00:19:43,840 --> 00:19:46,660
Alright. So what is this great investment?
308
00:19:47,440 --> 00:19:50,880
Well, it's for a text expander, and I wanna give a shout out here to
309
00:19:50,880 --> 00:19:54,485
Dave Jackson of the School of Podcasting, who talked about this on one of his
310
00:19:54,485 --> 00:19:58,005
episodes. And so a text expander just
311
00:19:58,005 --> 00:20:00,825
replaces a short bit of text with a longer bit.
312
00:20:01,870 --> 00:20:05,470
For example, if I want to type the web address for AI Goes to College,
313
00:20:05,470 --> 00:20:09,275
I type semicolon uaig. U is
314
00:20:09,275 --> 00:20:12,875
short for URL. And the text expander, it just gives the full
315
00:20:12,875 --> 00:20:16,555
address. The semicolon here is used to indicate that
316
00:20:16,555 --> 00:20:19,940
what follows is gonna be or could be an abbreviation for a text
317
00:20:19,940 --> 00:20:23,780
snippet. The semicolon works well because it's usually followed by
318
00:20:23,780 --> 00:20:27,165
a space rather than characters, but you could really use anything you
319
00:20:27,165 --> 00:20:30,625
wanted. Now this doesn't save me a lot of time,
320
00:20:31,005 --> 00:20:34,685
but it saves me 5 or 6 seconds every time I wanna
321
00:20:34,685 --> 00:20:37,820
type the AI goes to college website.
322
00:20:39,320 --> 00:20:43,160
And it's long. You know, it's got the HTTPS, colon, etcetera,
323
00:20:43,160 --> 00:20:46,135
etcetera, etcetera. It just takes a while.
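If it helps to see the idea, here's a toy sketch of what a text expander does conceptually: short abbreviations map to longer snippets. Real tools like TextExpander watch your keystrokes across the whole system; this little Python function only expands abbreviations in a string you hand it, and the snippet text is just my example.

```python
# Toy illustration of the text expander idea: abbreviations map to full
# snippets. This is only a concept sketch; real expanders hook into the
# operating system and expand as you type in any application.
SNIPPETS = {
    ";uaig": "https://aigoestocollege.com",            # the site address
    ";bio": "Craig Van Slyke is ... (full bio here)",  # placeholder bio text
}

def expand(text: str) -> str:
    """Replace any known abbreviation in text with its full snippet."""
    for abbreviation, full_text in SNIPPETS.items():
        text = text.replace(abbreviation, full_text)
    return text

print(expand("You can sign up at ;uaig"))
```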
324
00:20:46,595 --> 00:20:49,975
I have to give people my biography
325
00:20:51,235 --> 00:20:54,880
periodically. Matter of fact, once a month or so somebody asks for
326
00:20:54,880 --> 00:20:58,560
it. So normally I go find the Word file and attach it to an
327
00:20:58,560 --> 00:21:02,355
email or copy and paste it into the message. Now I just type semicolon
328
00:21:02,355 --> 00:21:06,195
bio and my bio pops up. And I
329
00:21:06,195 --> 00:21:09,955
use this for student advising. I use this for grading.
330
00:21:09,955 --> 00:21:13,610
If you're a professor, you have to grade a lot
331
00:21:13,610 --> 00:21:17,390
of projects, that sort of thing. A text expander
332
00:21:17,530 --> 00:21:21,054
will change your life when you have to grade. My spring
333
00:21:21,054 --> 00:21:24,835
class has 85 students in it, and I'll grade 85
334
00:21:24,975 --> 00:21:28,355
projects twice. What is that, 170
335
00:21:28,575 --> 00:21:32,120
projects? And a lot of the feedback will be
336
00:21:32,120 --> 00:21:35,960
exactly the same. You know, they forget to use the thousands separator,
337
00:21:35,960 --> 00:21:39,795
the comma, or their spreadsheets aren't formatted
338
00:21:39,795 --> 00:21:43,095
well, that sort of thing. Well, now I can just in a few characters
339
00:21:43,955 --> 00:21:47,790
pop in the feedback and the number of points I'm taking off for that.
340
00:21:48,890 --> 00:21:51,935
So what does this have to do with generative AI? Well, as you start
341
00:22:00,575 --> 00:22:04,380
using generative AI, you'll find you type the same bits of text over and over. One of mine is, what do you think about this? Now I just type semicolon w d y
342
00:22:04,380 --> 00:22:08,000
t. I have one for please critique
343
00:22:08,060 --> 00:22:11,515
this text. I have one for this is really
344
00:22:11,515 --> 00:22:15,195
lazy. I have one for thank you. And you'll
345
00:22:15,195 --> 00:22:18,860
find more and more and more uses for a text expander once you get
346
00:22:18,860 --> 00:22:22,380
into it. So I use generative AI for a lot of
347
00:22:22,380 --> 00:22:26,140
tasks, so this helps a lot. But it also helps when I
348
00:22:26,140 --> 00:22:29,945
need to provide some context. So for example, I try
349
00:22:29,945 --> 00:22:33,705
to tell generative AI whether I'm working on something related to my teaching or
350
00:22:33,705 --> 00:22:37,539
one of my podcasts or this newsletter. When I'm working
351
00:22:37,539 --> 00:22:41,299
on the newsletter, I have a whole blurb. It's, I
352
00:22:41,299 --> 00:22:45,115
don't know, 5 or 6 sentences long. Now, I just type
353
00:22:45,115 --> 00:22:48,735
in an abbreviation. That's it. So it's
354
00:22:48,875 --> 00:22:52,549
semicolon daig, which just means
355
00:22:53,570 --> 00:22:56,870
description of AI Goes to College, and this whole
356
00:22:57,570 --> 00:23:01,404
bit of context pops up. So if you're not using a
357
00:23:01,404 --> 00:23:05,245
text expander, you really ought to consider it. I use one called, you're
358
00:23:05,245 --> 00:23:08,860
gonna love this, TextExpander. That's the one that
359
00:23:08,860 --> 00:23:12,000
costs about $40 a year. I use it because it's cross platform.
360
00:23:12,539 --> 00:23:15,260
It'll work on a Mac. It'll work on a PC. I think it may even
361
00:23:15,260 --> 00:23:18,215
work on smartphones, although I haven't tried that yet.
362
00:23:19,315 --> 00:23:22,995
So I really would encourage you to consider making that
363
00:23:22,995 --> 00:23:26,700
investment, not just for generative AI, but for your
364
00:23:26,700 --> 00:23:30,460
use in general. Okay. All right. Last thing I
365
00:23:30,460 --> 00:23:32,559
wanna talk about is the interview I had
366
00:23:42,809 --> 00:23:45,389
with Rob Crossler on the AI Goes to College podcast, which is surprisingly
367
00:23:46,250 --> 00:23:49,929
available at ai goes to college.com. We talked a
368
00:23:49,929 --> 00:23:53,034
lot about a lot of different things. Rob is a really smart guy. He's got
369
00:23:53,034 --> 00:23:56,815
a lot of experience. He's a department chair. He's a fantastic
370
00:23:56,955 --> 00:24:00,475
researcher. So he's used generative AI in a lot of
371
00:24:00,475 --> 00:24:04,220
different contexts. We talked about how he's using
372
00:24:04,220 --> 00:24:07,279
AI to create more creative assignments and to generate questions,
373
00:24:08,140 --> 00:24:11,935
how he's helping students learn to use AI tools to
374
00:24:11,935 --> 00:24:15,695
explore and understand important concepts. We talked about the
375
00:24:15,695 --> 00:24:19,535
importance of being willing to experiment and fail with
376
00:24:19,535 --> 00:24:23,270
generative AI. We discussed
377
00:24:23,490 --> 00:24:27,270
why it's important to help students become confident but critical
378
00:24:27,330 --> 00:24:31,010
users of these fantastic new tools, and we talked about a lot of other
379
00:24:31,010 --> 00:24:34,455
things. So go to ai goes to college.com/rob,
380
00:24:36,035 --> 00:24:39,635
r o b, and you can check out the entire interview. I'd
381
00:24:39,635 --> 00:24:43,380
love to hear what you think about it. You can email me at
382
00:24:43,380 --> 00:24:47,140
craig@aigoestocollege.com. Let me know if you've got
383
00:24:47,140 --> 00:24:50,945
any ideas for future episodes or if there's something you wanna see in
384
00:24:50,945 --> 00:24:54,625
the newsletter, and you might get featured. Alright.
385
00:24:54,625 --> 00:24:58,300
That's it for this time. Thank you. Thanks for listening
386
00:24:58,300 --> 00:25:02,140
to AI Goes to College. If you found this episode useful, you'll love
387
00:25:02,140 --> 00:25:05,815
the AI Goes to College newsletter. Each edition brings you
388
00:25:05,815 --> 00:25:09,575
useful tips, news, and insights that you can use to help you figure out what
389
00:25:09,575 --> 00:25:13,335
in the world is going on with generative AI and how it's affecting higher
390
00:25:13,335 --> 00:25:16,960
ed. Just go to ai goes to college.com to sign
391
00:25:16,960 --> 00:25:20,799
up. I won't try to sell you anything, and I won't spam you or share
392
00:25:20,799 --> 00:25:24,345
your information with anybody else. As an incentive for
393
00:25:24,345 --> 00:25:28,025
subscribing, I'll send you the Getting Started with Generative AI
394
00:25:28,025 --> 00:25:31,830
guide. Even if you're an expert with AI, you'll find the guide
395
00:25:31,830 --> 00:25:34,330
useful for helping your less knowledgeable colleagues.