Beyond the ChatGPT Hype with Dr. Jeremy Kedziora
October 10, 2023
Hear Dr. Jeremy Kedziora, PieperPower Endowed Chair in AI at the Milwaukee School of Engineering, as he provides practical insights on the nature of Artificial Intelligence, its progress, and what to expect in the future, offering considerations for its impact on work, jobs, and even corporate relocation. This timely talk serves as a valuable resource for understanding the nuanced role of AI in organizational landscapes, providing a thoughtful guide for informed decision-making in a rapidly evolving technological era.
Ok
Well, thanks very much for the opportunity
to speak to you all today.
Uh, as Lon said, my name is Jeremy Kedziora.
I am the PieperPower Endowed Chair
in Artificial Intelligence at the
Milwaukee School of Engineering.
We're a small institution of about 3000 undergraduate
and graduate students focused on mastering, you know,
all aspects of the engineering disciplines,
including computer science and AI.
And, uh, so I'll give you kind
of 30 seconds of my background.
Um, I have a PhD, right?
Not in computer science, but in political science, right?
And so I was kind of a kindred spirit
with the economist gentleman
who was talking earlier this morning.
And, uh,
and that's a really unusual place to begin, right?
If you're gonna end up as a computer science
professor, right?
So I spent my time in grad school kind of doing a lot
of research on decision making
and modeling how people make choices
and doing machine learning and stuff like this.
And then after grad school, I decided not to be a professor.
Instead, I ended up working at the Central Intelligence
Agency, and I spent nine years working at the CIA,
leading various advanced analytics data science
machine learning type efforts.
After that, I spent two years at a tech startup leading a
data science team, and then I spent two years at
Northwestern Mutual leading another data science team.
And now I'm finally back in academia,
hopefully where I belong.
And what that means is that I spend my days teaching
and researching on all aspects of machine learning and,
and artificial intelligence.
So none of that is really important, right?
Beyond the extent to which it kind
of informs the way that I
think and write about artificial intelligence.
And that's what I try to do, right?
I try to bring this kind of unique background,
this very convoluted, unique background with me
whenever I'm doing science or, you know, teaching students
or giving a talk like this.
So the goal for today is for me to give you some sense of,
you know, how I think about machine learning
and artificial intelligence as a researcher
and as a scientist, right?
And then we can probably talk a little bit about what AI is,
how it works, and that kind of thing.
Okay? So I always like to start these talks
with a pop culture reference.
Um, and, you know, I think, uh,
hopefully some folks here will have seen the movie Iron Man,
so that, you know, so that, uh, this is kind
of recognizable, right?
But, uh, I think Iron Man is actually a great example
of artificial intelligence, right?
So in the movie Iron Man, you've got fast-talking genius,
billionaire, Tony Stark, right?
Um, Robert Downey Jr.
On the slide there, who invents the Ironman suit.
The thing is, of course, that Tony doesn't invent
the Iron Man suit, right?
By himself. He has the help of an artificial intelligence,
uh, based assistant named Jarvis that he created.
And if you watch Iron Man, right?
You're gonna see Tony and Jarvis kind
of interact with each other, right?
They speak to each other, right?
Tony gives Jarvis tasks to do, and Jarvis does the tasks.
And so, you know, looking at this with a critical eye
as someone who knows how to build these kinds
of systems, right?
The implication is that if Jarvis were real,
he would be composed of many different AI capabilities kind
of welded together into a single coherent platform, right?
So when Jarvis looks at Tony, he recognizes Tony's face.
So Jarvis is doing some sort of facial recognition.
That's one kind of AI. Jarvis understands Tony's words, right?
So Jarvis is doing speech recognition.
This is another kind of ai.
Jarvis responds to Tony in natural spoken language.
Jarvis is doing generative natural language processing.
That's another kind of ai.
And, uh, the list kind of just, you know, goes on and on.
So I think that when people, you know,
think about the promise of AI
and what they kind of hope for in the future, uh, this is
what they have in mind, right?
They have in mind this like super intelligent
sidekick, right?
That can work with you and multiply your productivity
by a hundred fold or more.
On the other end of the spectrum,
you've got stuff like this, right?
You've got HAL 9000,
the infamously murderous AI from Arthur C. Clarke's
2001: A Space Odyssey, right?
And so, you know, if you've ever seen 2001, you know
that HAL is on the spaceship with the astronauts, right?
And HAL and the astronauts interact in much the same way
that Jarvis and Tony do, right?
They talk to each other, the astronauts give HAL tasks
to do, and HAL listens to them
for a while, and then he starts killing them.
And, you know, the implication is that, right?
HAL does many of the same things Jarvis does,
and so must be composed of many of the same AI systems
that Jarvis is, again
welded together into a seamless, coherent whole.
I think this is what folks worry about, right?
When they worry about AI,
when they worry about extinction events, when they call
for moratoria on the development of AI,
they worry about HAL 9000, right?
A murderous rogue AI made so accidentally by the laws
of unintended consequences.
So both Jarvis
and HAL are examples of what those of us who
work in this field think of
as artificial general intelligence, right?
An AI that's self-adaptive, self-learning,
can take on any task
and doesn't require any human intervention of any kind
to be really effective.
So what I'll tell you today, right now, as far as I'm aware,
we have nothing like an artificial general intelligence.
No one's even come close to building something like this.
No one even knows how.
And I don't actually even see
how it's possible within the current paradigm that we use
to build, uh, build AI capabilities.
What we've got today is, is this, right?
We've got ChatGPT.
So for those who don't know,
and I'm sure most people do know,
but for those who don't know, ChatGPT is a text-based AI,
it's made a lot of waves, right?
In the public consciousness,
since last December.
You can go to openai.com, right?
The website, and type in the little text
field there, and ChatGPT will respond to you, right?
And so I did that, like right here on this slide here.
I said, hi, ChatGPT, introduce yourself.
And the response was, hello,
I'm ChatGPT, a language model developed by OpenAI.
I'm here to help answer your
questions, blah, blah, blah, blah.
Um, so if we compare this with Jarvis and HAL, right?
Fictitious though they might be, if they were to be real,
like I've argued, right?
They'd have to be composed of many different systems welded
together seamlessly, right?
By contrast ChatGPT does exactly one thing.
The one and only thing
that it does is generative natural language
processing, right?
And what kind of follows from that immediately is that,
you know, look, it seems very human,
but ChatGPT doesn't hear you,
doesn't see you, doesn't have thoughts,
doesn't have preferences, doesn't know things,
has no levers to pull, right?
If it had levers, it wouldn't know what to do with them.
And in fact, you know, unlike a, you know, two-year-old kid,
it wouldn't even know how to learn what to do
with levers if it had them.
So there you have it, right?
The spectrum of the possible in the future, right?
Compared and contrasted with the extremely
limited present,
represented by ChatGPT. Okay?
So, uh, so CliffsNotes, right?
Just like the
economist gentleman did earlier, right?
CliffsNotes for anybody who was here a couple of days ago,
when I talked, right?
Three key lessons, right? About AI.
The first is we can think of AI
as automated decision-making.
The second is that AI is not new.
And the third is that AI is already absolutely
pervasive, right?
So keep those three things in mind, right?
If you were here a couple of days ago, then you can now
safely zone out, now that I've, uh, you know,
re-upped them, right?
Okay. So let's start with what is AI?
So I always like to start a little bit philosophically
with this kind of question, right?
And if we want to know what AI is, we want to know
what artificial intelligence is, we ought to ask ourselves
what natural intelligence is, right?
What are the key things
that we would wanna emulate in an AI, right?
Taken from natural intelligence.
So natural intelligence, right?
The only ones that I see here are me and you, right?
So we should ask what is special about us
that we wanna capture in an AI?
And I would argue that the key thing
that's special about us, what we are, is that
we are decision-makers.
We get to make choices. We get to choose things
and impact the world in some way, right?
Here's another pop culture reference, right?
From a video game. Dracula says, "What is a man?"
And the answer is a miserable little pile of secrets
where the secrets are like the history of choices
that we've all made, right?
What are we, we're decision-makers, right?
That's what natural intelligence is.
Then what is artificial intelligence?
An automated decision-maker, right? It's profitable to
think of AI as a form of automated decision-making
that can make choices and impact the world without any need
for human intervention, right?
That's how I think of AI, right?
Heavily informed by my background in social science.
The second key lesson here, right? AI is not new.
In fact, in many ways, AI is like my parents' age
or older, it's World War II era technology, right?
The same as nuclear weapons, the same as battleships.
The first people who started doing work in AI
that was recognizably called AI started doing
that work in the 1940s, right?
With Alan Turing and his seminal contributions
on how computing works.
At that time, they would've thought of AI as, look,
any technique that enables a computer
to mimic human behavior.
In other words, any technique that allows a computer
to make choices, make decisions.
Fast forward 30 years, right?
Now the 1980s. There have been massive increases in computing power.
Everybody's got a PC, right?
Debugging is no longer literal, uh, in the way
that it was in the 1950s.
This really allowed researchers
to dive deep into a particular part
of the AI space called machine learning. In machine learning,
We give computers data, we throw data at them,
we tell them, here's
how you should learn from the data,
but we don't tell them what to learn,
and they make inferences
and learn things by themselves without any
human intervention from the data.
Fast forward another 30 years, right?
Now the 2010s. Again, there's been huge increases in computing power.
There's been a lot of innovation on how these models work too.
This really allowed people to kind
of focus heavily on a particular part
of the machine learning space called deep learning, right?
It's common to refer
to the deep learning revolution that started in 2012
and continues up to the present day.
Deep learning is super cool, right?
If you ever read a news story about a computer
that can look at an image
and detect things in the image, right?
Detect objects, recognize your face, right?
Read a book, that's almost certainly being driven
by some form of deep learning technology.
There's a lot of examples of each of these kinds of things.
Too many really to list, right?
But, you know, here are a few, right?
An example of an AI
that does not involve machine learning, right?
An AI that might have come from the early period here would
be like a rules-based chatbot where a bunch
of folks sat down, they wrote out like, you know,
20, 40, 60 rules,
and the AI just follows those rules when it's doing
its work making decisions.
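To make that concrete, here's a minimal sketch of what a rules-based chatbot looks like in code. The keywords and canned replies below are invented purely for illustration; a real system of that sort would have had many more rules.

```python
# A minimal sketch of a rules-based chatbot: no machine learning,
# just hand-written rules mapping patterns in the input to replies.
# The rules and replies here are purely illustrative.

RULES = [
    ("hours",    "We are open 9am-5pm, Monday through Friday."),
    ("refund",   "Refunds are processed within 5 business days."),
    ("shipping", "Standard shipping takes 3-7 days."),
]

def reply(message: str) -> str:
    """Return the canned reply for the first rule whose keyword appears."""
    text = message.lower()
    for keyword, answer in RULES:
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Can you rephrase?"

print(reply("What are your hours?"))
# -> We are open 9am-5pm, Monday through Friday.
```

The "decisions" here are entirely determined by rules a human wrote down, which is exactly what separates this early style of AI from machine learning.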
There are so many examples of machine learning-based AI.
I wouldn't be able to list them all if I had 20 years
with you and I've got like, what?
20 minutes. So, you know, here's some examples, right?
My favorite off the list there, right?
Is the last one, scientific research,
because it really kind of hits home this message
that AI is not new, right?
Scientists have been using AI in their research
for 200 years, ever since the early 19th century,
ever since the
famous mathematician Gauss used machine learning, right?
To predict planetary motion way back in the
1810s and 1820s.
And in fact, the model
that Gauss used is actually like a
cornerstone of modern economics.
We use it today. The gentleman from Fannie Mae
who presented had that model put lines
through some of the point clouds he had in his charts.
And, uh, it's, uh, it's still one
of the first things you'd learn if you were a computer
science undergraduate student.
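The model in question, ordinary least squares, is the one that puts a line through a point cloud, and it's simple enough to sketch from scratch. The data points below are made up so the answer is easy to check.

```python
# Ordinary least squares -- the line-through-a-point-cloud model.
# Fit y = a + b*x by minimizing squared error, using the closed-form
# solution. The data points are invented for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares slope and intercept.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Points that lie exactly on y = 1 + 2x, so the fit recovers a=1, b=2.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # -> 1.0 2.0
```

Notice that nobody told the program what the slope should be; it learned that from the data, which is the whole idea of machine learning.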
So AI is not new.
A great example of deep learning is ChatGPT itself.
And we'll talk more about that in just a few minutes. Okay?
So that's lesson number two. AI is like World
War II era technology.
Lesson number three, AI
and machine learning are absolutely pervasive already.
You just don't always see it.
They're absolutely pervasive already,
and they've been so for, for a while, really.
And if you want an example of that, all you need
to do is look at your smartphone, right?
Your smartphone has tons of AI capabilities built into it.
Like when I show my phone my face, right?
It's gonna unlock, hopefully, if it doesn't mess up,
right, it's gonna unlock.
And that's because there is some
sort of deep learning model.
Um, you know, probably a transformer
or a convolutional neural network
or something like this on there that's taking my image
that it sees, right?
When it looks at my face with a camera
and makes the decision to unlock the phone
when I move my phone around, right?
It's generating data, right?
'Cause there's an accelerometer in my phone
that generates data on, you know, where it is in space
and it can decide to light the screen.
The reason it can decide to light the screen is
because there is an AI that works on
that accelerometer data and makes the choice:
should I light the screen?
When I'm texting, right?
I can do voice to text, right?
I, right now, as I'm speaking to you all,
I'm generating like an audio waveform in the phone.
There's an AI that takes my audio waveform
and makes the decision to map different parts
of my spoken words into written words
and write them on the screen for me, right?
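A toy version of that kind of sensor-driven decision, like the screen-lighting one, can be sketched as follows. The real phone uses a trained deep learning model; this hand-rolled rule, with invented readings and an arbitrary threshold, just shows the shape of the problem: sensor data in, a yes/no choice out.

```python
# Toy illustration of making a decision from accelerometer data.
# A real phone uses a trained model; the numbers here are invented.
import math

def should_light_screen(accel_samples):
    """accel_samples: list of (x, y, z) readings over the last moment.
    Light the screen if the phone was moved sharply (a big change in
    acceleration magnitude between consecutive samples)."""
    mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in accel_samples]
    biggest_jump = max(abs(b - a) for a, b in zip(mags, mags[1:]))
    return biggest_jump > 2.0  # threshold in m/s^2, chosen arbitrarily

resting = [(0, 0, 9.8)] * 4  # phone flat on a table
lifted = [(0, 0, 9.8), (3, 1, 12.0), (1, 0, 9.8), (0, 0, 9.8)]  # picked up
print(should_light_screen(resting), should_light_screen(lifted))
# -> False True
```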
And the list just goes on and on and on and on, right?
So there's tons of AI capabilities already in your
smartphone right now, right?
And, you know, smartphones have been
around since the mid aughts, right?
So, uh, so that's a fair history already.
Now, you might look at this and you might say
to yourself, so Jeremy, right?
You told me a minute ago that Jarvis
and HAL, if they were real, would be a bunch
of AIs on a single platform, right?
And you told me, Jeremy,
that we don't have anything like
artificial general intelligence right now.
And now you tell me that my phone, right?
Has a bunch of AIs on a single platform, right?
Why is this not just a step on the road
to artificial general intelligence?
And I'll say, you know, kind of firmly that no,
it's not a step on the road to an AGI, right?
And there's a couple reasons for that.
The first reason is that those examples,
those fictitious examples of Jarvis
and HAL weld those AI capabilities
together seamlessly, right?
Your phone doesn't do that.
It makes no attempt to weld these different,
uh, capabilities together.
They largely work independently of each other.
The second reason why I think your phone is not a great
example of the next step towards an AGI is that all
of these different AI capabilities on your phone are kind
of crappy, to be quite honest with you, right?
Like when I show my phone my face, I sort of expect
it's gonna fail
to unlock one in three times. If I put a bike helmet
on, it's game over, right?
If I put glasses on, it's not gonna work, right?
So when we think about AGI, when people
who are in this field write about AGI, when I look at Jarvis
and HAL and the way they're depicted, there's a level
of absolute perfection there, right?
That, that's what's anticipated, right?
That AGI would do all these things
as well as a human could do.
And the stuff on your iPhone, with all due respect
to Apple's very brilliant engineers
and data scientists, right?
Isn't quite up to that level yet, I think.
So there's your three lessons, right?
AI is automated decision making. It is not new.
It is absolutely pervasive already.
Despite all those things,
true decision-making artificial intelligence is still
really, really hard
to do effectively, right?
So there's a bunch of reasons for that, right?
Reason number one is that in a real world situation,
if you want an AI to make choices,
there's a lot of choices to manage.
There's just too many, in fact, right?
We call this the curse of dimensionality.
And I'll give you an example, right?
So right now my students in one of my classes are
building an AI that can play battleship, right?
The kids game battleship on the 10 by 10 grid, right?
With the hidden fleet. And you say like B4,
and then, you know, you put a white peg in
'cause you missed the other ship, right?
In that game, very, very simple game, very,
really, really structured.
There are, on the order of three to the hundredth power,
different choices that an AI has to manage,
that's like a one with 47 zeros after it.
That's about the number of atoms in the sun, right?
So that's crazy, right? That's, that's absurd, right?
There's too many choices. It's very
hard to manage that many choices.
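You can check that battleship number directly. Assuming the count comes from each of the 100 grid cells being in one of three states (untried, hit, or miss), which is presumably where the 3 to the 100th power figure comes from:

```python
# Checking the battleship number from the talk: 3 to the 100th power,
# assuming each of 100 grid cells can be untried, a hit, or a miss.
n = 3 ** 100
print(n)
print(len(str(n)))  # 48 digits: on the order of a 1 with 47 zeros after it
```

That's roughly 5 x 10^47 configurations, which matches the "one with 47 zeros" order of magnitude in the talk.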
The second issue is
that when you're doing AI in a real world decision-making
situation, the choice you make now
probably is gonna affect the choice you
make tomorrow, right?
Which in turn is probably gonna affect the choice the day
after tomorrow, right?
So there's this inter-time linkage
between the way the choices are made.
And that's really hard to get right.
An example of this, right?
Might be like Mario right there on the screen, right?
You can build an AI to play Mario, maybe, right?
The choice Mario has in front of him now
as he's jumping is different than the choices he had two
seconds ago when he was standing on the ground.
And it's different than the choice he's gonna have five
seconds later when he's landed wherever he's gonna land.
Managing the relationship across time
of your choices is very, very difficult.
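A toy version of that inter-time linkage, with invented states and rewards: a decision-maker that always grabs the biggest reward now ends up worse off than one that accounts for how today's choice shapes tomorrow's options.

```python
# Toy illustration of choices linked across time. Grabbing the bigger
# reward now can lock you out of the best reward later, so a greedy
# decision-maker loses to one that plans ahead. Rewards are invented.

# state -> {action: (reward_now, next_state)}
GAME = {
    "start": {"grab_coin": (5, "pit"), "jump": (1, "platform")},
    "pit": {"climb_out": (0, "end")},
    "platform": {"grab_star": (10, "end")},
    "end": {},
}

def best_total(state):
    """Total reward if you plan all the way ahead (simple recursion)."""
    if not GAME[state]:
        return 0
    return max(r + best_total(nxt) for r, nxt in GAME[state].values())

def greedy_total(state):
    """Total reward if you always grab the biggest immediate reward."""
    total = 0
    while GAME[state]:
        r, state = max(GAME[state].values())  # biggest immediate reward
        total += r
    return total

print(greedy_total("start"), best_total("start"))  # -> 5 11
```

Greedy grabs the coin worth 5 and falls in the pit; the planner takes the small jump now to reach the star worth 10 later.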
Third issue, AI agents, when you're building them,
they really, really, really don't like to try new things, right?
They really don't like to try new things.
They're super conservative.
And we have to be very sneaky as scientists, right?
Trying to build these things to get them to try new things,
to get them to explore their space.
'Cause if they don't, they're not gonna make the
best choices they could make.
They're gonna make bad choices.
They're gonna crash the self-driving car, right?
Uh, they're gonna respond, you know, with some
insensitive content, right?
When you're doing ChatGPT-type stuff.
So you really have
to trick these things into exploring the space
so they make the best choices they can, right?
My kids are nine and five
and they don't wanna try fancy food, right?
They don't wanna try new things.
They want chicken tenders and burgers,
and we have to be very sneaky, right?
To convince them to try like, you know,
broccoli and things like this.
We gotta do the same thing with AI.
We have to be really sneaky, right?
To get them to try new things and explore their space.
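One standard way scientists are "sneaky" here is a trick called epsilon-greedy: some small fraction of the time, the agent is forced to pick an action at random instead of its current favorite. Here's a sketch on a toy two-choice problem; the payoff probabilities and starting estimates are invented.

```python
# Epsilon-greedy: one common way to "trick" an agent into trying new
# things. With probability epsilon it picks a random arm instead of the
# one it currently thinks is best. The payoff numbers are invented.
import random

random.seed(0)
true_payoffs = [0.3, 0.8]   # arm 1 is actually better
estimates = [0.5, 0.0]      # the agent starts out preferring arm 0
counts = [0, 0]
epsilon = 0.2               # explore 20% of the time

for _ in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(2)              # explore: try anything
    else:
        arm = estimates.index(max(estimates))  # exploit: current favorite
    reward = 1 if random.random() < true_payoffs[arm] else 0
    counts[arm] += 1
    # Running average of observed rewards for this arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates.index(max(estimates)))  # index of the arm it now prefers
```

Without the exploration step, the agent would keep exploiting arm 0 forever and never discover that arm 1 pays off better.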
And then because of these three things, the last issue is
that compute requirements
to build these things are really, really high.
It's thought that ChatGPT trained on a server farm
for like three months at a cost
of a hundred million dollars, right?
And, uh, and that's not unique, right? That's not special.
There are AIs that play chess and Go
and StarCraft, and all of those, right?
Had to be trained at great cost for long periods of time
with many, many computers, right?
So not everybody can do those kinds of things, right?
And AI remains difficult to build
to this day for those reasons.
Okay? So generative AI is like
the talk of the town these days.
So let's dive into that a little bit. What is generative AI?
Generative AI is a class of machine learning models
that use their capabilities to make decisions
to generate new content, right?
And so usually with generative AI, new content
falls into three bins:
you can transform,
you can generate, and you can predict, right?
An example of transformation might be like, you know,
here's some complicated explanation about something.
Let's rewrite this right?
In executive summary form,
or let's rewrite this so a fifth grader can read it.
Something like that. Generation might be a situation in which
you say, hey, I need a new logo, right?
Let me write out a description of kind of
what I want. Now,
generate the logo for me, right?
Make a picture of the logo for me.
Prediction might be a situation in which you've got like an
image that's missing stuff, right?
It's missing pieces. And you want an AI
to fill in the gaps, right?
Fill in the pixels, fill in the blanks.
And, uh, you know, I always like to give
a little pop culture
extension here for this one, right?
Because I think this prediction, generative-AI-type
stuff is one that you see
depicted in pop culture a lot, right?
A great example of this is like a
police procedural show, right?
Where, you know, you've got the grainy
image of the bad guy, right?
And no one can really see his face.
And so the investigator takes the grainy image down
to the tech lab, and invariably the tech lab
is all in the dark, right?
And it's lit only by the computer screen.
And you can see, like, you know, the investigator
and the tech guy huddling over the screen,
you know, looking at the grainy image.
And the investigator then says,
in a very commanding deep voice, you know,
a dad voice, right?
Enhance, right? And then the image magically gets better.
If you were to do that in the real world,
you would use a denoising deep learning model,
which is a form of generative AI, to do
that job. In general, right?
Generative AIs are gonna
need you to give a prompt.
They're gonna need you to tell it what to create,
and they're gonna return, in response to your prompt,
created output.
So an example of that is that castle on the right
of the slide there, right?
That's an AI generated image.
I took a particular kind
of generative AI called stable diffusion,
and I gave it the text prompt at the top there.
I said, castle perched on top
of a rock overlooking the ocean, hidden in a valley
with mountains, etc.
And it popped back that image in the space
of a couple seconds, right?
It popped back that image. Nobody
before has drawn that image, right?
The AI created it; it doesn't exist anywhere else.
The AI created that.
And I took a look at this image
and I thought, Hey, this is great, right?
I love this. I love, I love castles. I love water, right?
I love, uh, Lake Michigan. This makes me really happy.
But the thing is, I hate spring, hate spring.
Uh, I love the autumn, I hate spring,
and I see spring flowers
and buds in that, uh, in that image.
And I want those gone. Um, so what I said to myself was,
how can I force the generative AI
to remove spring from my image?
Uh, no offense to anybody who likes spring, right?
You know, spring is, is fine, right? I just like fall.
What I did was I added one word at the end of that.
I added the word autumn at the end,
and this is what the AI returned, right?
Basically a very similar image, except
what it did was strip out all of the spring blossoms
and replace them with fall foliage.
I thought, oh, this is really cool. This is great.
Can you do something else?
What else can you do?
Could you make this different, right?
Could you make me another image? Right?
And I added do it again. And it made me this, right?
And all three of these images,
as I said, don't exist anywhere.
It's not like some artist, you know, made these
and then the generative AI memorized them.
They don't exist anywhere.
The AI created them from nothing
or created them from the training data, right?
I guess that it has. Um, so I look at this and,
and the previous three, and I thought,
this is really cool, but you know what the thing is?
These are all like really clearly like pixel
art type images, right?
These are drawings. What if I want something photorealistic?
So I swapped out my generative AI model for another one,
gave it the same prompt
and asked it to make a photorealistic image.
And it gave me this, I don't know about you, right?
But I look at this
and I couldn't tell this was an AI generated image.
I would have no idea if I didn't, if I didn't know, right?
I would've no idea. As far as I know,
and maybe someone in the room will correct me,
but as far as I know, I don't think
that castle exists anywhere any more than any
of the other images I showed you.
Uh, that is a purely AI generated image.
And like I said, I would have no idea.
So this is the kind of stuff you can do
with generative AI, right?
Not only can you do text responses like ChatGPT,
but you can also do a lot
of other things like image generation.
You can make video, right? There's all kinds of stuff.
So what about ChatGPT itself?
So ChatGPT is a form of generative AI
that's based on something called a large language model.
The particular large language model
that ChatGPT uses is called GPT-3.5,
or GPT-4 if you're willing
to pay for the subscription service, which I'm not.
Um, and, uh,
and GPT stands for generative pre-trained transformer,
which is a kind of deep learning model.
So what do you do with a large language model?
You give it like a billion documents, give it a bunch
of information, and it will learn
from that information.
And when I say learn, what I mean is
that it will take the information that it has in front of it
and it will create a compressed copy of that information.
Once it's got the compressed copy, you can start
to use it for stuff, right?
You can give the large language model a prompt,
and the large language model will search the compressed copy
it has for something close to the prompt,
and then it'll give the closest thing back to you.
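As a cartoon of that compress-then-search story (and only a cartoon; this is the talk's mental model made concrete, not GPT's actual internals), imagine compressing each document down to a bag of word counts, which forgets word order, and answering a prompt with whichever compressed document is closest. The tiny corpus below is invented.

```python
# A cartoon of the "compressed copy" story: store documents in a lossy
# compressed form (bags of words -- word order is forgotten), then
# answer a prompt with the closest thing in the compressed copy.
# An analogy for the talk's description, not GPT's actual mechanics.
from collections import Counter
import math

CORPUS = [
    "the cat sat on the mat",
    "stock prices rose sharply today",
    "the dog chased the cat around the yard",
]

def compress(text):
    return Counter(text.lower().split())  # lossy: drops word order

def similarity(a, b):
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

COMPRESSED = [compress(doc) for doc in CORPUS]

def respond(prompt):
    q = compress(prompt)
    scores = [similarity(q, doc) for doc in COMPRESSED]
    return CORPUS[scores.index(max(scores))]  # closest thing back

print(respond("tell me about the cat on the mat"))
# -> the cat sat on the mat
```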
And that's the essence of how ChatGPT works.
It's doing this kind of
compressed-copy search, right?
Give the closest thing back. So, a little public
service announcement now, right?
Because there's some things that follow from
this, like sort of immediately, right?
The first thing is
that large language models are really terrible at trying
to go beyond the data on which they were made.
So if you're gonna ask ChatGPT something about the Ukraine
war, it's not gonna do a good job,
because its corpus of data ends in 2021.
If you wanna ask ChatGPT
to do math problems for you, beware, right?
Because ChatGPT doesn't seem
to do a very good job with arithmetic.
It seems like it makes mistakes pretty constantly.
And the reason for that is that one digit arithmetic,
one plus one equals two, you can find
that all over, all over the place, right?
In, uh, in written text, right?
That you're gonna find all of the various combinations of,
you know, one plus two, one plus three, all the way up
through one plus nine, right?
20-digit numbers you will not find, right?
So if you ask ChatGPT to add two 20-digit numbers,
it may not do a very good job.
It probably won't do a very good job.
And in fact, the reason for this, people think, is that
ChatGPT does not seem to have learned how
to carry the one, right?
It doesn't know how to carry the one.
Hopefully we all learned how to do
that in like first grade, right?
ChatGPT doesn't know how to carry the one.
So, in general, right?
It's gonna be really hard for it to go
beyond the data that it was created on.
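For contrast, the grade-school carrying algorithm itself is tiny when written as code:

```python
# The grade-school algorithm the talk says ChatGPT struggles with:
# adding two big numbers digit by digit, carrying the one.

def add_with_carry(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal strings."""
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10                  # "carry the one"
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

x = "98765432109876543210"  # two 20-digit numbers
y = "12345678901234567890"
print(add_with_carry(x, y))
print(int(x) + int(y))      # Python's built-in integers agree
```

A few dozen lines of first-grade procedure, yet it's exactly the kind of systematic rule that doesn't show up often enough in written text for a language model to absorb reliably.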
Second thing I wanna mention here, right?
In general, the information in a large language model is
always gonna be a little bit worse than
what you could get out of doing an internet search.
So, so why would that be the case?
Well, so when chat GPT was made,
when a large language model is
made, remember what I said, right?
It constructs a compressed copy of the data.
That is a nice way to say that. It forgot stuff, right?
It left things on the cutting room floor. When it turned
words into numbers that it could work with,
it dropped information.
So when you use ChatGPT, it looks super cool, right?
It's like, oh my gosh, this is amazing.
It's synthesizing information from multiple sources together
into some really articulate text.
It actually can't do anything else, right?
It couldn't give you things verbatim if you asked it to.
If you wanna see that, ask it
to quote you, verbatim, the first paragraph
of Huck Finn on page 56.
It will not do it, it will give you the gist of
what Huck Finn's about,
but it'll say, yeah, I don't really remember
what specifically is on that page, right?
And that's because it compressed the information.
Contrast that with a Google search, right?
If you do a Google search,
you'll get stuff back verbatim, right?
And that's because Google does not compress information in
the same way that a large language model does.
The third, uh, public service announcement here, right, is
for the people who are worried about like security,
which you should be with chat GPT, right?
So, uh, so large language models are often operated in
what I would call an offline capacity,
a fixed capacity, meaning that they don't learn
or evolve while you use them.
This is speculative, because ChatGPT is proprietary,
but it's thought that ChatGPT's large language model has
about 175 billion parameters in it.
Those 175 billion parameters don't change
when you're using it, right?
They don't change, they're fixed, they're constant.
It's not learning from you on the fly.
And what that means in turn is that it should be possible
to build a large language model
and a ChatGPT-like capability that doesn't need
to hold onto or retain confidential
and proprietary information, right?
It should be
possible to do such a thing.
Do not put confidential
and proprietary information into ChatGPT
on the basis of what I'm saying now.
Um, I think OpenAI's terms of service, right?
Make it very clear that it's happy
to learn from your prompts
and retain them for research purposes, right?
But in principle, it should be possible
to build something like this that
is security conscious, right?
That will not compromise information.
So some examples, right?
Some examples, I dunno
how many folks have used ChatGPT here, right?
Or not. But I'll just give you some examples that,
you know, kind of span the breadth of, you know, some
of the ways that I've thought of using it, right?
So example number one, right?
Imagine that you are at a new place of work, right?
And your boss is like intimidating and you know,
and you need vacation and you wanna ask for vacation,
but you're so intimidated
that you don't even know where to start.
So you could ask ChatGPT to do it for you, right?
You could say, Hey, I want you
to act as my personal assistant.
Please write an informal email of no more than 100 words
to my boss asking for a week of vacation.
Use a friendly tone and do not justify the request.
And if you were an intimidated kid right out
of college, you know, you might spend a while trying
to craft an email that you thought would land just right.
ChatGPT popped this out in less than three seconds, right?
Uh, hi boss. I hope this email finds you well.
I wanted to drop you a quick note to request some time off
and on and on it goes, right?
So this is one of the things you can use ChatGPT for.
You can use it to automate the minutiae of your life
and save your cognition for more complicated,
difficult things, right,
than maybe writing an email to your boss.
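The prompt above follows a reusable shape: a role, a task, a length limit, a tone, and constraints. A small helper for composing prompts in that shape might look like this (the `build_prompt` function and its fields are my own illustration, not an OpenAI API):

```python
def build_prompt(role, task, max_words=None, tone=None, constraints=()):
    """Compose a ChatGPT-style prompt from a role, a task,
    and optional constraints."""
    parts = [f"I want you to act as my {role}.", f"Please {task}"]
    if max_words:
        parts[-1] += f" of no more than {max_words} words"
    parts[-1] += "."
    if tone:
        parts.append(f"Use a {tone} tone.")
    parts.extend(f"{c}." for c in constraints)
    return " ".join(parts)

prompt = build_prompt(
    role="personal assistant",
    task="write an informal email to my boss asking for a week of vacation",
    max_words=100,
    tone="friendly",
    constraints=["Do not justify the request"],
)
print(prompt)
```

Keeping the constraints explicit like this, rather than burying them in one long sentence, also makes it easy to reuse the same pattern for the next request.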
Alright? Second example here.
We can use ChatGPT
to take ourselves into new territory, right?
To extend our knowledge into things
that we don't know very well already.
Here's an example of that, right?
Let's suppose I wanna get more fit than I am, right?
But I don't know anything about exercise, right?
I don't know how to go to the gym.
I don't wanna hurt myself. So I could ask ChatGPT, right?
I could say, Hey, I want you to act as my personal trainer.
I'm gonna give you all the information needed about someone
looking to become fitter.
And at the end I say,
my first request is I need help designing a calisthenics
exercise program for someone who wants to gain strength.
So ChatGPT took that prompt
and it spat back
what you see on the screen there at the bottom, right?
Again in less than three seconds.
And it responded to my request with a question
of its own, right?
It said, sure, I can do that.
Before we begin, I need
to gather some information about you.
Right? And so it asked about my fitness level,
the time I have, equipment, injuries, goals, et cetera.
So I responded to this thinking, oh, you know,
it's not actually gonna use
the responses I give, right?
Like, that's silly, right?
Uh, no way it can really do that, right?
So I responded to it. I said,
this person is an intermediate in terms
of calisthenics experience training at home
five days a week for an hour.
They wanna do one arm pull-ups, which is a real
and very difficult thing to do.
And they have no injuries, right?
ChatGPT took my response
and it actually built a full week long exercise program.
And I don't have all of it printed here
'cause it wouldn't fit on the screen,
but it did actually account
for some of the things that I asked for.
Uh, and, uh, I was, I was very impressed by this.
So we can use ChatGPT
to push ourselves into new territory, right?
That we don't really understand
or that we don't know much about already.
Now, I have a third example that I don't have a slide on.
That is the cautionary tale example, right?
So I am a mathematician in my like daily academic life.
I derive equations, I prove theorems,
I do super nerdy things, right?
That's, that's what I am to the core of my being.
Um, and so I thought, hey,
maybe it would be really interesting
to see if ChatGPT could do my job, right?
Could it prove a theorem?
Could it take a famous mathematical result
and give me the argument for why it's true.
So I took one, uh, a theorem that I won't bore you
with the details of, just to say
that it's a Nobel Prize winning result, right?
And if anything was gonna be in the corpus
of text that ChatGPT was built on,
it would be something like this.
I said, ChatGPT,
prove this theorem. ChatGPT responded, again,
less than three seconds, very authoritative, very,
you know, lots of math, right?
Looked really good. Complete BS, complete
nonsense from beginning to end, right?
In fact, it was so flawed
that I couldn't even like change a few things and fix it.
I had to scrap it completely right from the very beginning.
Um, and so the cautionary tale there is yes,
you can use ChatGPT to do things
that you don't know much about, right?
You can design an exercise program. Sure.
However, if you're in an arena where truth
and correctness are important, math being one
of those arenas, maybe law being another one, for example,
medicine, you better be a subject matter expert
before you start asking ChatGPT
to do your job for you, right?
Because you're gonna have to know, right?
Whether you, uh, whether you're getting something back
that's, that's good or that's nonsense.
So for the curious, how was ChatGPT built?
Well, like I said, it read a lot of text,
read a billion documents,
and across those billion documents
there are a lot of sentences.
So what the people who built ChatGPT did was they took
those sentences and they blanked out words
and they asked ChatGPT to learn how
to fill in those blanks.
And so here are some examples, right?
What blank is blank from this blank, right?
Paris is the heart of blank.
So ChatGPT took a billion documents
with blanked out words like this
and it learned to fill them in.
And you can think about how you
might yourself do that, right?
What word is missing from this sentence?
Paris is the heart of France.
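A toy version of that blank-filling setup (nothing like the real model's scale or architecture, just the idea) can be built by counting which words were seen in each blanked context:

```python
from collections import Counter, defaultdict

# Tiny training corpus, with the blanked word restored
corpus = [
    "paris is the heart of france",
    "paris is the heart of europe",
    "paris is the heart of france",
]

# Learn, for each context, how often each word fills the blank
fills = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        context = tuple(words[:i] + ["___"] + words[i + 1:])
        fills[context][word] += 1

def fill_blank(sentence_with_blank):
    """Pick the most common filler seen for this exact context."""
    context = tuple(sentence_with_blank.split())
    candidates = fills.get(context)
    return candidates.most_common(1)[0][0] if candidates else None

print(fill_blank("paris is the heart of ___"))  # → france (seen twice vs once)
```

Counting exact contexts only works because the toy corpus repeats; the hard part, which is what made GPT an achievement, is generalizing to contexts never seen verbatim.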
You know, it seems really simple when I put it
this way, but it's not, right?
This is a non-trivial problem.
And I don't want to make it seem
as though the folks at OpenAI didn't do anything.
You know, impressive. 'cause this
is actually very impressive.
What they did, ChatGPT's great achievement, was to learn how
to manage these choices, right?
Uh, across the entirety of the English language
in sometimes very ambiguous contexts, right?
So take a look at that last sentence.
Paris is the heart of France. That could just as well be
Paris is the heart of Europe. That could just as well be
Paris is the heart of culture, if you're a Francophile.
And the great achievement of ChatGPT was learning how
to manage these choices in a way
that was conversational and compelling.
In fact, so compelling that it might fool a person, right?
Interacting with it. And actually, uh,
the last sentence there, right?
The Paris is the heart of blank, right?
The sentence with a blank at the end.
That's actually what ChatGPT does over and over
and over again when it responds to you, right?
When you give it a prompt, it's going to say,
great, I've got the prompt.
What's the first word that should follow the prompt?
Alright, excellent. I've got the prompt
and the first word, what's the second word?
I've got the prompt. The first and the second word.
What's the third word? And it's gonna build a
response in that way, right?
By repeatedly solving this
sentence-with-a-blank-at-the-end problem.
You can actually see it do that, right?
When you type into the, uh, the screen, right?
Because it's gonna crawl the text across the screen.
It's solving that problem repeatedly.
And it's actually not solving anything more than that.
Uh, which is why I said before, it doesn't have thoughts,
preferences, feelings, emotions, et cetera.
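That repeated sentence-with-a-blank-at-the-end loop can be sketched as a tiny bigram model, again purely illustrative: answer "what's the next word?", append it, and repeat:

```python
from collections import Counter, defaultdict

text = "paris is the heart of france and paris is the heart of europe"

# Learn next-word counts from adjacent word pairs (a bigram table)
next_word = defaultdict(Counter)
tokens = text.split()
for a, b in zip(tokens, tokens[1:]):
    next_word[a][b] += 1

def generate(prompt, n_words=4):
    words = prompt.split()
    for _ in range(n_words):
        options = next_word.get(words[-1])
        if not options:
            break  # no known continuation
        # "What's the next word?" -- answer, append, repeat
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("paris is"))
```

The real model conditions on the whole prompt plus everything generated so far, not just the last word, but the control flow is the same loop, which is why the text crawls across the screen one piece at a time.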
So I was thinking about the mobility space when I was
putting this together, and I was thinking about my own kind
of personal, uh, mobility story, right?
So in, in, uh, in 2016, I had a job offer, right?
Uh, that would've taken me from DC
where I was living to New York City.
And I was thinking about, you know, uh, making
that move, right?
And, uh, and it was very intimidating, right?
I had no help, no assistance, right?
And my wife and I were kind of, you know, off,
you know, off on our own.
It was very intimidating trying to figure that out.
So when I was putting this together, I was thinking,
what are the AI things that I would love to have had
that would've made my life so much easier, right?
Doing this kind of thing by moving across country, right?
When I was, uh, when I was trying
to solve my own kind of mobility problem.
And maybe these are some things you might imagine might
impact the mobility relocation space in the
future if they aren't already.
One, right? First thing that I had in mind was
that when we packed up our house,
we like really messed up, right?
We totally messed up on how much stuff we had,
how much space it was gonna take,
and how much it was gonna cost.
So having some ability to apply AI against
that problem would've been amazing, right?
Imagine if I could just take my phone, wave it
around the room and it could like draw boxes
around all the things, estimate sizes, imagine, you know,
kinda estimate weights, figure out how much it would,
you know, how much space it would take.
Match that up with dynamic price forecasting.
Figure out a model, right? That could do that.
I think that would've been amazing.
And I think there are companies
that are trying to do this right now.
Uh, and you could imagine this would have impact in the, uh,
in the space in the future.
Another thing I would've loved would've been some very
tailored customer specific, uh, help, right?
Like, I'm Jeremy, I'm not some other person.
I'm, you know, moving from this job to this job, right?
And I have kids
and I have these things that other people don't have.
I need specific answers to specific questions.
I would love to have had a dedicated AI assistant
that could have answered those questions for me, right?
Like a ChatGPT-like AI
that could drive a Q&A session.
Uh, and uh, and another thing I would've loved, right?
Was New York City is huge, right? Uh, it's huge.
What borough should I live in?
What part of a borough should I live in?
Is this a good street? Is this a good block?
I would love to have known some of those things
and I could totally have imagined, right?
Uh, a machine learning AI based, you know,
housing suitability map, right?
Or a price map of some kind
that would've been incredibly helpful
and really taken the intimidation factor out of it.
Predictive risk mitigation, right?
I might like to know how long it's gonna take my stuff
to get from one place to another.
When I used PODS, PODS hosed it up, right?
They underestimated how long it was gonna take
to move my stuff and I was without stuff
for a couple days, right?
So it would've been nice to know that, uh, a priori
and I could totally imagine, uh, AI
and machine learning being used to solve those kinds
of problems in a very dynamic, real time fashion
with constant updates. Uh, process reliability mapping.
I would love to know when a process that I have to do
because I'm moving somewhere would break, right?
I would love to know when a piece
of paperwork I've submitted will go astray when something
will get lost in the mail, when a process will not work out
and I'll have to reach out and call somebody.
And there are machine learning AI-based approaches
to do this in manufacturing, right?
I could totally imagine that they would make inroads into,
you know, a human-centric, uh, process,
like managing
or helping someone relocate. Smart inventory management.
I would love to have had the ability to know
how many boxes I needed, right?
Uh, right?
And when I was gonna run out of something
and have an agent, right?
That could, you know, warn me, right?
Or even just order it for me, right?
So there are reinforcement learning agents
that are very good at that, at managing, you know,
the risk of running out versus
having too much stuff. And then finally, in the limit,
in the future, right?
I could totally imagine autonomous moving vehicles, right?
Where I pack my stuff up, I close the door,
I rap on the door, and off it goes with no driver.
Uh, and, uh, you know,
and keeps me updated about where it is.
I could totally imagine all of these things, right?
Having impacts in, uh, in the mobility space in the future.
Again, if they're not already, last thing,
some practical stuff, right?
Imagine you wanna start using AI in a business context.
How should you think about doing that?
Here are some thoughts I have about that.
Thought number one is you wanna start by beginning
with a problem, right?
Find something that annoys you, find a short list of things
that are irritating, that are pain points.
And then from that short list choose something small
and specific, a small specific problem to solve
and use AI to solve that problem.
So I have a, uh, I have an acquaintance in Milwaukee,
runs a management consulting firm who had a great example
of this kind of thing, right?
They had a spreadsheet process,
it took them many days per month
to do the spreadsheet process.
And, uh, and he, you know,
he sat down one day and he said, this is absurd.
Why are we wasting so much time on this?
So he and a small group of people sat down with ChatGPT
and had it write computer code,
had it write Python code, right?
To automate the spreadsheet process. In a couple hours,
they had a fully specced-out automated solution
that dropped the time taken from like days, you know,
or a week per month, down to hours, right?
Hours per month, right?
He picked something small and specific
and used AI to solve it.
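For flavor, a ChatGPT-written automation of that sort might look something like this pandas sketch. The line-item layout and column names here are hypothetical, not his firm's actual spreadsheet:

```python
import pandas as pd

def summarize_month(rows):
    """Collapse raw line items into the per-client monthly totals
    that used to be computed by hand in a spreadsheet."""
    df = pd.DataFrame(rows, columns=["client", "hours", "rate"])
    df["billed"] = df["hours"] * df["rate"]
    return df.groupby("client", as_index=False)[["hours", "billed"]].sum()

# Example run on synthetic line items
summary = summarize_month([
    ("Acme", 10, 150.0),
    ("Acme", 5, 150.0),
    ("Globex", 8, 200.0),
])
print(summary)
```

The point of the anecdote holds either way: the win came from picking one small, specific, annoying process and scripting exactly that, not from a grand AI initiative.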
The second thing I think you wanna do is identify
what goal you have, right?
So his goal was to essentially buy something, right?
By using ChatGPT, buy something out of house, off the shelf.
Your goal could be to build something in-house, right?
You wanna decide that, right? Kind of right off the bat.
Are you gonna buy or are you gonna build?
You wanna decide if your goal is to copy someone in charge
of a process and remove them from that process
or leave them in charge of that process,
but augment them with AI, right?
So that they're more effective at it, right?
They can do it more efficiently.
The third thing you wanna do, I think, is figure out
what data you have, right?
Because all of these methods are super
data intensive, right?
You've gotta figure out, have I been collecting data
for 30 years, right?
And it's good, ready to go right now,
or do I need to make a decision
to start collecting it, right?
Uh, those are the first things you
wanna, you wanna ask yourself.
The next thing I think you wanna do is
figure out who to involve.
And I have kind of strong views on this, right?
So like, I think you should involve these three groups,
like right from the very beginning.
The first group: business problem stakeholders.
You wanna involve the people who right,
own a problem, right?
Because I, I can guarantee there is no better way
to build a solution that will not be used, right, than
to exclude that group of people from your first
conversations, right?
Involve them from the very beginning.
Um, second group data owners, right?
People who curate data, right?
Take care of it, put it in databases,
build architectures to house data, right?
You wanna involve them because they're gonna be able
to tell you if what you need is available or not.
And then the third group, especially if you're gonna build
something in-house, is data scientists, right?
This is gonna be a group of people
that's really good at taking
what the business problem owners have to say,
translating it into some kind of technical requirement,
and then verifying that the data exists
to solve the problem, right?
They're really good at that. If you lose one
of these groups, right, you know, it's, it's very easy
to go astray and invest a bunch of time into, you know,
a solution that won't work or can't be executed or whatever.
What should you avoid? Two things.
First, I think you wanna avoid doing nothing.
'cause AI is, you know, it's important.
Uh, and it's not just because I have the, the, you know,
letters AI in my job title, right?
It's important in the sense
that I think it's gonna be quite impactful
and it's going to, you know, have, uh, uh,
effects on large parts of, you know, the labor market,
the US economy, et cetera.
So it doesn't mean you have to use AI, right?
AI may not be right, right?
Machine learning may not be right.
It may not be a good solution for your problem,
but make that a principled decision, right?
Like, know what's out there
and say, yeah, this isn't gonna work for me, right?
Make that a principled decision rather than, you know,
uh, just getting passed by.
The second thing I think you wanna avoid is
over-enthusiasm, right?
So you don't wanna say, oh, AI is awesome,
we gotta have AI, I need more AI, right?
More AI, like more cowbell, more AI, right?
How can we AI today, right? You don't wanna do that.
That's a solution in search
of a problem, fundamentally unproductive, right?
And, uh, and not a good way to execute.
I think instead you wanna work on a small specific problem.
Like I said at the beginning. Uh,
identify something that's annoying, right?
This is what my dissertation advisor told me, right?
Find something that annoys you and fix it, Jeremy.
So same thing with ai.
Find something that annoys you and fix it.
I think that's all I have to say.
So I am happy to answer questions if we have time.
Uh, thank you.