ChatGPT: what I know and don't know about it
Recording of our Zoom session the other night, including the chat file and a transcript. Also, some further thoughts of mine
Audio above. Video link here. Chat file and transcript below the fold.
Sorry about my coughing. I came down with my first case of COVID late on a Wednesday, and this was the following Monday.
Stan Garfield, Dennis Pearce, and I discussed ChatGPT and knowledge management. The discussion clarified what I think I know about ChatGPT and what I don't.
Before I get to that, I want to mention a couple of insightful things that Dennis pointed out. One is that ChatGPT is a complement to remote work. When you run into a bug or a problem in your work, you don’t need to lean over and ask a colleague for help if ChatGPT can serve that function.
A second insight is that there is a sort of symbiotic relationship between how people provide information and how tools sift through it. One example that Ethan Mollick mentioned on his Substack is that you can give a talk and include a concept without explaining it, knowing that the audience can use ChatGPT to get up to speed if necessary.
We saw an example of that about 49 minutes into our session. During the Q&A Tom Grey brought up “Constitutional AI,” which none of us had heard of. Stan, and at least one other participant, asked ChatGPT to explain “Constitutional AI,” and it did so. That was far from the only instance in which Stan or other participants used ChatGPT during the session.
What I think I know and don’t know about ChatGPT
I think I know why ChatGPT is very articulate but also often inaccurate. What I said in the session is that ChatGPT does not care what words mean. It treats them as tokens, and all it cares about is the sequence in which they are used. On Tuesday (yesterday), I came across a published paper that makes the same point.
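A toy illustration of this point: a program that predicts the next token purely from observed sequences, with no representation of meaning at all. This is a crude n-gram counter, nothing like ChatGPT's actual architecture, and the little corpus here is invented, but it shows how "what comes next" can be answered without understanding anything.

```python
from collections import Counter, defaultdict

# Invented mini-corpus; a real model trains on billions of sentences.
corpus = (
    "little red riding hood met a wolf . "
    "little red riding hood carried a basket . "
    "the little red wagon rolled down the hill ."
).split()

# Count which token follows each three-token context.
follow = defaultdict(Counter)
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])
    follow[context][corpus[i + 3]] += 1

def next_token(*context):
    """Return the most frequent continuation of the context, if any."""
    counts = follow[tuple(context)]
    return counts.most_common(1)[0][0] if counts else None

print(next_token("little", "red", "riding"))  # -> hood
```

The program has no idea what a hood or a wagon is; it only knows which token tends to follow which sequence. Scale the corpus and the context window up enormously, and you get fluent output for the same meaning-free reason.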
Because ChatGPT is trained on a lot of well-formed, grammatically correct sentences, it appears to be very smart and articulate. Educated people find that very charming, for the same reason that Barack Obama comes across as charming.
Suppose you prompted ChatGPT to rewrite the content of a Donald Trump speech in the style of Barack Obama. My guess is that a lot of Obama supporters would be fooled into liking the speech, and a fair number of Trump supporters would be less happy with it. Next, suppose you prompted it to rewrite an Obama speech using Trump’s mannerisms. I predict that college-educated people would react negatively and non-college-educated people would react somewhat favorably.
The preceding examples are what I refer to as simulations. I think that simulation is an important capability with ChatGPT, for better or worse. In the session, I suggested that simulation is at the heart of the most interesting use cases and abuse cases for ChatGPT.
What I do not understand is how Reinforcement Learning from Human Feedback (RLHF) works. Does the chatbot adjust automatically to human feedback, or do humans manually restructure the algorithm? Maybe some of both?
Supposedly, RLHF is how ChatGPT is taught to avoid bad behavior, which might mean: taking controversial partisan stands; aiding and abetting criminality; spreading misinformation; hallucinating (getting my date and place of birth wrong); and giving silly answers to math problems.
Without RLHF, all we have is a large language model that has no idea what it is reading and no idea what it is writing. But bad behavior is a function of content, not merely the order of tokens. It seems to me that reinforcement learning has to introduce a mechanism to recognize content and then respond to it. For example, to avoid getting a math problem wrong, it needs the ability to notice that it is facing a math problem, so that it can import an appropriate tool to solve it.
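For what it is worth, published descriptions of RLHF (OpenAI's InstructGPT write-up, for example) suggest the answer to my earlier question is "some of both": humans rank sample outputs once, a separate reward model is fitted to those rankings, and the language model is then adjusted automatically toward outputs the reward model scores highly; nobody hand-edits the algorithm. The toy Python sketch below, with invented names and data, illustrates only that loop, not OpenAI's actual implementation.

```python
import math, random

random.seed(0)

# Step 1: human labelers compare pairs of model outputs once.
# (preferred, rejected) pairs stand in for real ranked transcripts.
human_preferences = [("polite refusal", "partisan rant"),
                     ("polite refusal", "made-up fact")]

# Step 2: fit a "reward model"; here, just a score per output string.
reward = {}
for preferred, rejected in human_preferences:
    reward[preferred] = reward.get(preferred, 0.0) + 1.0
    reward[rejected] = reward.get(rejected, 0.0) - 1.0

# Step 3: policy gradient; shift the model's sampling weights toward
# high-reward outputs, with no further human involvement.
logits = {out: 0.0 for out in reward}
for _ in range(100):
    total = sum(math.exp(v) for v in logits.values())
    probs = {o: math.exp(v) / total for o, v in logits.items()}
    sampled = random.choices(list(probs), weights=probs.values())[0]
    for o in logits:  # REINFORCE-style update
        grad = (1.0 if o == sampled else 0.0) - probs[o]
        logits[o] += 0.1 * reward[sampled] * grad

best = max(logits, key=logits.get)
print(best)  # -> polite refusal (with this seed)
```

The human work is concentrated in step 1; steps 2 and 3 run automatically. What the sketch leaves out is exactly my open question: how the reward model comes to recognize content well enough to know a partisan rant or a made-up fact when it sees a new one.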
In a lot of situations, “dangerous speech” or “harmful speech” is in the eye of the beholder. I don’t know how reinforcement learning can deal with that. If the process by which reinforcement learning takes place is transparent, then people with different points of view will try to game it. If the process is opaque, then people with different points of view will become suspicious of it.
My sense is that the goal of those who maintain ChatGPT as of now is to have it refuse to get into topics where it might offend people. Whether that goal is attainable is not clear.
As of now, I regard reinforcement learning as a black box. It is the biggest gap in my knowledge of ChatGPT. That is, unless there are aspects of ChatGPT that I think I know but which “ain’t so.”
20:05:06 From Chitwood, Ami : DEC
20:05:47 From Dennis P Waters : PDP-11!
20:05:52 From Chitwood, Ami : 😀 Stan's forgotten more about KM than the rest of us!
20:06:06 From Jennilyn Wiley (Auburn University) : Reacted to "😀 Stan's forgotten ..." with ❤️
20:07:20 From Stan Garfield : My KM site https://sites.google.com/site/stangarfield/
20:07:53 From Tom Grey : Reacted to "My KM site https://s..." with 👍
20:14:30 From Stan Garfield : I just asked: what should i get my wife for valentines day?
The ideal Valentine's Day gift for your wife depends on her personal preferences, but some popular options include:
Jewelry, such as a necklace, bracelet, or earrings
A bouquet of her favorite flowers
A romantic getaway or a staycation
A spa day or massage
A cooking or baking class
A sentimental item, such as a photo album or custom-made keepsake
A book or movie she has been wanting to read/watch
A thoughtful piece of home decor
Remember, the most important part is showing your love and appreciation for her, so choose a gift that fits her unique personality and interests.
20:15:46 From Bart Verheijen : Reacted to "I just asked: what s..." with 👍
20:19:53 From Barbara : I have found Viva Topics (part of the Microsoft Viva suite) to have a similar approach. The AI "suggests" a lot and users are expected to answer questions like "Is this correct?" Then the AI learns, I guess.
20:20:40 From Chitwood, Ami : It does not perform reference interviews.
20:25:15 From Chitwood, Ami : Solving search...
20:27:27 From Jennilyn Wiley (Auburn University) : it would be great if it cited sources
20:28:27 From Bart Verheijen : You can ask it to give (links to) sources
20:28:44 From Bart Verheijen : but they do tend to be wrong or sometimes non-existent
20:28:50 From Tom Grey : Replying to "it would be great if..."
I think its statistical token analysis is not like usual human sources, like a wiki.
20:29:07 From Stan Garfield : https://knowledge.wharton.upenn.edu/podcast/wharton-business-daily-podcast/chatgpt-passed-an-mba-exam-whats-next/
20:29:07 From Jennilyn Wiley (Auburn University) : Reacted to "but they do tend to ..." with 👍
20:34:07 From Stan Garfield : My document on the 32 KM components and ChatGPT https://1drv.ms/w/s!Aioueb8G-fzngcQe79exETEFPnz0pA?e=gN1mA4
20:34:23 From Jennilyn Wiley (Auburn University) : just like librarians have helped people figure out search strings to be efficient and effective, that will transition to how to best write and refine prompts. A good prompt for chatGPT makes all the difference
20:35:43 From Chitwood, Ami : ^Jennilyn - 100% and the ability to evaluate results
20:36:00 From Jennilyn Wiley (Auburn University) : yes, chatGPT told me it is not able to evaluate results
20:36:55 From Jennilyn Wiley (Auburn University) : SEO for AI
20:37:59 From Chitwood, Ami : Just like you can manipulate social media (e.g., having an error) to increase chances of going "viral," I predict that there will be ChatGPT stuffing/manipulations....
20:40:12 From Bill Kaplan : Always trust but verify..always
20:40:28 From Jennilyn Wiley (Auburn University) : Reacted to "Always trust but ver..." with 👍
20:40:33 From Chitwood, Ami : Reacted to "Always trust but ver..." with 👍
20:40:35 From Tom Grey : Reacted to "Always trust but ver..." with 👍
20:42:24 From Bart Verheijen : Reacted to "Always trust but ver..." with 👍
20:42:32 From Bill Kaplan : I have a 1908 Keuffel and Esser slide rule
20:45:14 From JC Monney : Correct, it does not have context
20:45:53 From JC Monney : you need to provide GPT with your context. It is not good (yet) at asking for context
20:46:13 From Bill Kaplan : Reacted to "you need to provide G..." with 👍🏻
20:47:08 From Barbara : In addition to providing context, it helps to ask good questions or at least try to ask the question in different ways to get slightly different answers.
20:49:10 From JC Monney : Viva Topics works best with 100,000 documents
20:49:19 From Barbara : Reacted to "Viva Topics works be..." with 👍
20:49:21 From Chitwood, Ami : Imagine...looking in all of our personal KM systems? (OneDrive, Teams, Email...)
20:49:43 From Tom Grey : Reacted to "Viva Topics works be..." with 👍
20:49:51 From Jennilyn Wiley (Auburn University) : i'd love to know in the black box how it determines expertise. since it's probabilistic, is it just looking at volume? (e.g., the more prolific a writer, the more weight it gives their source info)
20:49:56 From Barbara : Viva Topics and intranets also need to be relatively "clean" for this to work.
20:50:12 From JC Monney : they will provide an API to your enterprise content
20:50:18 From Bill Kaplan : A key for GPT is to provide the right context in the query and as you engage with ChatGPT. I have found that the better the context for the Q and the A, the better the outcome for your query
20:50:43 From JC Monney : deep mind
20:51:09 From Bart Verheijen : Replying to "Viva Topics and intr..."
clean meaning no duplicates or the content within the topics being true or verified?
20:51:43 From Stan Garfield : Constitutional AI refers to the use of artificial intelligence (AI) systems and technologies in accordance with the principles, values, and laws that are enshrined in a nation's constitution. This approach involves the development, deployment, and use of AI in a manner that aligns with the fundamental rights and freedoms guaranteed by the constitution. It aims to ensure that AI is used ethically, responsibly, and transparently, while also protecting the privacy and human rights of citizens.
20:52:06 From Bill Kaplan : I believe Microsoft is going to announce tomorrow or the next day how ChatGPT has been integrated into Edge.
20:54:49 From Bart Verheijen : Google news today: We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard.
20:54:50 From Barbara : Bing already separates the external search from internal system search, so I look forward to seeing how ChatGPT will be integrated in Bing.
20:55:04 From Bart Verheijen : https://blog.google/technology/ai/bard-google-ai-search-updates/
20:55:36 From Tom Grey : Reacted to "https://blog.google/..." with 👍
20:56:34 From Tom Grey : https://analyticsindiamag.com/openai-rival-anthropic-starts-claude-early-access/
20:57:24 From Bill Kaplan : The non-disclosure issue reminds me of the original discussions and concerns with CoPs about what can be discussed online within an organization.
20:57:35 From Barbara : Reacted to "The non disclosure i..." with 👍
20:57:44 From Jennilyn Wiley (Auburn University) : https://www.cnn.com/2022/10/21/tech/artists-ai-images/index.html
20:58:18 From Jennilyn Wiley (Auburn University) : Artists not happy about their art being used to train Open AI. I believe Getty sued.
20:58:30 From Bart Verheijen : Reacted to "Artists not happy ab..." with 👍
20:59:23 From Jennilyn Wiley (Auburn University) : https://www.cnn.com/2023/01/17/tech/getty-images-stability-ai-lawsuit/index.html
20:59:27 From Bart Verheijen : Replying to "Artists not happy ab..."
Thanks! That's basically artists requesting their (creative) work be removed or excluded from the training set; and thereby from the responses
20:59:53 From Tim Wood Powell : All of this is a huge rip-off of the content creators! Information is created by people who get paid to do it; this is just a way to "reuse" it on a massive scale, without attribution or payment!
20:59:54 From Barbara : Knowledge graph+ChatGPT for internal use?
20:59:56 From Bill Kaplan : Thank you for this time
21:00:04 From Jennilyn Wiley (Auburn University) : Reacted to "All of this is a hug..." with 👍
21:00:30 From Dennis P Waters : Lord help us if financial institutions decide to use this for enterprise risk management
21:00:31 From JC Monney : thank you for the invite and great sharing
21:00:34 From Dennis Pearce : http://www.aistudy.com/paper/aaai_journal/AIMag10-02-002.pdf
21:00:57 From Chitwood, Ami : Right on - I remember when excel came to the workplace...people are still here 🤓
21:01:06 From Jennilyn Wiley (Auburn University) : Reacted to "Right on - I remembe..." with 😂
21:01:09 From Tom Grey : Reacted to "Knowledge graph+Chat..." with 👍
21:01:53 From Jennilyn Wiley (Auburn University) : +1 Ami - disruptive and changing but not necessarily career ending
ARNOLD: All right. Well, I got the idea for this discussion when I came across an article in which the chief technology officer of a top consulting firm was quoted as saying that in the near future these new technologies were going to empower him to demand more prestige at his company, a fancier office, and a bigger budget. No, what he said out loud was something else. What he said out loud was that this new technology was going to have a big effect on the knowledge management function of his company. I saw the phrase "knowledge management" and immediately realized that I have a longtime friend, Stan Garfield, who is recognized as a leader in the field of knowledge management, so I asked him to join the discussion. Then he said, well, this guy Dennis Pearce is an expert in AI and computers, so you should have him on. And that's what we've got here. So I'd like you guys to give us two minutes each of a little more of an introduction of yourselves. Stan, in two minutes or less, not only introduce yourself but maybe explain to my audience what knowledge management is, and then tell us where on the internet they would look to see the justification for calling you a leader in the field.
STAN: All right, thanks, Arnold, and thanks for inviting Dennis and me to join you today. Knowledge management, to start with that, is essentially the ability of an organization to reuse what one part of the organization knows in another part. It's an attempt to take full advantage of the knowledge, skills, and expertise of an entire organization, so that you don't reinvent the wheel or make the same mistakes over and over again, and you take full advantage of everyone who's there. It becomes more and more important as organizations grow large. If you have a small team, it's usually easy for everyone to know what each one knows and does, and to talk to each other to take advantage of their backgrounds, but in larger organizations that's pretty hard. So knowledge management attempts to deal with that through a variety of mechanisms, so that people can get in touch with each other, including people they don't know, and are able to help one another at the time of need. It involves both capturing knowledge and reusing knowledge for the benefit of the organization. That's one way of summarizing it.

My own background in it is that I was doing it for many years before we had a name for it. I worked in the computer industry and the consulting business for a long time, but somewhere in the mid-90s we started to see the term "knowledge management," and in 1996 I was asked to start the first knowledge management program for Digital Equipment Corporation. Some of you may remember that company; it was actually the number two computer company in the world at one time, behind IBM, before it eventually disappeared altogether. I started knowledge management as a full-time job in 1996 and have essentially been in the field ever since. As far as where you can go to find out more about why Arnold invited me: if you Google my name, Stan Garfield, and go to my site, which is on Google Sites, you'll get a wealth of information about it, more than you can possibly digest, so you'll probably need to use ChatGPT to help simplify it.
ARNOLD: Okay, so you've written more than just one or two tweets that had the words "knowledge management"?
STAN: That's right. I've written four books now on knowledge management, contributed chapters to other books, and I've been writing blogs for a long time, since 2006. I have a regular series of blogs with a company called Lucidea, and I'm doing a series of webinars for them on something called the five Cs of knowledge management. If you're interested, that information is available on my website, and you can attend those for free.
ARNOLD: Okay. Now, Dennis, you supposedly know something about computers. Did you take a night course in COBOL at some point, or what's your deal?
DENNIS: No, actually it was Fortran, and not too well. But I'm definitely not an expert in computers or AI; I wouldn't call it that. Most of my career was spent at IBM and then Lexmark, the printer company that was spun off from IBM's printer division in Lexington, Kentucky. When I was at IBM, this was in the late 80s, I did some work creating expert systems, and that rolled over into Lexmark, creating some case-based reasoning systems there: sort of the old-school AI, not the deep learning kind of thing that's going on now. The way I got into knowledge management was actually through AI, because I was doing this AI work at Lexmark and I went to a conference in Boston in the late 90s that talked about knowledge management, since it seemed to relate to some of the AI work I was doing. There I got a broader picture of what knowledge management was and got very interested in it. Back at Lexmark, I got a PhD at the University of Kentucky in decision sciences and information systems along the way. I retired from Lexmark in 2017, and for the last four years I've been working as a collaboration strategist for a nonprofit based out of Chicago called Start Early that does research and provides training to teachers for early childhood education. I manage their collaboration tools and systems there. So I've not been deep into developing AI for a while; I'm more of a strongly interested bystander these days.
ARNOLD: Great, thanks. Okay, I'll get back to you with a question; it'll probably take me about ten minutes to get there. The question I have is: just what is this reinforcement learning from human feedback that I hear about with ChatGPT? It sounds to me like some guy in greasy overalls with a tool belt crawls into the algorithm, holds up a flashlight, and says, "Well, we're going to have to put an if-statement here, and we're going to connect a subroutine there." Anyway, I have no idea what it is, but hold off; it'll take me a little while to get there. Just to let people know, by the time we're finished here I want to get into some really big questions. A skeptic who's aware of the unreliability of ChatGPT is bound to ask at some point: how can something that unreliable be useful in knowledge management? That's a question we'll get to later. And the question that the optimist, the evangelist for ChatGPT, would ask is: how radically are this technology, and the AI technologies following right behind it, going to change knowledge work itself, say over the next seven years? That's another question we'll get to.

But first I want to start with a little rant, which is that the large language models, and ChatGPT in particular, don't know what they're reading and don't know what they're writing. It really is true: it has no idea what it's reading and no idea what it's writing; it's just parsing. If you're a human being, probably the last time you did parsing was in middle school, when your teacher gave you an order-of-operations problem and you had to say, all right, this is what I do with the parentheses. (By the way, give me a thumbs up if you're hearing me. Okay, good.) What do you do with the parentheses, what do you do with the exponents, what do you do next, and what do you do next?

The way I think of ChatGPT composing answers is as playing a game of "what comes next." If it started to write "Little Red Riding," the next thing it would write would be "Hood," because in all the billions and billions of sentences it's been trained on, every time it's seen "Little Red Riding" (those three tokens; don't even call them words, call them tokens), what comes after is "Hood." So that's what it's going to write, and that's how it proceeds. Now, if it had only written "Little Red," on that basis it wouldn't know whether to follow it up with "Riding" or "wagon" or "barn" or whatever. It would have a very limited number of tokens that could follow "Little Red," but it couldn't just pick any token, so it would probably have to look at the context of other tokens.

To me, parsing is like digestion; it's not like thinking. The people who are treating ChatGPT as if it's thinking, and asking it questions as if it's thinking: it's actually more like digestion. You feed it the sequence of tokens, these enzymes go to work breaking them down and sort of reconstituting them, and finally it craps out an answer. That's what I think of it as doing, this kind of parsing.

So let's say I ask ChatGPT, "What should I get my wife for Valentine's Day?" It's going to give an answer of the form "For Valentine's Day you should get your wife..." because that's the natural sequence. If someone asks a question of the form "For blah-blah day, what should I get blah-blah?" it knows to respond, "For that day you should get blah-blah." So it might say something like "For Valentine's Day you should get your wife a box of chocolates," because those are the tokens it sees, in the order it sees them, in its billions and billions of sentences. On the other hand, I could say, "I'm at the florist. What should I get my wife for Valentine's Day?" It will notice the token "florist" and might come back with "For Valentine's Day you should get your wife a bouquet of red roses."

Anyway, I don't think it's thinking; it's just processing these things. And it can get very articulate; the sentences will be very articulate, but the content is what's not reliable. People who've read about it know the phenomenon of hallucinations. The very first time I put in a question (and you can tell about my ego), I said, "Write an essay about Arnold Kling the economist," and it starts out, "Arnold Kling is an economist. He was born in New York City in 1961." Structurally, that's the right answer; content-wise, it's completely wrong. I was born in St. Louis in 1954, not in New York in 1961. Probably its corpus of billions and billions of sentences included me next to another economist who was born in New York City in 1961, and so it came out with that answer.

Anyway, that's enough of that. Excuse me. So then there's the question of how this gets corrected, because obviously if this was all you had, and this was just the way it worked and there was nothing you could do about it, it would be a joke. But there's this other element to creating it, which is called, I think, reinforcement learning from human feedback. Now I'm going to get back to you, Dennis, and ask: how does that actually work? What are the steps? What's the recipe for reinforcement learning from human feedback?
DENNIS: That's a good question, and to be honest, I don't know what's going on behind the scenes with ChatGPT. I do know that they have mechanisms both for doing their own adjustments and for taking in the responses from the conversations people have with it. In fact, the people who were in the SIKM session might have seen some of the things I posted; I had a lengthy argument with it where it was convinced that birds were mammals, and we seemed to go round and round, where part of the reasoning was that bats are mammals, and bats fly, and birds fly. Every time I tried to ask it another question, it would skirt around and end up back there again. But within a few days that seemed to have been corrected, and now it's aware that birds are not mammals. So I don't know. With as many people as are testing it right now, I can't imagine that there's some set of people back there taking all this in and manually making these changes, because that would just be overwhelming.
ARNOLD: Well, it's an interesting mystery. I hear the word "reinforcement" and I think of training a dog: it does what you wanted and you give it a treat; it does something wrong and you slap it on the nose. If you use ChatGPT, you can give it a thumbs up or a thumbs down, am I correct? So that's like giving it a treat or slapping it on the nose. But I guess what I was wondering is: does it respond automatically, or is it, like you say, that the humans have to look through these transcripts and then, like my guy with the overalls, walk in there and make an if-statement and a subroutine call? So if anyone out there is more familiar with it and wants to give me a better sense of what that looks like, you're welcome to raise your hand or whatever.
DENNIS: One thing I have found interesting about it (and maybe that's a question you ask it and see what it says): when I'm not sure how it works, I ask it questions about itself, and it's able to respond. In fact, this morning I got curious because, in the dialogues I've had with it, I've never seen it respond with a question back to me, so I was wondering, can it even do that? This morning I asked it: do you just issue statements based on the questions people ask you, or do you ever ask questions back? It said it asks clarifying questions; in fact, it even said that it's not going to ask original questions, but it can ask clarifying ones. So I said, well, can you give me an example of one? And it said sure: for instance, if somebody says "I need help with something," it will ask "What do you need help with?" I thought, well, that's about the most vague example possible. So I don't think it does a very good job, if at all, of asking clarifying questions. Like your example of the Valentine's Day gift: a human might ask "Does your wife like chocolate?" before recommending chocolate, just to make sure they're not recommending something that could be ruled out instantly. It doesn't seem to have that capability.
ARNOLD: The most interesting story I've heard, and it's a story that's too good to check, but the guy who alluded to it is, I think, reliable, is this: somebody put in a prompt (I forget what the content was), got an answer back, and it was a certain kind of answer. Then they asked, "How would a really smart AI answer this question?" and they got a much smarter, more sophisticated answer. What that brings up is the element I call simulation. In some sense, both of those answers were simulations: the first was simulating a nondescript AI, and the second was simulating a sophisticated AI. It strikes me that some of the most interesting use cases and abuse cases come from this ability to simulate. People have talked about taking some modern pop song and saying, "Write it like Shakespeare," that kind of simulation. And I was thinking: about 10 years ago, maybe 15 now, this guy John Papola, who is a film director, and Russ Roberts, who's an economist, collaborated on a rap video between two famous early-20th-century economists, Friedrich Hayek and John Maynard Keynes, and it was quite a hit. In effect they were creating, 10 years ago, a simulation of these guys who lived a hundred years ago arguing about the fundamentals of economics. It was very entertaining and very educational, but the amount of work they had to go through to create it was intense. It probably took more than a year of writing a script to get a five-to-ten-minute video, and then they had to find actors and all that. Now that would be available through these new AIs, so you could maybe throw a mediocre version of it together in a few days; to get a really good script you'd probably still need humans to think of the cleverer lines. The other simulation possibility that intrigues me is mentors. I don't know if you remember the movie Stand and Deliver, about this calculus teacher out in Los Angeles who was just great. Imagine you could turn every calculus teacher into a simulated version of him using some of this technology. So that's great, but then you have the abuse cases, which I think people have already had to deal with. Suppose somebody decided to put out a simulation of, you know, Stan and Barb doing something nasty, or a simulation of Stan being assassinated. Those are very real possibilities and create real challenges. Stan, do you have any sense of the simulation use and abuse cases? Any thoughts on the pitfalls or promises of that?
STAN: I haven't thought about the examples you just gave, and hopefully they won't come about. On the opportunity side: I think we've all already started to internalize what some of the risks and dangers are, but I hadn't really given a lot of thought to all the opportunities until I started playing with it today and putting in the use cases I had for knowledge management, and the answers I got back suggest there is a lot of potential there. I've defined in my writing over the years 50 different components of knowledge management, so I went through trying to think about which ones this might apply to. I thought it would be maybe a handful; it turned out I came up with 32 out of the 50 that it could apply to. Then I went back and asked it, for each of those 32, what ChatGPT can do for that component, and it gave me back, in most cases, reasonable answers. There were a few that seemed a little forced, but otherwise they were coherent, they made sense, and if they were in fact valid they would really help knowledge management: it could do a much better job of creating user interfaces, a much better job of responding to search queries, and so forth. The other tests I put it through, just asking it questions, seemed to come back with pretty good answers. Just like you were talking about, I put in the question about Valentine's Day, and I've reproduced it here in the chat; it seemed like a pretty reasonable answer. So I think there are plenty of risks and dangers, but the value it can provide is what I'm more focused on, I would say.
ARNOLD Can you give an example of one of those things where it came up with something you hadn't considered, and that seemed like an interesting use case?
STAN Well, for instance, I asked how it could be used for training, and it said: it can do content generation, generating training materials such as lesson plans, modules, and presentations; it can act as a virtual instructor, integrated into a virtual learning platform, providing personalized, interactive training experiences and answering questions related to the training material; it can do assessment generation, generating assessments and quizzes to ensure learners have a solid understanding of the material; and it can do adaptive learning, tailoring training experiences based on the learner's progress, preferences, and needs. Well, that's all true — and of course we could be skeptical about it — but it's pretty good.

ARNOLD Yeah. And that's just one thing. I've got to believe you could do that for, say, a ten-year-old, right? You could have a very enthusiastic, encouraging mentor for a ten-year-old. That would be amazing.
But does the reliability issue bother you at all for knowledge management — either Stan or Dennis?
STAN Well, I would say that if you're going to use it, there are going to be two distinct use variations. One is focused on general knowledge, the world at large, which is where most of us have been experimenting with it. Then there's the potential use where you focus it on the knowledge and resources within an enterprise or an organization. If you can properly do that — if you can turn it loose on all of an organization's content and have it do the same thing it's done with the more universal content — and that's a big question, because as we know, Google the search engine differs from internal search engines by a big, big margin because of the differences in scale — but assuming it can work similarly on internal content, then I wouldn't be too worried about the negative side. I'd be excited about how it could do a much better job of finding content and serving it up in useful formats than what we currently do, cobbling together stuff from search results and so forth.
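Stan's contrast between dumping keyword matches and actually differentiating content can be illustrated with a toy ranking sketch. This is an editorial illustration, not how ChatGPT indexes anything: the documents, scoring scheme, and names are all made up, and real systems use learned embeddings rather than word counts.

```python
import math
from collections import Counter

# Hypothetical intranet "content"
docs = {
    "hr-policy": "vacation policy and leave approval process for employees",
    "it-setup": "laptop setup guide and password reset process",
    "km-intro": "knowledge management components and content curation process",
}

def score(query: str, text: str, corpus: list[str]) -> float:
    """Rank by term overlap, down-weighting words common to every document."""
    q_terms = query.lower().split()
    words = text.lower().split()
    tf = Counter(words)
    total = 0.0
    for term in q_terms:
        # idf: a word like "process" that appears everywhere contributes little
        df = sum(1 for d in corpus if term in d.lower().split())
        idf = math.log((1 + len(corpus)) / (1 + df))
        total += (tf[term] / len(words)) * idf
    return total

def search(query: str) -> list[str]:
    """Return document ids, best match first, instead of an undifferentiated pile."""
    corpus = list(docs.values())
    return sorted(docs, key=lambda k: score(query, docs[k], corpus), reverse=True)

print(search("password reset process"))  # "it-setup" ranks first
```

Even this crude weighting "differentiates" in Stan's sense — shared boilerplate words stop dominating the results — which hints at why a model trained on the whole intranet could beat a bare keyword engine.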
ARNOLD Dennis, do you have any thoughts there?
DENNIS Yeah, I looked at a couple of things you might have seen recently in the news. Universities are now giving it tests — I think it was Wharton that gave it an MBA operations management exam, and it got a B minus or something like that — and CNET was using it to write articles about money, and they had a bunch of errors. The errors in both cases seem to be math errors, which I thought was sort of funny: in order to make it more human, you make it poorer at math than a computer is. I think maybe people are treating this like the Wizard of Oz, something that knows everything, but if you think of it as an explainer rather than a calculator, I think that's where its niche is. When you were talking about simulations, and with what Stan was just talking about too, I think the ability to take a concept and explain it at whatever level is appropriate for the person asking the question could be really valuable. Just this morning I pulled up an old AI paper I remembered from a long time ago that I thought had some relevance here, and the author was talking about Horn clauses. I remembered them vaguely, but I didn't remember exactly what they were. So I had ChatGPT up, and I asked, "What's a Horn clause?" and it gave me an explanation that seemed to come right out of a logic textbook. I responded, "Can you explain that at the level a high schooler would understand?" and it came back with a very nice, simpler explanation. So to your point about a ten-year-old — and maybe in the opposite direction too, what you said before about a sophisticated AI — it might give an explanation at a default level, and then if you ask for a different level it can tailor it up or down depending on the sophistication of the reader. I think that kind of thing could be really powerful. If you want to do the math calculations to see what the interest rate is or whatever, we've got other tools for that; maybe this is not the right tool for that.
ARNOLD Yeah, that reminds me — just today a blogger I follow, Tyler Cowen, said that he now reads with ChatGPT. If he's reading about some foreign country and a region gets mentioned that he doesn't know anything about, he'll ask ChatGPT to tell him what he needs to know about that region, so he can go back and forth and follow the book. He says it's like having ten books open at the same time. I want to ask one more question, and then I hope the audience is primed to jump in. It's this big question: what will knowledge work look like in, say, 2030, seven years from now? Will it be about the same as now, or radically different? I'm leaning toward radically different. Knowledge workers are what Robert Reich used to call symbolic analysts — people who deal in words or equations or computer code. If you follow the software industry, it's being turned upside down, because ChatGPT is just great when put together with what they call an IDE — an integrated development environment. Skilled people can produce software incredibly faster. And you could see the same thing happening in all these fields — being able to put together a complex video in days that would have taken months or years. So let me ask the question this way. Suppose I say there's a 75 percent chance that seven years from now, the tools we've seen and the ones coming down the pike will be an absolute necessity for being an employable knowledge worker — people will have to be able to work with those tools or they just won't be productive at all. It looks to me like there's at least a 75 percent chance that scenario plays out. Stan, do you have an opinion — would you go over or under?
STAN I think you're right, Arnold. Didn't you write that you thought it would be a good time for people to start a business using ChatGPT? That's sort of another statement of its potential impact, isn't it? So I think you're right. To me this looks like an inflection point, as you pointed out in one of your blog posts — like the World Wide Web. It has that kind of potential impact.
ARNOLD Dennis, do you have a—

DENNIS Yeah, I think things will change in two different directions. One is things like this: especially as more people work remotely, what you don't get when you're remote is the person sitting in the cubicle next to you that you can lean over and ask, "Hey, do you know how to do this?" in a casual way — and I think ChatGPT can serve that role. For example, my wife was thinking about starting a business. I live in South Carolina, so I asked ChatGPT where the forms to start a business in South Carolina are, and it came back and said they're on the Secretary of State's website. It didn't give me a link; it just said that's where they are. So I followed up — "Do you have a link?" — and it gave me the link, not just to the website but to the page on the website where the forms were. What I was thinking was: if this means people can dump their stuff into their systems any old way they want and not have to worry about it, and something like ChatGPT can scan all that, figure out where everything is, and tell you where it is whenever you need to know — that's pretty powerful. The flip side is that every technology ends up shaping the way the information, the knowledge, is formed, because there's an iterative kind of loop. For instance, applications have a button to create a CSV file to export into Excel, so there's structure in that data specifically for a tool. I've read that companies are telling applicants not to bother with a cover letter, because the resume is being ingested into an automated system. Many websites are designed for SEO even more than for the actual content. So I would not be surprised if something like GPT gets downsized to where it could be deployed within an organization, or even on the web, and once people get a feel for how it ingests information — they'll probably check their own content against it to see what it's saying about them — they'll start tweaking their content to make it as palatable as possible to these tools. There's a sort of iterative feedback loop that changes the shape of our knowledge to fit the tool we're using, rather than the other way around.
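The CSV-export pattern Dennis describes — structuring data specifically so another tool can ingest it — looks like this in miniature. The record fields here are invented for illustration:

```python
import csv
import io

# Records as they might live inside an application (hypothetical fields)
records = [
    {"name": "Acme Corp", "region": "Southeast", "revenue": 125000},
    {"name": "Globex, Inc.", "region": "Northwest", "revenue": 98000},
]

def to_csv(rows: list[dict]) -> str:
    """Flatten records into CSV so a tool like Excel can ingest them.
    The csv module quotes fields that contain commas, like 'Globex, Inc.'."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(records))
```

The "export" button exists purely for the consuming tool's benefit — which is exactly the feedback loop Dennis predicts for LLM-readable content.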
ARNOLD Yeah, those are all great observations. On the last one — I remember when I had my commercial website in the '90s. At first we focused a lot on the design of the home page, because people were going to come in through the home page. Then once search took over, people were coming in to the back pages, and you had to completely rethink how to accomplish what you wanted from the website's point of view — get people where we wanted them to be when they were coming in to different pages, not through the home page. That's what your last point reminded me of. Anyway, I suspect we have a very interesting audience, so let me get a gallery view up. If people will raise virtual hands, or maybe physical hands, I'll be able to see some of you and call on people. Please don't make long comments, because I think there will be other people who want to talk, and as they say on Jeopardy, be sure it's in the form of a question by the end — it doesn't absolutely have to be. Okay, JC, can I ask you to unmute?

JC Yeah — can you hear me? So, you asked a lot of questions about ChatGPT and knowledge management. I think one way to look at it is as a knowledge assistant. And you asked the question: will it fundamentally revolutionize the
knowledge worker? The answer is yes. How will it do that? I'll give you some examples of what I've been doing with ChatGPT. I needed to write some code — I can write code, but I asked it to write the code for me, and it did. I also asked it to write a blog post, giving it some information. What is very important to understand with these generative models is introducing the context of what you're doing. You can tell the assistant, "Imagine you are a lawyer, and I need a contract that follows the rules of the State of California," et cetera, and it will provide you with a contract following the rules of the State of California. Now, the question is how you trust the answers, and there is always this question of expertise. If I'm a novice looking for an answer in a domain I have no knowledge of, I don't really have a way to judge whether I can trust the answer. If I have some domain knowledge, I can use it to appreciate the validity of the answer. But what I see happening is that these generative models will come into the enterprise — ChatGPT is already available as an API — and you could train the model on your own enterprise data. What Stan said is very relevant: companies do a terrible job of organizing their internal data, and a tool like ChatGPT could do a much better job of that. Today companies use search engines, but — I encourage anyone who runs a company to look at the search engine logs — the average number of keywords in a query is two or fewer. There's a limitation of the technology imposed by the human interaction with it, and that number has not changed in the last 15 years. I'm a former chief knowledge officer at Microsoft, and I can tell you that I talked to my Bing colleagues, and internally it's the same number. So I think there will be a fundamental, major change there, because we will accelerate tasks and give better results on most basic tasks. Where the system will have to improve is everything involving reasoning, which we can see it's not ready to do. But I think we are touching the first universal knowledge assistants; they will need some specialization, and as time goes on we will see tremendous improvement. It's a little bit like — you and I are probably of the age of the slide rule, then the TI calculator, then programming in Fortran. This transformation of a tool, going from
a slide rule to a computer, is certainly what's happening with AI right now with these kinds of models. So yes, I think it will be extremely transformative, and also because it's available everywhere—

ARNOLD Okay, sorry, I'm going to have to ask you to wrap it up, because I'm sure other people want to talk. And I want to say one thing: we've been talking about ChatGPT and search, and one difference is that ChatGPT maintains state. Dennis mentioned asking a question and then reformulating it — that's very natural with ChatGPT. If that's all Google needs to fight this, they can do it, you know, next week: maintain state and let people reformulate their queries. So if that's the revolution — which it may be — it'll happen really quickly. All right, Robert Boyd, you have a hand up. Can I get you to unmute?

ROBERT Sure, hi — thanks. Well, you talked about having ChatGPT write code for you, and
I've been using Copilot for a little while — GitHub's tool built on OpenAI Codex for writing code. I was using it to write React and Material UI — it doesn't really matter what it is, but it was new to me — and I learned it quite a bit faster, I think, because it kept me out of the semantic weeds. If you say, "Write this kind of code using React and Material UI," it'll give you ten suggestions. You can't exactly cut and paste them, but it did make some really interesting suggestions, of a quality I've never gotten in an agile meeting, where everybody else is sitting around supposedly commenting on the best way to solve your problem but really wondering about their own problems. At least ChatGPT was paying attention to me. But I'll tell you three things I've seen that it doesn't do. You say, "Write some code like this," and it will give you a suggestion, but what it won't suggest — what it doesn't know — is that, hey, you could have written this code better if you already had a TypeScript interface file or something like that. It doesn't go backwards; it starts from where you are and goes forward a little way. It's not like a really good mentor who says, "You don't need that kind of code, you need this other kind to solve that problem, and you need these prerequisites." And it doesn't go very far forward: it's good for the next five or ten lines of code, but if you're missing something — say you forgot to tell it to include a React hook called useState to maintain the state of the information people are typing in and changing — it won't tell you that unless you know you need it. So there's a good advantage for some of us who were born in 1961 and kind of know all the pieces you need but don't really know the specifics — say, what Material UI has to offer compared to Semantic UI and some of the others. It'll come up with those specifics for you. But other than that, it doesn't really do the mentoring thing quite yet.

ARNOLD Okay, let's have, uh, Dennis.

DENNIS Can I? Thanks. I'm curious, getting back to the original question of what you would call enterprise knowledge management. A lot of the
functionality of ChatGPT seems to come from the fact that it has this vast textual input — petabytes' worth, I have no idea how much really. I'm wondering what the minimum amount of text is that you would need within an enterprise to feed into this engine and have it be useful. It obviously operates at a very great scale when it's looking at the whole internet, but when it's looking at just what's within an enterprise, how much data does the enterprise have to have before it could actually become useful? Does anybody know the answer to that?

ROBERT Well, I know nothing, but I would guess you could do it at small scale, as long as it knows semantics and language structure and grammar from its large model.

DENNIS Does it know that, though? In other words, can it transfer that from the large model to a smaller model, or is it an engine that just works on what it's given?

ROBERT I'm going to guess it can transfer. One thing I've noticed is that it reads the files I have in my project and suggests things related to what I'm working on, even if I haven't told it there's some function in my file — it looks for things that seem relevant. So if your company knowledge was in a particular file and you were writing in that context, it would probably look there first. It certainly does for Copilot.

ARNOLD Yeah. Stan or
Dennis, do you have any thoughts?

STAN Well, I would say that, as you said, you could do it like Copilot, by focusing it on a set of content, but for it to work best you'd probably turn it loose on your intranet and let it crawl and find everything that's there. There's going to be a lot of stuff that isn't all that useful, but hopefully it can differentiate — and that's the problem with search engines: they don't differentiate, they just throw out a bunch of stuff that you have to wade through. The advantage of this would be that it could look through all of your enterprise content and then figure out, using its own algorithms, what to do with it. So I think the answer is: turn it loose on your whole intranet, and if necessary figure out ways of giving it access to things behind passwords and security. The more content you feed it, the better. Dennis, any thoughts?

DENNIS Yeah — when you were talking about generalized capability versus specific content, that harkens back to the old days of expert systems, where there was an inference engine and a knowledge base. The inference engine was the general-purpose way of churning through things, and the knowledge base was the specific content. Again, I don't know anything about what this tool is like behind the scenes, but if it were ever going to be deployed within an organization — if they hope to sell it that way — I would think you'd have to have those two components: the basic engine that drives how it looks at things and figures things out in general, and then the application of that to the specific content of the particular organization.

ARNOLD Yeah — if you give its digestion system a different set of tokens, I think it would kind of work that way. Okay, Tom, you've had your hand up.

TOM I had a quick comment and question. I understand that Google made a 300-million-dollar investment in a competitor to ChatGPT called Claude — I
think named after Claude Shannon — and that it's a "constitutional AI." I'm very interested in the difference between a constitutional AI and ChatGPT, because I have a feeling that the constitutional AI is going to be more of the expert-system hybrid type, but I'm not sure. So I was hoping someone would have an answer to that.

ARNOLD I don't know the term "constitutional AI."

DENNIS I'm not sure either, but I did see that something came out from Google — I think just today. They have a tool called Bard, and an internal memo went out that they've got to deploy it internally and start testing it right away. They're very worried that ChatGPT is going to suck up all the press, so they've got to get theirs out quickly. So they do have one coming out to compete with it. I didn't hear anything about constitutional AI there, so I'm not sure exactly what that means.

STAN Just ask ChatGPT — that's the answer. Yeah, it gave the answer: building social values into AI systems to ensure they operate in a responsible and fair manner.

ARNOLD Okay. Yeah, well, you know, this
whole project started out of the fear of the paperclip-maximizer scenario, and most of the people who have that fear are very angry that it was let out to the public as soon as it was. Ernie, you had a physical hand up.

ERNIE Thank you — I've unmuted, of course. A couple of comments; I'll make them real fast. First, smaller companies are not going to be able to adopt their own engine, because most of this requires massive parallel processing, which requires many, many computers. If you're a small company and you want to use something like this, you're going to have to use a tool other users are also using — pay a subscription fee or something to access a general model of some kind. Which brings up a question I've had for a couple of weeks now: how do you get ChatGPT to sign a non-disclosure agreement? I don't know, but it's an interesting question. My daughter-in-law works for IBM; she's using it, and her groups are using it, but I asked her what she was doing about non-disclosure, because they're asking it to deal with some questions that really ought to remain private inside the company. So I'd like to hear anyone's opinions about how that's going to work in the future. Secondly, Google is having a public meeting on Wednesday — some kind of announcement — and the rumors are that it's about a competitor to ChatGPT. I don't know whether that's true, but that's what I've heard. And lastly, going back to coding: I've been disappointed recently in some of the things I've asked it to do, mainly because it doesn't cleanly handle a string as a string. You can write a date out as a string and tell it you want to use the system date too; it will use the system date as a date, but it won't convert the string to a date — at least in Oracle's database — unless you put quotes around it to let it know it's a string. Novices aren't going to see that, so for anyone asking it who isn't really familiar with the way systems work, it's not going to perform very well. That's all I have to say — though I am interested in the NDA question.

ARNOLD Well, you know, that problem kind of precedes this technology. I started this session mentioning that chief technology
officer of a Big Four consulting firm — and it wasn't McKinsey, but when I think of McKinsey, what they sort of specialize in is talking to all the big players in an industry and sharing, in a very careful way, what's going on at the other five players with the sixth player, while disguising where they're getting their information from. It's a very dicey kind of thing. So I don't think what you're raising can be solved with something as simple as a non-disclosure agreement, because McKinsey has to enforce in a detailed way — has to know what it can disclose and what it can't — and they probably cross the line, at least indirectly, all the time. Okay, Bart, can you unmute and ask your question or give your comment?

BART Yeah, I wanted to expand on Ernie's NDA question. Would you be looking at an NDA for the questions you ask it, or an NDA for content you have written before — like, "Hey, I do not want my stuff to be indexed or used to train the algorithm to begin with"? Where would we be looking for an NDA — or maybe both of those situations?

ARNOLD I would guess all of those possibilities. Stan, do you have any thoughts? You've done some consulting — how do people deal with it now?

STAN Well, I haven't had to deal with NDAs much. Typically, when you're consulting,
maybe they ask you to sign something like that, or they may just say it's part of the work agreement. I haven't seen it come up much in my own experience.

ARNOLD Okay. The queue is open, and we're approaching the end of the hour, so I'll give people a couple of moments to come up with any last questions. Okay — Tom, go ahead.

TOM Yeah, I wanted to comment on the knowledge management. I remember 15 or 20 years ago there was this huge search at Dell for the one source of truth, at the accounting level. Right now almost all of these multinational organizations have multiple jurisdictional silos of information, and they've been trying to organize them so that the president can get an overview, and if there's a section he wants more detail on, he can go into that section and get an overview of it, and keep going down, and it will all be consistent. While I was there 15 years ago, they failed to create such a single source of truth. When I was at IBM, even eight years ago, they were doing a similar thing with Blue Harmony, which they then gave up on in part, moving to something in SAP, and now they've gone to something called data lakes, where they're trying to make a lake of data so that it's available to everybody in the organization. What I'm certain of is that AIs which are specialized to the organization, but have the capability of talking the way ChatGPT does — some hybrid mixture of that — are going to allow the executives to get rid of a lot of middle managers and thereby, as Arnold mentioned earlier, raise their personal budget but lower their head count.

ARNOLD Okay, I don't see any other hands up, so I'd like to thank everybody for their questions, their thoughts, and the things they added in chat. I'd especially like to thank Stan and Dennis, and I'll give Stan the next-to-last word and Dennis the last word. So — Stan?

STAN Okay. One thing to point out is that the main use of AI, in my opinion, is to augment human
capability, not to replace it. So I think there's a lot of potential for a tool like this to give you something that you can start with and work from, as opposed to something you take and always use as is. In that context it can save us a lot of time and effort, but we might still want to spend some time checking it, verifying it, and refining it.

ARNOLD Okay, and Dennis, the last word.

DENNIS I'll put a link in the chat to a paper that's 35 years old but that I still find interesting. The idea is that we shouldn't get hung up on the term "artificial intelligence." The author makes the case that every major technology has a habit of first being named in an adjective-noun form that relates back to something already known: locomotives were the iron horse, a car was a horseless carriage, printing presses were originally called artificial writing, radio was the wireless telegraph. There's this pattern of new technologies being named as old things until we figure out exactly what they are and realize they're completely different from anything we've had before. So when people think of this as intelligence — it's not intelligence. It's another tool, a tool to use for information and knowledge management and practice, but it's not really intelligence.

ARNOLD Well, I'll agree with that, and then we'll sign off. Thank you, everybody.

Thanks, Arnold. Thank you. Bye.