What Netscape brought to the market was a product that was easy to use. A lot of technologists don't seem to understand this, but tech products are not used by the masses because the tech is cool. Tech products are used by the masses when the tech is easy for normal people to use. And, while ChatGPT's or Claude 3's chat interface is miles easier to use than the arcane command-line interfaces that were common before Netscape came on the scene, it is *still* hard for normal people to understand how to interact with this tech. Yes, as Ethan Mollick has repeatedly said, if you sit in front of your computer for a few hours playing with the tools, you will figure it out, but even that learning curve is far too steep for most people to traverse.
Yes! AI IS in the hands of millions of K-12 educators already. I was at the ASU+GSV AI expo in San Diego a few weeks ago, held before their well-known annual edtech summit. https://www.asugsvsummit.com/airshow
Magic School AI alone has 1.5 million educators using it. https://www.magicschool.ai/mission.
It's worth signing up and exploring it. The platform provides 60+ ready-to-go prompts matching what K-12 teachers actually do: creating newsletters, quizzes from videos, lesson plans, IEPs, and emails to parents. It's even set up to draw from quality sources aligned with learning standards. They've provided a way for teachers to EASILY leverage AI and get decent results, saving them from having to start from scratch with ChatGPT or another AI model. It's brilliant, and I wish the equivalent existed for my profession (B2B marketing).
AI is also now embedded into existing edtech tools for curriculum, analytics, supplemental learning, and administration. https://www.asugsvsummit.com/airshow/partners
"I think that LLMs are missing these sorts of intermediaries. We are waiting for someone to come up with a Netscape, an eBay, and a GeoCities for large language models."
I've found many intermediaries in the SaaS (software as a service) world, though none are ubiquitous!
There are thousands. I am seeing every existing platform in my tech stack integrating AI, so it's showing up in what people and orgs already do (sales, marketing, customer success, finance, operations). I also see new tools launching daily that focus on specific needs. For example, there are about a dozen AI platforms that will take everything customers say about a product or company (support tickets, chats, website, social media, surveys, emails, phone calls) and tell orgs which bugs and requests they might pay attention to. Another dozen offer really, really good chat support that is going to reduce human support agents for sure. It's an arms race.
Tools I use that let me save and optimize my prompts, to leverage AI for repeatable work (not code; mostly writing and research, tbh), include Copy.ai and text.cortex. They do provide smart overlays. I have looked through the public saved prompts on these platforms, and, as with ChatGPT, most aren't great. Not yet a Netscape or an eBay.
Also, if you haven't tried Google's AI tool for academic RESEARCH - have a look at https://notebooklm.google.com/. You manually put in your research papers/sources. Then use AI to dig into them. We might end up in a world where oral exams are the only option to prove learning? idk.
Slovakia has long used the oral exam approach. So the students study some 20-25 questions, intensely, and get 3 to answer. Then a grade.
Takes quite a bit of time for exams, but that's what the professors get paid, poorly, for.
Not much grade inflation here yet; my professor wife often has to fail a student, and seldom gives a 1 (of 5, the equivalent of an A) as a grade.
William Gibson wrote: “The street finds its own uses for things.”
"LLMs have not yet broken through to the mass market. This reminds me of the World Wide Web before Netscape."
I don't exactly disagree with you, but maybe it's worth adding more detail to the history. My memory is that email was a pretty important business tool well before Netscape. One could argue AOL brought the internet to the masses before Netscape came on the scene. I wonder whether the Yahoo search engine wasn't as important as Netscape, or even more so, even if Netscape got far more attention. Either way, lots of apps followed the GUI and search capabilities. They made the internet just about as ubiquitous as it could be until the smartphone. eBay seems near the top of that list, but I'm missing how GeoCities gets a mention, even if it belongs somewhere on the list.
Until then... Microsoft has blended its Copilot into regular Bing searches. While I thought LLMs were interesting, I couldn't think of much to do with them day-to-day. Now that my daily Bing searches also integrate Copilot results, doing a plain old Google search that just displays a list of links seems primitive.
This is how it might work out: The technology just works its way into what we already use in ways that won't seem revolutionary until you look back years later -- like looking at an old Yahoo page from the 90s.
I am savagely indifferent to it myself, but if I were to venture a guess, it’s arrived at a moment when people are less verbal, in terms of being able to read or write, than a few decades ago. Literacy, fluency with language, is moving in retrograde with, uh, progress. Now you might say, oh, that’s perfect: AI is just in time to help people with their searches, with writing whatever little text they need to write, or with absorbing what they need to understand.
I think that would be a serious misconception, but y’all do y’all :-).
An analog: my elderly father has deteriorated mentally, and like many old people gets stuck on a theme. Lately it is an oft-referenced bafflement brought on by watching Fox News and reading the WSJ. He recurs to this sort of talk: “We never felt this way about the Jews. I went to school with them. I threw their papers (he lived in the modest section adjacent to the wealthy neighborhood where prosperous, majority Jewish families built their houses, originating when the deed restrictions of River Oaks had not yet been broken though they would be pretty shortly). It never would have crossed our minds.”
One of his best pals is Jewish, and like all the gin rummy players at the country club - now integrated as to religion - he is of course very pro-Israel. The news makes him sad. He never imagined he’d see it.
So he is not very imaginative and his understanding of things is not great.
I try to make him feel better by telling him it is less a phenomenon of anti-Semitism than of anti-American/anti-Westernism. But he doesn’t know what they have been teaching in schools all these decades, now coming to fruition.
And he doesn’t understand that immigration (which the country club old men, served by a series of obsequious immigrant waiters, their yards needlessly clipped by immigrant lawn crews, never had a problem with) has made this a different country than the one he knew.
I can't comment on how people used to be because I wasn't paying attention, but I can certainly vouch that there is something very wrong with how people use language.
Personally, I attribute it to how the news, or "reality" in general, is increasingly composed of propaganda meant to persuade rather than inform. The human mind is a lot like an LLM: its performance and output are a function of its training.
What is your evidence that people are less able to read and write?
Seeing their products. Listening to them talk, both in person and on videos or on the news. Seeing the results from the school districts honest enough to post them. Watching the "text" in books and publications aimed at an average-IQ audience get dumbed down over time.
What is your evidence that people are in the main, the same as ever?
I can think of many reasons abilities might have gone down and many why they might have gone up. If anything, I'd say the reasons for going up are slightly more compelling but it also seems likely the differences are small, in either direction. Status quo is a good first guess without overwhelming evidence.
Some of your evidence seems rather anecdotal. Surely you don't mean an individual's skills have degraded. So do people of a given age have worse abilities than people of that same age had in the past? I'd bet that as we (particularly you, me, and others of above-average intelligence and education) get older, we become more aware of weaknesses in others that might not have been as obvious when we were younger. That's certainly true of me. I did a lot of writing in my job and my skills improved massively, even if they still leave a lot to be desired. Is your anecdotal evidence influenced more by real changes in other people or by how you have changed? I'd bet on the latter.
I hear a lot about test scores going down, but I know this is often, if not always, influenced by more people being tested. Looking at these SAT scores, they've gone down since 1972, but that drop happened in the first couple of years, and they've been flat since 1979. Can you find scores that tell a different story? Almost certainly, but is the story compelling?
https://blog.prepscholar.com/average-sat-scores-over-time
To me, "dumbing down" sounds a lot like writing more clearly. Maybe it's some of both, but when I think of the best writers of today (not necessarily the best thinkers), I think of people who write incredibly clear and readable text. If that's "dumbing down," I'll take it for sure.
I'm not speaking about skills around the jobs people do, insofar as they have jobs. I have no anecdotal evidence about that except that the department of transportation of our state seems to have totally lost the plot on how to conduct roadwork while meeting the necessary safety requirements!
My observation is confined to language use.
I have no idea who takes the SAT anymore, or what it looks like.
Late to this, but there is also the matter of language. I'm a complete layman on AI, and indeed all IT, and as much as I admire The Zvi and have read his stuff for a while, I understand less than 2% of his posts on LLMs and AI. It's mostly gobbledygook, at least to me, and he is a writer who is easy to understand on other subjects. Everyone knows what it means to google something, but I bet that on the average British or American street, or any other street for that matter, fewer than 5% of people would know what LLM even stands for. I would think that most people sat in front of an LLM would struggle before they found a utility for it in their day-to-day life.
At this point it is easy to be impressed with what LLMs can do, but I have found their limits quite readily, even as someone without much idea of how to get the best out of one, which is exactly the sort of user they should be impressing. They're not obviously better than Google for most things for most people. They can't give the user copyrighted material. They can't answer a question like "give me the transcript of what Jordan Peterson told Joe Rogan about Dostoyevsky on their podcast eighteen months or so ago." You couldn't, for instance, ask one "Find me the highlights video on YouTube featuring Shohei Ohtani's 29th home run of 2023 and tell me exactly which time to skip to."
For obscure reasons I shan't go into, last week I wanted to know whether the grazing reserve in northern Senegal was highly comparable in size, in square km, to any of Nigeria's 36 states. It took twenty minutes of to-and-fro. The thing just did not get it at all.*
Frankly the only real-world use I have yet encountered for LLMs is people cheating on job applications.
*It's just about the same size as Plateau state, incidentally...
LLMs are not always truthful. Not 9 of 10, 99 of 100, nor even 999 of 1,000 right answers are "good enough" if the user can't trust the LLM bot to be correct.
There are plenty of sources on the internet for fake news and fake facts.
An AI bot which doesn't, ever, state wrong facts authoritatively is probably a prerequisite for a killer app. It might be enough for all potentially wrong answers to be flagged, like "Maybe Arnold Kling went to Harvard," with the "maybe" fudge word indicating where the fact might be false.
Maybe in K-12 education, where the facts taught are well known and documented, an LLM teacher won't make any mistakes. But repeating sarcastic advice to "add bleach to baking soda" to clean sweaters, because it went viral for being so terrible yet was quoted as "this excellent advice" -- it's not clear LLMs can avoid that mistake.
It might be that LLMs are a tech dead end for true, and only true, facts.
The killer app is a talking software agent, like HAL 9000, that accurately does, digitally, all the stuff you as a human could do.
We're not there yet, but getting closer -- 2023 was a big step.
“I’m sorry, I can’t do that, Dave.”
Interesting analogy, but I just don’t think these models are, or even can be, as generally useful as the internet. Rather than saying these models are like the internet waiting for Netscape, I think about them as being like general relativity waiting for Garmin.
GPS wouldn’t work without taking into account general relativity, but general relativity isn’t the product. Maps and geolocation are the product. In the case of the internet, the product is cheap long distance communication, and non-tech normies needed Netscape to properly use the communication channel.
A computer that’s able to generate reasonable text (or images or whatever) at scale is just not a problem normies ever have, in the same way that properly accounting for space-time curvature isn’t a problem normies ever have. Might there be a product someday that leverages these newfound computing powers? Maybe, but my guess is that if there is a “killer app” built on the models, users/customers will care about *what* the app does (e.g. maps and driving directions) more than *how* it does it (e.g. satellite triangulation).
Also needed for Bitcoin.
"They succeeded in that vision, even though they faltered as a company."
I don't disagree with you, but it still seems an odd statement. How many "successful" companies make the owners as rich as Andreessen and Bina?
You are making me less amenable to "falter." You are mostly correct in saying their revenue stream went away. Up until that point they had success. Afterwards they would have needed to become a completely different company to survive. They didn't, and they sold the assets for a very high price. Sounds like success to me.
Netscape also had a blockbuster IPO that allowed early people to take some chips off the table. Andreessen didn't become super rich until LoudCloud sold to HP, and later a prescient investment in Skype, which was sold to Microsoft.
I don't doubt Andreessen got richer later, but he made mid-1990s millions on Netscape.