Mr. Khan is a nice guy who means well but ... The main reason that students don't learn is not that the presentation is all wrong but that they don't really care about the subject matter ("When am I ever going to use this?"). If a student cares, Khanmigo may be great, but if a student doesn't, it won't do much to "move the needle". To the extent that school "works" nowadays, it is because young people are forced to be there and then are forced to do well on various assessments to get their diploma. (Alas, most forget a large part of it within a few months of the assessment. That's why I say "to the extent that school works".)
Being forced to do things—and voluntarily submitting to do the desired things—is valuable human capital formation in and of itself. Employers value conformity.
https://www.overcomingbias.com/p/school-is-to-submithtml
"This could save me a lot of time. And maybe eventually intellectuals will stop writing books with lots of throat-clearing, academic jargon, etc."
LOL! The "intellectuals" will just use LLMs to write even more such books.
re: "People need to build the “killer apps” on top of LLMs. "
Unfortunately, some of those may be niches where it's unclear to startups whether providing the layer atop the LLM is enough to stay ahead of the competition and generate enough revenue. For instance, this page:
https://FixJournalism.com
discusses using them to fix the news media (I forget if I posted about it on this substack before): to nudge mainstream media reporting toward the neutrality that might lead consumers to trust it again (mainstream outlets, that is; a niche outlet with an acknowledged bias is a different case), and to otherwise aid in improving journalism and enabling efficiencies that might help rebuild local news sites. It's unclear what the business model would be for some of that, or what it would take to inspire a startup to chase it, especially if a dinosaur industry resists updating.
It's also unclear what will happen once the big players insert AI into tools like their office suites (assuming Google improves its AI to be more useful): whether there will be room for niche players, whether it will be easy to swap out their AIs for third-party ones, and whether they will even be allowed to create plug-ins or the big players will prevent competition by disallowing other AIs from being added. AI competition is needed to prevent, for instance, arguably woke AIs from steering most of the world's writing, as this page discusses:
https://PreventBigBrother.com
There was a video going around Twitter the other day that superimposed Ron DeSantis' face on a television character. Now, I could still see it was a fake rather than merely knowing it was fake, but it is getting harder and harder to spot the tells in such videos; the fakes get better every single month. Within a year or two, one won't be able to tell fake from authentic by eye alone.
We’ve had picture-perfect “photoshopping” (by humans) for 20 years. How often are people fooled by this for things that are either interesting or important? Not very often, and I think the reason is *not* that we can detect the fakes (usually you can’t). It’s rather because the primary impact of photoshopping has been for us to treat photography as equivalent to painting.
So AI-generated fakes will have no impact on still photos (because we've already accepted that they're unreliable). Their impact on video and audio will be the same. It's only during the brief transition period, when our heuristics about A/V (that they're reliable) don't match reality, that we'll see some people get fooled. But that phase will be *very* short-lived.
"Surely, if technological advances and automation were likely to lead to mass unemployment, we would already have arrived at a world where only 10% or fewer of adults have jobs?"
I am frequently surprised that even economists don't get this.
It's unclear to me how we trust AI to summarize or otherwise edit content without introducing fake information.
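One partial answer people explore is an automated grounding check: before trusting a summary, verify that each of its sentences is actually supported by the source text. Here is a minimal sketch of that idea; the word-overlap metric, the threshold, and the function names are all illustrative assumptions (real systems would use something stronger, like an entailment model), not an established method:

```python
import re

def sentences(text):
    """Naive sentence splitter; good enough for a sketch."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def support_score(claim, source):
    """Fraction of the claim's words that also appear in the source.
    A crude proxy for 'is this sentence grounded in the original?'"""
    claim_words = set(re.findall(r'\w+', claim.lower()))
    source_words = set(re.findall(r'\w+', source.lower()))
    if not claim_words:
        return 0.0
    return len(claim_words & source_words) / len(claim_words)

def flag_unsupported(summary, source, threshold=0.7):
    """Return summary sentences whose overlap with the source falls
    below the threshold -- candidates for introduced fake details."""
    return [s for s in sentences(summary)
            if support_score(s, source) < threshold]

source = "The council voted 5-2 on Tuesday to fund the new library."
summary = "The council unanimously approved the library on Monday."
print(flag_unsupported(summary, source))  # flags the altered sentence
```

Even a crude check like this catches the example's invented details ("unanimously", "Monday"), though it would miss subtler rewordings, which is exactly why the trust question is hard.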