GPT/LLM links, 6/12
Brian Chau predicts a slowdown; The Zvi pooh-poohs my idea; Moses Sternstein on how AI will leave out the VCs; Marc Andreessen makes the pro-AI case
AI progress in general is slowing down or close to slowing down.
AGI is unlikely to be reached in the near future (in my view <5% by 2043).
Economic forecasts of AI impacts should assume that AI capabilities are relatively close to the current day capabilities.
Overregulation, particularly restriction on access rather than development, risks stomping out AI progress altogether. AI progress is neither inevitable nor infinite.
He offers engineering analysis to support his view that machine learning is not on an ever-increasing growth curve.
Even if AI capabilities stay relatively close to current day capabilities, the productivity impact over the next twenty years will still be large. It really takes a long time for new technology to have an impact.
In the case of large language models, people still disagree about the use cases: many people see research assistants and low-level writers; instead, I see simulators and educators, as in The House-Concert App, the Teach-a-Thon, and personal tutors. Of course, we could see both use cases, plus others.
But the apps to do these things well and make them usable have yet to be written. The business models have yet to be worked out. And you have to allow time for gradual adoption and rollout.
Speaking of the House-Concert App, Zvi Mowshowitz writes,
nothing about this in any way requires AI? All the features described are super doable already. AI could enable covers or original songs, or other things like that. Mostly, though, if this was something people wanted we could do it already. I wonder how much ‘we could do X with AI’ turns into ‘well actually we can do X without AI.’
I could be snarky and say that X = extinguish humanity. Unlike the Zvi, I do not think it is useful to think of AI as a new existential threat. I agree with Marc Andreessen (see below).
Unfortunately for VCs, their status power was derived from being integral to the hottest thing in hottest things (and if they’re no longer integral, then they no longer enjoy the status). To be sure, they can still make plenty of money by investing in boring old software companies (assuming they’re capable of something other than trend-chasing, which many are not), but the status is going to shift to where the action is. From a capital allocation standpoint, the action is decidedly with the formerly boring Big Cos.
If you can get going with hardly any investment, then you don’t need to sell your soul to venture capitalists. If you need a ton of investment, then only a giant, well-capitalized firm can fund it. Somewhere in between is the sweet spot for VC. Sternstein says that the sweet spot is not there for AI.
I disagree. I think that the opportunity is to build apps on top of LLMs. Without apps, LLMs are like the Web in 1994. It was a Zvi world, in which anything you could do using the Web you could do—usually easier and better—using other available tools. Venture Capital can fund the apps. Many of these will fail, but some will be successful. The successful ones will come to symbolize the AI world, just as Amazon, Facebook, and Google symbolize the Web world.
My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.
Later,
if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content was already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.
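Andreessen's suggestion amounts to a content-signing scheme. As a minimal sketch of what that could look like (my construction, not his), here is Ed25519 signing and verification using Python's cryptography package; key distribution and identity registration, the hard parts of any real system, are left out.

```python
# Minimal sketch of cryptographically signing content so others can verify
# it came from a known source. Uses the `cryptography` package (Ed25519).
# Key distribution and identity registration, the hard parts of a real
# verification system, are omitted here.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The creator generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: the creator attaches the signature when publishing the content.
content = b"This video was recorded by the account holder, not generated."
signature = private_key.sign(content)

# Verification: anyone holding the public key can check authenticity.
try:
    public_key.verify(signature, content)
    print("Valid: content is unaltered and was signed by the key holder.")
except InvalidSignature:
    print("Invalid: content was altered or signed by someone else.")
```

Any tampering with the content (or a signature produced by a different key) makes verification fail, which is what would let a platform mark unsigned or mismatched media as unverified.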
Substacks referenced above:
Brian Chau
Zvi Mowshowitz
Moses Sternstein
Marc Andreessen
No real disagreement on the fact of apps. It's just that they don't need VC money for a chance to discover product-market fit: the barriers to entry are low enough that you can bring products to market fairly quickly. The flipside is that the apps are unlikely to have enduring, capital-lite pricing power . . . because the barriers to entry are too low. Bootstrapping will be the new normal.
re: AI use cases, I can't recall if I posted this here before, but there is a page about using AI to fix journalism: making it efficient enough to support more local news outlets (e.g. transcriptions and summaries of public meetings, plus other automated filtering, which to the public may be better than no local news at all), as well as using AI to help mainstream media regain the public's trust. Niche media can be biased, but mainstream outlets claiming to be neutral would benefit from having AI nudge journalists toward neutrality, e.g. explaining to a progressive journalist how a conservative would react to their article. In the realm of software there is the concept of "pair programming," where two programmers work together; this would be "pair journalism," except with a human paired with an AI (a sketch of what that nudge might look like follows the link). It's here:
https://FixJournalism.com
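As an illustration, a minimal sketch of that "pair journalism" nudge, assuming the OpenAI Python client (v1) and an API key in the environment; the model name, prompt wording, and the review_for_balance helper are all hypothetical.

```python
# Hypothetical "pair journalism" sketch: ask an LLM how a reader from a
# different political perspective would react to a draft article.
# Assumes the OpenAI Python client (v1) with OPENAI_API_KEY set in the
# environment; model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_for_balance(draft: str, perspective: str = "conservative") -> str:
    """Return the model's guess at how a reader with the given political
    perspective would react to the draft, with suggested neutral phrasing."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a 'pair journalist.' Point out passages in the "
                    f"draft that a {perspective} reader would find slanted, "
                    "and suggest more neutral phrasing for each."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Usage: feedback = review_for_balance(open("draft.txt").read())
```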
Though the potential revenue is unclear, which is true for many AI applications at the moment, especially when "apps" are merely thin layers over the big models and so risk heavy competition from everyone chasing the same low-hanging fruit: tech in search of a problem. The issue is whether there are problems that have been searching for answers that AI can solve, and where there is some way to grab the niche early and develop a competitive advantage.