9 Comments

"We would rapidly go from human-level to vastly superhuman AI systems."

What does this mean?

Do we compare to an IQ of 100? Is it an IQ of 150? 180? 300? What does an IQ of 300 even mean?

It seems that AI's intelligence is already vastly superior in many ways. The big question is whether it will stay like Raymond in Rain Man or overcome its gaps.


And how will it learn to stop hallucinating, a.k.a. lying?


I will stick with my metric: I will start believing in artificial general intelligence when one of these AIs solves a math problem that hasn't already been solved by human beings, and I am talking about a mathematical proof, not a brute calculation.


I don't doubt that AI can do OOMs more work. What I wonder about is the "pollution" of all its output. We already see how it's clogging up Google. What happens when there are three orders of magnitude more junk in the info space?
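A rough back-of-the-envelope sketch, with entirely made-up document counts, of what that dilution does to the odds that any given item in an index is human-written:

```python
# Hypothetical numbers only: how 3 OOMs more machine-generated junk
# dilutes the share of genuine, human-written content in an index.
human_docs = 1_000_000   # assumed baseline of genuine documents
junk_today = 1_000_000   # assumed current volume of machine-generated junk

for oom in range(4):                      # 0, 1, 2, 3 orders of magnitude more junk
    junk = junk_today * 10 ** oom
    signal = human_docs / (human_docs + junk)
    print(f"{oom} OOMs more junk -> {signal:.1%} of the index is genuine")

# 0 OOMs -> 50.0%, 1 OOM -> 9.1%, 2 OOMs -> 1.0%, 3 OOMs -> 0.1%
```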


Leopold Aschenbrenner's argument appears to depend, crucially, on the ability to overcome data constraints in future models. I wrote a bit about this here: https://davefriedman.substack.com/p/thoughts-on-leopold-aschenbrenners


Biggest "bubble" issue isn't so much the time-horizon (although that is an issue). It's that the CapEx itself doesn't have long shelf-life. A fiber optic cable is useful for decades. A GPU and/or model becomes obsolete in a few months or years.

My sense is that's what's really driving the AI Safety stuff. Without some regulatory moats, there isn't enough rent extraction to make the upfront lift worth it. The industry needs to slow down in order to make money (or it thinks it does). That, or it needs to create higher switching costs, such that the "slightly faster model" doesn't immediately eat their lunch.
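To put the shelf-life point in numbers (all of them hypothetical), compare the annualized cost of an asset that stays useful for decades with one that is obsolete in a few years:

```python
# Straight-line depreciation sketch; every figure here is made up for illustration.
def annualized_cost(capex: float, useful_life_years: float) -> float:
    """Spread the upfront spend evenly over the years the asset stays competitive."""
    return capex / useful_life_years

fiber = annualized_cost(capex=100.0, useful_life_years=25)  # cable useful for decades
gpus = annualized_cost(capex=100.0, useful_life_years=4)    # GPUs/models obsolete fast

print(f"fiber: {fiber:.1f} per year of useful life")  # 4.0
print(f"GPUs:  {gpus:.1f} per year of useful life")   # 25.0
# Same upfront spend, roughly 6x the annual return needed before the
# GPU build-out pays for itself; hence the pressure for moats.
```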


Regarding Aschenbrenner: I urge you to look this over: https://garymarcus.substack.com/p/agi-by-2027

There are a number of assumptions in Aschenbrenner's growth curve that are... problematic, to say the very least.


“…spending on AI is probably getting ahead of itself in a way we last saw during the fiber-optic boom of the late 1990s—a boom that led to some of the biggest crashes of the first dot-com bubble.”

I know a lot of smart people are making this analogy, along with the Nvidia = Cisco analogy, including the extremely sharp Ben Thompson (although he's not going so far as to predict a bubble, but rather pointing out the possibility).

I don’t think that the two situations are comparable. At least, they’re not *fully* comparable. The folks purchasing all of these GPUs and building out data centers are companies with epic cash flow and such unimaginable amounts of money lying around that they really don’t know what to do with it. (And thanks to the competition authorities, deploying capital by buying companies is sort of a pain these days.)

These companies are not at all comparable to the WorldComs of the 1990s who really needed to generate a return on their investment into networking infrastructure to remain solvent. If Google builds out too much GPU infrastructure, they’re just going to shrug their shoulders.

And, practically, the kind of … I don't want to be rude, but … stupid part of the Mims comment is the bit about it being "wildly expensive to build and run AI", paired with his comment about a "bubble". If Google "overbuilds" AI training and inferencing infrastructure, and if Google doesn't need to generate a return on that investment because they're printing more money than they can spend, then what will Google do? Start dropping the cost of using its AI training and inferencing infrastructure, which will solve the problem of AI being expensive, which will in turn increase demand for and use of that infrastructure.
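Just to illustrate that mechanism with a toy demand curve (the elasticity and prices are invented, not estimates):

```python
# Toy constant-elasticity demand curve: cheaper inference -> more usage.
# All parameters are hypothetical, purely to illustrate the mechanism.
def usage(price: float, base_price: float = 1.0,
          base_volume: float = 100.0, elasticity: float = -1.5) -> float:
    return base_volume * (price / base_price) ** elasticity

for price in (1.0, 0.5, 0.25, 0.1):
    print(f"price {price:.2f} -> usage {usage(price):.0f}")
# price 1.00 -> usage 100
# price 0.50 -> usage 283
# price 0.25 -> usage 800
# price 0.10 -> usage 3162
```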

Another thing: what's really going on is that, more or less, the data-center spend is shifting from CPUs to GPUs, from Intel and AMD to Nvidia. I think if you look, for example, at Google's capex, it may be a bit larger, but not dramatically so. You'll see, rather, their spend shifting from buying CPUs from Intel/AMD to building their own TPUs. (I'm not a financial expert. Not even financial-curious. So this is a guess and I could be wrong. But I don't think I'm very wrong.) There are exceptions to this, with startups focused on AI data centers and sovereign data-center build-outs. But at least some significant portion of the spend is substitutional rather than incremental.

So, I don’t think this is a bubble akin to the dot com boom, for the reasons stated above. Definitely the build out will lead to a lowering of costs to build AI systems, but… again, duh – that’s the point of the build out. Certainly the use cases need to be figured out. But I think there are tons of very high-quality use cases that exist today, and that people just aren’t seeing because it’s very technical (e.g., code generation, code annotation, updating and refactoring of old software).


The software libraries for working with LLMs are very bad. The hype, together with the VC money, broke the normal quality assurance in Open Source. The prominent example here is LangChain. That means we'll have to wait a bit longer before the already-available capabilities of LLMs are used in apps.
