6 Comments

“Listening to AD, one can envision a scenario in which the top executives and controlling shareholders (if shareholders have any control) in the big tech companies become so much wealthier and more powerful than everyone else that they are almost a different species.”

I’m reading Neal Stephenson’s The Diamond Age (published in 1995). It remarkably anticipates not only personal AI tutors but also this wealth-concentration effect (amid other, unrelated nanotechnology-driven weirdness).

It depends on where the economies of scale come from. If they come from the practicalities of financing huge investments, concentration could be temporary. If they come from the self-reinforcing advantage of private training data, where more data yields bigger market share, which in turn yields even more private data, then there will likely only ever be a few giant players unless or until the data gets stolen or 'leaked'.

In the former case, one possibility is concentration-then-diffusion. Let's say that today you need to put up ten billion dollars of capital to build a marketable and profitable AI tool. Only a few companies and nations are going to be able to do that, and it's hard to enter and disrupt. But if the price eventually comes down to ten million, that's in tech start-up territory, and it's not impossible for more players to enter the fray with better, higher-quality, or niche services and take market share away from the few big titans.

Yes, agreed. FWIW, I don't buy the massive-concentration-of-wealth argument, though I don't necessarily not buy it either. In addition to your good points, Zuck's insistence on open-sourcing the Llama models seems likely to push toward diffusion rather than concentration. Ditto for on-device models, which are surely coming.

There's open and then there's open. Model source code is one thing. Training data and the derived, refined weights are another. Zuckerberg isn't going to give away the store, so I think it's safe to infer he is confident that what he's giving away isn't what's really valuable.

"become so much wealthier and more powerful than everyone else that they are almost a different species"

Contingent on not reversing the Reagan, GWB, and Ryan-Trump laws that cut taxes for the rich at the cost of increased deficits.

Behold! The pattern matchers are impressive, but much like SQL and VisiCalc only worked on structured data, LLMs only work well for problems where pattern matching is critical. In many ways, the pattern-matching ability is remarkable, and provides a good facsimile of AGI, but it is not sufficient.

In the future, someone will connect an LLM to a canonical object model, a causal logic engine, a fact database, and a higher-level pattern synthesizer, and that system will more closely resemble general intelligence.
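A minimal, purely illustrative sketch of what such a hybrid might look like, assuming the LLM is used only as a fallback pattern matcher behind a fact store and a tiny rule engine standing in for the causal-logic layer. All names here (FactStore, RuleEngine, llm_guess) are invented for the example, not any existing system:

```python
# Hypothetical hybrid: facts and rules first, LLM pattern matching as fallback.
from dataclasses import dataclass, field


@dataclass
class FactStore:
    """Canonical facts keyed as (subject, relation) -> value."""
    facts: dict = field(default_factory=dict)

    def lookup(self, subject: str, relation: str):
        return self.facts.get((subject, relation))


@dataclass
class RuleEngine:
    """Stand-in for a causal/logic layer: each rule maps a known fact key
    (subject, relation) to a new fact (subject, relation, value)."""
    rules: list = field(default_factory=list)

    def infer(self, store: FactStore) -> None:
        for premise, conclusion in self.rules:
            if premise in store.facts:
                store.facts[conclusion[:2]] = conclusion[2]


def llm_guess(question: str) -> str:
    """Placeholder for the pattern-matching LLM call."""
    return "unverified guess about: " + question


def answer(question: str, subject: str, relation: str,
           store: FactStore, engine: RuleEngine) -> str:
    engine.infer(store)                       # run the logic layer first
    grounded = store.lookup(subject, relation)
    if grounded is not None:                  # prefer canonical facts
        return grounded
    return llm_guess(question)                # fall back to pattern matching


store = FactStore({("water", "boils_at_C"): "100"})
engine = RuleEngine()
print(answer("At what temperature does water boil?", "water", "boils_at_C",
             store, engine))  # prints "100", grounded in the fact store
```

The point of the sketch is only the division of labor: the LLM supplies pattern-matched candidates, while the object model, facts, and rules decide what is actually asserted.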
