8 Comments

I don’t understand the push for “general intelligence.” I don’t think we’ll get there, because I don’t think it will turn out to be very useful. My prediction is that we will take the basic semantic structure embodied in general-purpose LLMs and direct it toward highly specialized tasks. I have a bunch of electric motors in objects around my house – the coffee grinder, the blender, the toaster, the ones that fold my side-view mirrors away – each specialized for a particular task. But I don’t have one “general motor” that can do everything. The nature of the tools we build is to specialize in a dimension that complements and extends human ability.

You didn't hallucinate that they are talking about the glasses, but they don't have anything concrete there yet, and given the history I decided to wait until we have something more concrete.

The AR/VR experiences are coming, but they're taking a while.

I know the problem of central planning is often expressed as a lack of knowledge on the part of planners, but to me it's less about knowing what inputs a factory needs to produce a certain number of widgets than about not knowing with certainty which choices consumers will make when allocating their limited resources. This is why central planners always seek to limit consumer choice, either explicitly through quotas and rationing, or implicitly by 'nudging' people toward the choices the central planners favor.

Arnold

Seems to me that Gödel’s insights, incompleteness and undecidability, are not given enough significance.

Gödel was a convinced Platonist.

The ‘forms’ exist. We have to work to decipher them. They are infinite (or the infinite mind of God).

That’s where the word/concept ‘information’ came from.

Still valid.

Or Cantor’s many levels of infinity.

Or, even quantum physics. Not reducible to Newton’s physics or logic.

Humility is . . . very . . . hard.

Thanks

Clay

What I took away from that experience is that once a computer gets close to matching human performance on some task, it will quickly surpass the human. Computers have Moore’s Law and other exponential scaling properties going for them.

That's true for basically logical systems like most games with simple mathematical rules, but not for complex tasks in the Smith sense. Cars traveling over fixed terrain that can be reduced to simple mathematical rules could be speeding along at superhuman speeds, but in the environment of working cities, with random traffic congestion, emergency vehicles claiming the right of way, and so on, cars have only come close to human capabilities. They might replace most driving, but they don't seem poised to achieve superhuman capability.

I'll plug AnomalyUK's old post on AI again because it's very relevant [https://www.anomalyblog.co.uk/2012/01/speculations-regarding-limitations-of/]. You can substitute GPT-4o and Claude Opus for Google Search and Siri and you wouldn't know this was written in 2012:

---

[W]hat is “human-like intelligence”? It seems to me that it is not all that different from what the likes of Google search or Siri do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.

There must be more to it than that; for one thing, trained humans can sort of do actual proper logic, about a billion times less well than this netbook can, and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in some selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.

If that is true, then a couple of things follow. First, the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.

But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

There are good reasons to suspect that human intelligence is very close to being as good as it can get.

One is that thinking about things longer doesn’t reliably produce better conclusions. That is the point of Malcolm Gladwell’s “Blink” (as far as I understand it; I take Gladwell to be the champion of what Neal Stephenson called “those American books where once you’ve heard the title you don’t even need to read it”).

The next, related, reason is that human intelligence doesn’t scale out very well; having more people think about a problem doesn’t reliably give better answers than having just one do it.

Finally, the fact that, in spite of evolutionary pressure, there is enormous variation in the practical usefulness of human intelligences suggests that making it better is not simply a case of improving the design. If the variation were down to different designs, then the better designs would have driven out the worse ones long ago. I think it is far more to do with circumstances, and with the fundamental difficulty of identifying the correct problems to solve.

The major limitation on conventional computing is that it can only do so much per second; only render so many triangles, only price so many positions or simulate so many grid cells. Improving the speed and density of the hardware is pushing back that major limitation.

The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong. Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.

What I’m saying is that the major cost of human intelligence is not in the scarce resources required to execute the decision-making, but the damage caused by all the bad decisions that humans make.

The major real-world expense in obtaining high-quality human decision-makers is identifying which of the massive surplus available are actually any good. Being able to supply vastly bigger numbers of AI candidates would not drive that cost down.

---

He did stipulate that human-level performance by AI agents could bring large changes in the economy, but not because of their intelligence per se.

I'll add that rapid improvement in computer performance on tasks has always followed the discovery of ways to precisely specify the task and the level of performance on it. Board games like Reversi and Go come with a built-in specification, as it were, and accordingly, with enough computational power, it was easy to improve (note that with Go, deep learning networks still have to be supplemented by a search algorithm, usually a variation of Monte Carlo tree search; deep learning networks without search are not at professional level). Rapid progress on language generation followed the discovery that the next-token prediction approach produces interesting enough results.
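
To make the next-token prediction point concrete, here is a minimal, purely illustrative sketch of that loop in Python. Everything in it (the toy vocabulary, the stand-in toy_model, the temperature) is an assumption for illustration, not any real LLM: a model scores every token in the vocabulary given the context, a softmax turns the scores into probabilities, one token is sampled, appended to the context, and the process repeats.

```python
# Minimal sketch of the next-token prediction loop (illustration only).
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat", "."]  # toy vocabulary (assumption)

def toy_model(context: list[str]) -> np.ndarray:
    """Stand-in for a trained network: returns one logit per vocabulary token.
    This toy version simply favors tokens that haven't appeared yet."""
    counts = np.array([context.count(tok) for tok in VOCAB], dtype=float)
    return -counts

def generate(prompt: list[str], steps: int, temperature: float = 1.0) -> list[str]:
    rng = np.random.default_rng(0)
    out = list(prompt)
    for _ in range(steps):
        logits = toy_model(out) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                    # softmax over the vocabulary
        out.append(rng.choice(VOCAB, p=probs))  # sample the next token, feed it back in
    return out

print(" ".join(generate(["the", "cat"], steps=4)))
```

The real system differs mainly in scale: the logits come from a learned network over a vocabulary of tens of thousands of tokens rather than a hand-written scoring function, but the outer loop is the same.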

Smith's and Mangrebe's arguments, at least as characterized by Mingardi, seem more applicable to brain emulations than generalized AI per se. Why can't an AI system, following the same physical laws as all other physical systems, be complex, multi-layered, and evolutionary?
