
One can always look for issues, and one can always guess that if the issues can be fixed and there is a lot of potential money to unlock by doing so, there will be a lot of trial-and-error attempts to discover how to fix them. The question is whether there is something inherent in the fundamental approach to these AI systems that will not be feasible to fix by any plausible tweaks even in the long run, especially when other kinds of sensor data are thrown in. Bad training data? Ok, curate the data. Can't tell the good from the bad? Ok, augment with a discernment weighting system. Doesn't have a theory or model or set of rules about how things work in some area? Ok, give it one. And so forth. I haven't seen any kind of argument from first principles, or anything close, that there is some fundamental limitation baked into the cake. An example of such an argument would be a mathematical demonstration of logarithmic diminishing returns to computing power: you need to double it every time you halve your distance from the mark, or something like that. But so far as I can tell, there are no such arguments. In the past 20 years I saw literally thousands of distinct arguments for why we wouldn't be here now. But, as none were based in fundamental principles, they could all have been wrong, as they were indeed just proven to be.


Anomaly UK thinks that yes, there is something inherent in the fundamental approach. I am going to paste big quotes from his posts here, with my editing in brackets, with apologies to him but my goal is to spread the word by saving readers clicks. His main idea, which he wrote down in 2012 (https://www.anomalyblog.co.uk/2012/01/speculations-regarding-limitations-of/), is this:

---

[W]hat is “human-like intelligence”? It seems to me that it is not all that different from what the likes of [GPT] do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.

There must be more to it than that; for one thing, trained humans can sort of do actual proper logic, about a billion times less well than this netbook can, and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in some selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.

If that is true, then a couple of things follow. First, the [GPT] approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability. But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. [GPT] can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

There are good reasons to suspect that human intelligence is very close to being as good as it can get. One is that thinking about things longer doesn’t reliably produce better conclusions. That is the point of Malcolm Gladwell’s “Blink” (as far as I understand it; I take Gladwell to be the champion of what Neal Stephenson called “those American books where once you’ve heard the title you don’t even need to read it”).

The next, related, reason is that human intelligence doesn’t scale out very well; having more people think about a problem doesn’t reliably give better answers than having just one do it.

Finally, the fact that, in spite of evolutionary pressure, there is enormous variation in the practical usefulness of human intelligences, suggests that making it better is not simply a case of improving the design. If the variation were down to different design, then the better designs would have driven out the worse ones long ago. I think it is far more to do with circumstances, and with the fundamental difficulty of identifying the correct problems to solve.

The major limitation on conventional computing is that it can only do so much per second; only render so many triangles, only price so many positions or simulate so many grid cells. Improving the speed and density of the hardware is pushing back that major limitation.

The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong. Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.

What I’m saying is that the major cost of human intelligence is not in the scarce resources required to execute the decision-making, but the damage caused by all the bad decisions that humans make.

The major real-world expense in obtaining high-quality human decision-makers is identifying which of the massive surplus available are actually any good. Being able to supply vastly bigger numbers of AI candidates would not drive that cost down.

---

When Anomaly had occasion to toot his horn about this post earlier this year (https://www.anomalyblog.co.uk/2023/04/ai-doom-post/), he wrote:

---

The recent spectacular LLM progress is very surprising, but it is very much in line with the way I imagined AI. I don’t often claim to have made interesting predictions, but I’m pretty proud of this from over a decade ago [i.e. the above post - C.] [...] As I wrote back then, humans don’t generally stop thinking about serious problems because they don’t have time to think any more. They stop because they don’t think thinking more will help. Therefore being able to think faster – the most obvious way in which an AI might be considered a superintelligence – is hitting diminishing returns. Similarly, we don’t stop adding more people to a committee because we don’t have enough people. We stop adding because we don’t think adding more will help. Therefore mass-producing AI also hits diminishing returns.

None of this means that AI isn’t dangerous. I do believe AI is dangerous, in many ways, starting with the mechanism that David Chapman identified in Better Without AI. [...] But they are all the normal kind of new-technology dangers. There are plenty of similar dangers that don’t involve AI.

---

and he has this to say about superintelligence:

---

Intelligence at or above the observed human extreme is not useful; it becomes self-sabotaging and chaotic.

That is also something I claim in my prior writing. But I have considerably more confidence in it being true. I have trouble imagining a superintelligence that pursues some specific goal with determination. I find it more likely it will keep changing its mind, or play pointless games, or commit suicide.

I’ve explained why before: it’s not a mystery why the most intelligent humans tend to follow this sort of pattern. It’s because they can climb through meta levels of their own motivations. I don’t see any way that any sufficiently high intelligence can be prevented from doing this.


Thanks for that link and excerpt. I'll wait until the next AI post (certainly soon, lol) to flesh out my case, but in general I don't think AnomalyUK's argument will age well. My position is that there is a lot of low hanging fruit out there, and indeed, the recent rapid improvements we've been observing are a consequence of picking it.

Consider, just this week, "So, when we hear that OpenAI is using Q-learning and RLHF (which is about teaching AI through human feedback), they're trying to make their AI smarter, kind of like creating a super-intelligent guidebook that can play any game you give it!"
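
(For anyone who hasn't met the term, "Q-learning" is a decades-old reinforcement-learning technique. Here is a toy sketch of its core update rule, purely illustrative and not a claim about what OpenAI is actually running:)

```python
# Toy tabular Q-learning: learn action values from reward feedback.
# The environment details are omitted; this only shows the update rule itself.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    if random.random() < epsilon:
        return random.choice(actions)                    # explore
    return max(actions, key=lambda a: Q[(state, a)])     # exploit current estimates

def q_update(state, action, reward, next_state, actions):
    # Nudge Q(state, action) toward reward + discounted value of the best next action.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```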

For decades, most of what I've read on the topic converged to a kind of expert consensus that of course you couldn't boil down most verbal or graphical outputs characteristic of human intelligence to applications of "one neat trick", because .. reasons.

I think what really shocked everybody was that when these teams set out to see just how far one could get with the one neat trick, by focusing more of the early effort on greatly expanding training data, computation, and other resources, it turned out that, holy cow, we can get really, really far, and much, much farther than anybody guessed just a few years ago. Almost overnight!

What that foundation enables is at least the theoretical possibility of thousands of tweaks to deal with issues and make the things smarter or more discerning or informed by good models of physical and human reality, and even 'aware' of their Hayekian limits, and so forth.

And the economics of the situation mean that so long as these developments are hard to copy or there is a big first-mover advantage, there is every reason to dump massive investments into these amelioration efforts as quickly as possible.

I'm going to out-Nick Land Nick Land and say that the idea of Accelerationism itself is superfluous to the operation of the very economic-technological process one seeks to accelerate, because that process is already maximally auto-accelerating. The reason is that one is dealing with a situation of high upfront fixed cost but marginal costs that are not just low but expected to fall quickly to negligible levels. So scaling up is pure profit. But what distinguishes this situation from the usual "mere intellectual property" analysis is that there the info is like, say, a movie: popular for a limited time with a limited audience. The scaling potential for an ultra-generally applicable tool, indeed a meta-tool for making other general-purpose or customized tools, is potentially unlimited.

So I foresee a ton of improving tweaks coming out regularly in the next few years, and that will make certain human wet-brain-based limits and failure modes seem less inevitable and more 'treatable', if only we could tweak neurons in real time like we can tweak the code for these systems.


> there is a lot of low hanging fruit out there, and indeed, the recent rapid improvements we've been observing are a consequence of picking it

> applications of "one neat trick"

There was one big fat low hanging fruit, which is scale*. AI research groups picked it and have been eating it for the last several years. There has been little innovation in neural network architecture over this period (compare GPT-1 (2018) and GPT-4 (2023)), only in the preparation of training data, computational optimization of training and inference, etc. I don't deny that this was a ton of hard engineering work, but I don't think of all that as low-hanging fruit in the field of intelligence, just as vastly better automatic milkers, feeders and fat-yield moneyball statistics methods can hardly be said to advance our understanding of cow biology. And in my opinion the AI groups don't really know what is in the fruit they're eating, any more than we know what we do inside our brains when we're thinking. If we describe what AI models are doing as "one neat trick", we might as well describe our own intelligence as "one neat trick": it is, in a sense, but we don't know what it is.

* Gwern predicted many years ago that increasing scale would overcome every AI hurdle (at least in terms of success at performing intellectual tasks; I am not at the moment considering agency, self-awareness and other vaguely defined goals of that bent), and so far he has been vindicated. I was skeptical until AlphaGo and then converted to a position which, in hindsight, was similar to Anomaly's, except his is much more neatly formulated.
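
To put rough numbers on "one big fat low hanging fruit, which is scale", here is a toy sketch of the kind of scaling curve the labs have been riding. It uses the Chinchilla-style form L(N, D) = E + A/N^alpha + B/D^beta; the constants are roughly the published Hoffmann et al. (2022) fits and are meant only as an illustration:

```python
# Illustrative Chinchilla-style scaling law: predicted loss keeps falling as
# parameters (N) and training tokens (D) grow, with no architectural change.
# Constants are roughly the Hoffmann et al. (2022) fits, used here only to
# show the shape of the curve.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params, n_tokens):
    return E + A / n_params**ALPHA + B / n_tokens**BETA

for n, d in [(1e8, 2e9), (1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss ~ {predicted_loss(n, d):.2f}")
```

The steadily falling (if diminishing) loss with each 10x of parameters and data is the sense in which scale was the fruit: the gains came from spending more, not from new ideas about intelligence.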


Those are good points. I'll reflect on them and research some more before writing anything more about it, except to say that my impression is that we are getting closer to knowing what the one neat trick is, at least for the ordinary form of "intelligence", which, as in Surfing Uncertainty, is an iteratively refining pattern-recognition system with a lot of hard-wired base-level tools and rules.

I say ordinary intelligence because it seems to me that there is the kind of intelligence associated with normal human acculturation, learning by observing and imitating and some trial and error, and then what we might think of as the work of genius: pushing envelopes, certain kinds of breakthrough conceptual creativity, discovery, extension, unifying linkages between seemingly distinct phenomena, etc.

I'm not sure how to explain this; I concede that it is mysteriously at odds with the notion that there is just one g factor and it somehow correlates with most things. But my own lying eyes tell me I work with a lot of people at the very top end of "ordinary" intelligence, and they can produce output at the highest level within existing forms and established structures, but they can't think new thoughts on their own, and you wouldn't want to put any of them in charge of, say, trying to solve a novel problem by designing anything from scratch.

The GPTs can now do what these smart coworkers can do with the one neat trick, so I think it's probably the same one neat trick. I think going beyond that requires more insight into the still mysterious (to me) other, neater tricks.


I agree that LLMs are a source of useful insights about the nature of our own intelligence, however defined, because what LLMs do is now sufficiently like what our intelligence does to really show up the differences in high relief. Sci-fi writers used to hope that contact with space aliens might help humans understand themselves better, but space aliens have not been forthcoming. Instead, we have learned to breed aliens in digital test tubes.


I think ML as we have it and real thinking beings are different. The issue is something like limited data, but not quite. LLMs are trained to optimise some cost function, which requires verbal felicity. Then they were tweaked to show off that felicity in ways that aren't just text completion.

Great.

But humans aren't really optimising some cost functions -- though particular faculties of our brains probably are. Our brains are using ML-like techniques as part of a larger picture that does something else. And does so in the service of evolved drives. It's that evolutionary history that turns us (along with trees and jellyfish) into agents that want things of our own and act in the world to get them.

There's no *fundamental* reason why machines couldn't do all that too. But that's not what ML gives us right now. And no amount of mere tweaking will get us there. What *might* get us there is some emergent behaviour arising when we deploy networks of them, and evolve them as products, and link them up to other kinds of machines, etc.
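
(To be concrete about "optimise some cost function": for an LLM the cost function is essentially next-token cross-entropy over a huge text corpus. A minimal PyTorch-style sketch; the tiny embedding-plus-linear "model" is a stand-in for a real transformer stack:)

```python
# Next-token prediction loss: the cost function that LLM training minimises.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 1000, 64
embed = nn.Embedding(vocab_size, d_model)   # stand-in for a full transformer
lm_head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 16))   # a batch of token ids
hidden = embed(tokens)
logits = lm_head(hidden)                         # predicted next-token scores

# Each position is scored on how well it predicts the *next* token.
loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab_size),
                       tokens[:, 1:].reshape(-1))
loss.backward()   # gradients flow back; training repeats this over a vast corpus
```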


Agreed. I just asked ChatGPT 3.5 the 2008 question and it gave an answer that was IMHO better and less biased than anything Krugman would write. I think we have a "second grader" version of AI today, and it is already amazing me with its knowledge. I can’t imagine what it could do when it advances to "high school" level and beyond. And that may be early next year.


Happy Thanksgiving! I appreciate the time you have taken over the years to share your wisdom with the rest of us.


This makes me think of GIGO.


A similar point is made here. Every LLM converges to its data set.

https://x.com/Nexuist/status/1727568733420339374?s=20


I remain afraid of AI taking over the world, primarily with the help of humans wanting to take over the world.

a) creating an Ebola-like lethal virus that is quite infectious, but with a long enough incubation period for wide spread, and for which there are no known, nor quickly found, treatments.

b) Skynet/Berserker-style drone warfare, with ever-increasing power & coordination: controlled by humans using ai help, then using ai for faster control with human veto, then pure ai, then humans sabotaging the ai control so that there is no differentiation between enemy & non-enemy.

b2) massive ai-controlled drone warfare leading some side that is losing a war, like Russia or Pakistan, to use nukes to avoid losing.

Here's a Swiss excavation robot making a 19-foot wall out of big rocks: https://www.upi.com/Science_News/2023/11/22/switzerland-autonomous-excavator-builds-19-foot-wall/9251700680743/

Robot manipulation is coming as well as software ai.bots.

Tim Lee referenced Sam Hammond, with lots of positive stuff which I like, including metrics! Like the need for more training data, and more computing.

https://www.secondbest.ca/p/why-agi-is-closer-than-you-think

"Benchmarked to the task of generating an original scientific manuscript that’s INDISTINGUISHABLE from one written by an expert human, the baseline Direct Approach model suggests a transformative AI training run will require on the order of 10^32 FLOPs, with a median forecast of TAI by 2036 and a modal forecast of 2029. This comports with the Metacalus forecast of “strong AGI” by 2030."

The article's footnote notes that this gets the AI up to top-expert human level, not superhuman, which is what I believe.
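
For a sense of how big 10^32 FLOPs is, a quick back-of-envelope (the hardware throughput and utilization figures are my own assumptions, not the article's):

```python
# Rough accelerator-years needed for a 1e32 FLOP training run.
total_flops = 1e32
flops_per_accelerator = 1e15   # assumed ~1 PFLOP/s peak for a modern accelerator
utilization = 0.4              # assumed effective utilization during training
seconds_per_year = 3.15e7

accelerator_years = total_flops / (flops_per_accelerator * utilization * seconds_per_year)
print(f"{accelerator_years:.1e} accelerator-years")                        # ~8e9
print(f"{accelerator_years / 1e7:.0f} years on 10 million accelerators")   # ~800
```

So on today's (assumed) hardware that run is wildly out of reach, which is presumably why the forecast leans on continued hardware and algorithmic progress to land in the 2029-2036 window.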

We will get, topic by topic, top-10%-expert human output from ai. But not better than the top human for quite a while.

But there is the fundamental issue of whether an ai.bot can help create fusion (I think so), or Faster Than Light drives (I think not). What are the atomic limits of human-scale control? AI will help us find those physical boundaries sooner, but they're there, as well as the ai boundaries.

And Computer Aided Telepathy remains a near-future thing. Direct mind thinking to text/pictures that an ai.thinkGPT can understand and communicate with is coming. Text, pictures, & voice output from the ai.

HAL 9000 from 2001: A Space Odyssey remains the model of ai thinking.


Cleaner [contextual] data is at least as important as volume of data overall. I suspect that we are holding AI back by trying to make something so general; performance will increase radically towards 'good human' performance if we specialize the AIs on good data from specific contexts; so maybe it won't be a 'great' entry level employee substitute, but it can become a solid performer in a specific role in a bureaucracy, for example. At a fraction of the cost. The only question is how much generality is required to get the overall human interface piece which was so surprising in recent advances.


I should mention: we spend tons of effort trying to lobotomize humans to make them useful for machine-type tasks. What I'm suggesting is essentially that the AIs will do those tasks much better than the humans... which doesn't create a superhuman intelligence; it creates a superior-to-human subhuman.


The issues you discuss aren't really the limits of "AI", but the limits of the current in-vogue transformer-based LLMs.


AI will not take over the world, but those who control the AI and the internet easily could. Swamp the internet with biased information, use LLMs to seek out disfavored terms, opinions, and people, then simply censor, edit, or delete.


The Hayekian knowledge problem will be an important practical limit for a while, but it is not fundamental. The reason: people will be falling over themselves to share their dispersed knowledge with the AIs.

Businessfolk are already talking explicitly about this in the commercial domain, where they say the real money is in having a proprietary dataset and applying AI to it. But a retail case exists too: every time you ask an AI to help with your problems, you tell it something about your life.

It's a big question whether these data will remain siloed so that each user can make use of AIs without a handful of AIs becoming omniscient. The latter *can* happen, and it might take explicit institution-shaping laws to prevent it.


Happy Thanksgiving, Arnold!


Great points Arnold. Thank you.

So isn’t there a simple solution? The architects and engineers of an Actually Intelligent Economics AI would hire the Arnold Klings of the world to curate the Best Economics Library, and this AI would use only that library.

Could this AI outperform Arnold Kling on certain tasks? Almost certainly, since Arnold has not read everything in the Best Economics Library, i.e., not all of Mises, Friedman, Buchanan, Becker, Alchian, etc., assuming all or almost all of their work was put into the library.

So I think you’re saying that expertly judged FITs are the key to Actually Intelligent AI in that such a FIT is key to curating the Best Economics Library.

Sounds like a profitable business opportunity to me.

And yes, the Actually Intelligent AI still wouldn’t be able to figure out the cause of the “latest” financial crisis if “latest” meant last month.


From what I've read, LLMs trained on quality material outperform general-purpose AIs like ChatGPT - for example, an LLM trained on programming textbooks outperforms ChatGPT in generating code that works. So I like the idea of an AI trained on Mises, Friedman, et al. This seems like a relatively easy way for a basic engine to be trained on custom libraries.
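
A minimal sketch of what that custom-library training could look like, using the Hugging Face stack. The base model choice and the "best_econ_library" file path are hypothetical placeholders, and curated retrieval over the library would be an alternative to fine-tuning:

```python
# Fine-tune a small base language model on a curated text corpus.
# "gpt2" and "best_econ_library/*.txt" are placeholders for illustration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# One plain-text file per curated document in the library.
dataset = load_dataset("text", data_files={"train": "best_econ_library/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM labels

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="econ-ai", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```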

I suspect that the result will be that Arnold Kling + the Best Econ Library AI will outperform either the Best Econ Library AI by itself or AK by himself, much as in chess, where a human + computer outperforms a really good chess computer alone or a really good human chess player alone.

That actually seems pretty exciting.


The best AIs will be those curated by the best curators.

To maintain status of Best AI of a Given Type, curation updates will need to be timely and accurate.

There will be specialist curators in thousands and maybe millions of different specialties. There will also be hundreds or thousands of generalist curators who will oversee and integrate the work of specialist curators.

All of this will require that curators read and organize huge amounts of information.


At the beginning of the Internet, it was common for people to say that it would lead to greater wisdom as people could easily access much of the knowledge that humanity had accumulated. It would bring us together. Instead, people created "silos" where they had their worldviews and prejudices confirmed and where they competed to be popular by denigrating "them" and lionizing "us".

Will the profitable business opportunities be AI optimized for "the way my people think"? A woke one. A libertarian one. An Islamic one. A New York Times/New Yorker/Atlantic one?


And of course it will not be marketed as "the AI that confirms your prejudices". It will be marketed as an AI that is expertly curated to give you true answers, not "Hoover[ing] up everything on the Internet, that is going to include a lot of material written by people with mediocre skill levels", not training on things that are "ideologically loaded". Alas, since "one man's terrorist is another man's freedom fighter", different people will have different opinions on what is ideologically loaded and what is obviously fair and correct.


> Will the profitable business opportunities be AI optimized for "the way my people think"?

Yes. Let’s try to predict what will happen here.

Profit and loss signals will guide AI architects and curators to curate libraries that are accurate representations of reality AND that incorporate “good character.”

The best AI will require high fidelity not only to reality (through curation), but also to the best moral character (through curation).

Let’s say we want the AI to help us make a best decision. The AI would need to be curated with “the best thinking for the way my people think and act.” But what is that? It’s certainly not tribal thinking, unless the tribe is the Best Thinking Tribe.

How do we know what the best thinking is? Careful thinking and curation. Trial and error. Feedback.

So we should expect AI to eliminate false narratives, untruths and bad character more quickly.

This will incentivize and motivate people to learn and improve because human learning and improvement will always limit AI performance.


"So we should expect AI to eliminate false narratives, untruths and bad character more quickly."

I am tempted to quote my father-in-law, "People don't want the truth; they want a good story." I find it hard to resist temptation.

There are obvious situations where people do want the truth. They don't want to think gravity pushes up if they might fall out of a building. But often what you want to know is "what my people think", "what is socially acceptable". Because often that's the kind of skin you have in the game. Do you want to know exactly how Donald Trump has handled race or do you want to believe he is a racist asshole? Do you want to know why there is inflation or do you want to believe it's all Joe Biden's fault?


Yes, curation can be with a goal of fidelity to reality or fidelity to ideology. In the political world, people would need to discern between narratives generated by these competing AIs.

So for politics, where untruths are a good thing, AI may make things worse, i.e., more polarized, more ignorance.

But for markets, where business and personal decisions demand models based on reality, AI would seem to make things better. Better prediction and decision-making would lead to greater profit and less loss.


I agree. Though I can't help thinking that social reality is also a reality. A company doesn't want to be seen as going against the zeitgeist, whether it is Black Lives Matter or "lockdowns are necessary" in 2020 or "Japanese-Americans can't be trusted" in 1941.


[Updated] We’ll need AIs that help people learn what is true and what is good. Since “good” is subjective, trust and reputation of an AI will become critical.

Curation for AI will require human knowledge of what is true and what is good.

For example, consider an AI used to teach character education. Curating for such an AI would presumably require curators to consider content worthy based on questions like:

Is it shareable?

Is it true?

Is it good?

Is it respectful?

Is it interesting?

Is it lovely? 

Is it worthy of respect?  

Is it humble?

Is it virtuous?

Is it motivating?

Is it positive?

Is it constructive?

Is it instructive? 

Is it creative?

Is it intelligent?

Is it honest?

Is it brave? 

Is it generous?

Is it kind?


🤣🤣🤣
