[For] most important cognitive tasks—from translating a document to inventing a new technology to running an organization—…knowledge is at least as important as computing power. And because knowledge isn’t fungible, we’re not going to get a single super-AI that’s better than humans at everything. Instead, we’ll get a lot of different AIs with different strengths and weaknesses, none of which will be able to take over the world.
Later, he writes,
I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they “run out of data,” but because they reached the frontiers of human knowledge.
I think that even reaching the frontier of human knowledge is difficult for large language models. If you Hoover up everything on the Internet, that is going to include a lot of material written by people with mediocre skill levels. Regression toward the mean is going to work against these models acquiring superior intelligence.
If you ask ChatGPT to give an account of the Financial Crisis of 2008, what comes back is a lot of conventional wisdom written by pundits with no particular knowledge of economics. If you then ask for an account based on the work of Ph.D. economists, what comes back is only slightly better. In fact, it still contains a lot of claims that I think many economists who have studied the event would reject. Many of the claims are ideologically loaded but not well justified, such as blaming rising income inequality or the deregulation of banking.
Most disturbing, I asked ChatGPT to give Arnold Kling’s account of the Financial Crisis of 2008, and I could not recognize what came back. The chatbot’s response did not resemble what I wrote in Not What They Had in Mind. The chatbot also hallucinated, attributing to me explanations that are popular with other conservatives but which I dismiss.
I think that the chatbot approach can be really great at human interaction. It knows how to relate to people. But I have a lot of doubts about whether it can achieve expertise.
Imagine that you were trying to create an expert chess program. If you input a database that includes games played by chess masters and games played by patzers, it will develop a playing style that is an average of the strong players and the weak ones. It will only become an expert if you make it clear what the criteria are for playing chess well.
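To make the point about mixed-quality training data concrete, here is a minimal Python sketch of one way to make the criterion explicit: filter the game database by player rating before training, so that a model imitates strong play rather than an average of masters and patzers. The rating threshold, field names, and toy data are assumptions for illustration, not anyone's actual pipeline.

```python
# Hypothetical sketch: keep only games in which both players are above a
# rating threshold, so a model trained on the result imitates strong play
# rather than an average of strong and weak players. The field names,
# threshold, and toy data below are assumptions for illustration.

MIN_RATING = 2200  # assumed cutoff for "master-level" games

def filter_games(games):
    """Yield only games played by two sufficiently strong players."""
    for game in games:
        if game["white_rating"] >= MIN_RATING and game["black_rating"] >= MIN_RATING:
            yield game

# Toy in-memory database: one master-level game, one patzer game.
games = [
    {"white_rating": 2450, "black_rating": 2380, "moves": "1. e4 e5 2. Nf3 Nc6"},
    {"white_rating": 1300, "black_rating": 1250, "moves": "1. e4 e5 2. Qh5 Nf6"},
]

training_set = list(filter_games(games))
print(len(training_set))  # prints 1: only the master-level game survives
```

A softer version of the same idea would weight games in proportion to player strength rather than discarding the weaker ones outright; either way, the system only becomes an expert because someone supplied an external criterion for what good play looks like.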
Using the data gathered from the Internet, today’s AIs will become “good enough” at many things. Many powerful applications will come from this. But in many areas, economics included, expertise will elude them.
Lee makes a Hayekian point. It might be tempting to
assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It’s locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world.
I would add that a lot of the “knowledge” that is available to an AI is wrong.
Lee makes another Hayekian point, which is that a lot of knowledge evolves from trial and error. It is not sitting around waiting to be sifted and regurgitated by an AI.
The past year has seen a lot of people say “AI will never be able to do such-and-such,” followed weeks later by an AI doing exactly that. So I will be cautious here. I will say that for now, it appears that AIs have difficulty separating out superior ideas when not-so-good ideas are prevalent in their training data. And they do not demonstrate to me the ability to employ the discovery process that Hayek describes as market competition: conceiving new ideas, testing them, and evaluating them.
Substacks referenced above:
One can always look for issues, and one can always guess that if the issues can be fixed, and there is a lot of potential money to unlock by doing so, there will be a lot of trial-and-error attempts to discover how to fix them. The question is whether there is something inherent in the fundamental approach to these AI systems that will not be feasible to fix by any plausible tweaks even in the long run, especially when other kinds of sensor data are thrown in. Bad training data? OK, curate the data. Can't tell the good from the bad? OK, augment with a discernment weighting system. Doesn't have a theory or model or set of rules about how things work in some area? OK, give it one. And so forth.

I haven't seen any argument from first principles, or anything close, that there is some fundamental limitation baked into the cake. An example of such an argument could be some mathematical demonstration of logarithmic diminishing returns to computing power: you need to double the compute every time you halve your distance from the mark, or something like that. But so far as I can tell, there are no such arguments. In the past 20 years I saw literally thousands of distinct arguments for why we wouldn't be here now. But since none were based in fundamental principles, they could all have been wrong, as they have indeed just been proven to be.
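To put rough numbers on the kind of diminishing-returns argument imagined above, here is a toy Python sketch. The functional forms and constants are assumptions chosen purely for illustration, not measured scaling laws for any real system: under a simple power law, halving the remaining error doubles the required compute, while under a logarithmic law it squares it.

```python
# Toy illustration of two hypothetical "diminishing returns" curves.
# The functional forms and constants are assumptions for illustration,
# not measured scaling laws for any real AI system.

def compute_needed_power_law(error, k=1.0, alpha=1.0):
    """Compute required if error = k / C**alpha; with alpha=1, halving error doubles C."""
    return (k / error) ** (1.0 / alpha)

def compute_needed_log_law(error, k=1.0):
    """Compute required if error = k / log2(C); halving error squares C."""
    return 2.0 ** (k / error)

for error in [0.1, 0.05, 0.025]:
    print(f"error={error:6.3f}  power-law C={compute_needed_power_law(error):>16,.0f}"
          f"  log-law C={compute_needed_log_law(error):>16,.0f}")
```

In the first case the compute bill grows linearly as the target shrinks; in the second it grows explosively, which is the shape of argument the comment says it has not seen made rigorously.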
Happy Thanksgiving. I appreciate the time you have taken over the years to share your wisdom with the rest of us.