The latest buzzword in AI, making front page news thanks to DeepSeek, is “reasoning,” or chain of thought. My understanding of how this works is vague and speculative. For that matter, my understanding of how humans reason is also vague and speculative.
Computer Thought: It’s All Parsing
Pop Quiz: 3 + 4 x 5 = ?
If you remember “order of operations,” the answer is 23, because first you multiply and then you add. What you learn in “order of operations” is called “parsing” in computer science. About 30 years ago I attended a short class taught by a computer science professor, whose catch-phrase was “It’s all parsing.” We write computer programs in order to automate repetitive operations. Whether you realize it or not, chances are your program does parsing.
As a human, I think of logic and data as different things. But they are not fundamentally different in the Von Neumann architecture of the computer. When you feed a program’s logic into the computer’s interpreter or compiler, parsing turns it into data. Or at least that is my impression of how computers work.1
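To make the parsing point concrete, here is a minimal Python sketch (using the standard ast module) showing the pop-quiz expression being parsed. The parser applies the order of operations, and the expression's "logic" becomes a data structure the program can inspect, which is the sense in which logic and data blur together.

```python
import ast

# Parse the quiz expression. The parser applies the order of operations,
# so the tree groups it as 3 + (4 * 5), not (3 + 4) * 5.
tree = ast.parse("3 + 4 * 5", mode="eval")

# The program text is now just data: a tree of nodes we can inspect or rewrite.
print(ast.dump(tree.body))
# roughly: BinOp(left=Constant(3), op=Add(), right=BinOp(Constant(4), Mult(), Constant(5)))

# Evaluating that data structure gives the "order of operations" answer.
print(eval(compile(tree, "<quiz>", mode="eval")))  # 23
```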
Human Thought: It’s All Metaphors
Steven Pinker’s The Stuff of Thought takes the view that the ingredients in human thought are metaphors. Even the sentence I just wrote uses “ingredients” as a metaphor. If you stop and consider anything you read or say, you will find that it involves metaphors.
Metaphors are neither true nor false. They are either more or less appropriate and insightful. The metaphor of an atom as having a nucleus that is like a planet with electrons orbiting around it like moons is useful in some ways, misleading in others.
You may recall that I think of economic models as metaphors. Just because you have an equation does not mean you have truth. The equation is a metaphor for economic behavior. The question is how well that metaphor applies. The diagrams of supply and demand are very useful metaphors. The diagrams that show firms setting price equal to marginal cost are often misleading.
When we think, we are trying to apply metaphors. When I learn something new, I think of myself as taking metaphors that I know and using them to interpret a new metaphor or set of metaphors.
I think of creativity as coming up with a new synthesis of metaphors. I think of the Beatles’ initial musical creativity as coming up with a synthesis of Everly Brothers style music and Chuck Berry style music.
I suspect that there is a trade-off between creativity and reliability. If you stick with a limited set of metaphors in any situation, your thinking will be reliable. But it won’t be very creative.
AI Reasoning: Searching the Space of Metaphors?
The latest buzzword in artificial intelligence is a “reasoning” model. In the case of DeepSeek r1, it actually shows you its “reasoning.” See Shannon Sands’ post on X for an example. (Gemini Flash Thinking Experimental also shows you its “reasoning.” It’s a great feature.)
I imagine r1 as having a large database of analogies to work with in processing the user’s query. What we call computer reasoning involves searching the space of analogies, trying out various combinations. Most of these combinations do not seem promising, so they are discarded. I think of this as pruning, the way a chess program would prune its set of possible move sequences to eliminate ones that lead to bad positions. Eventually, r1 arrives at a set of analogies that optimize according to its evaluation criteria, and it provides the result as output.
If my metaphor is appropriate, then the designers of r1 will have trade-offs to make in terms of how the software prunes the set of possible analogies as it goes about responding to a query. If r1 does a lot of pruning quickly, it will eliminate many possible analogies to focus on the few that seem most promising. This will lower the cost of operation, but it may frustrate users by either missing the point of the query (requiring the user to reply “I meant ____, not ____”) or failing to come up with a creative answer. If it does relatively little pruning early, it may arrive at a better first guess as to what the user wants, and it may be more creative. But it will take more time and effort to come up with a response.
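To make the pruning trade-off concrete, here is a toy Python sketch of a beam search. This is not a claim about how r1 is actually implemented; the names (expand, score, beam_width) are hypothetical stand-ins. The single knob beam_width captures the trade-off: a narrow beam prunes aggressively and runs cheaply, while a wide beam keeps more candidate combinations alive at a higher cost.

```python
import heapq

def beam_search(start, expand, score, beam_width, depth):
    """Toy search over combinations of analogies.

    expand(node) -> candidate extensions of a partial combination
    score(node)  -> how promising a combination looks
    beam_width   -> how aggressively to prune at each step
    """
    frontier = [start]
    for _ in range(depth):
        candidates = [child for node in frontier for child in expand(node)]
        if not candidates:
            break
        # Aggressive pruning (small beam_width) is cheap but may discard the
        # combination that would have answered the query best; a wide beam
        # explores more but takes more time and compute.
        frontier = heapq.nlargest(beam_width, candidates, key=score)
    return max(frontier, key=score)
```

Nothing in this sketch is specific to language models; it just illustrates why "how hard to prune, and how early" is an engineering dial rather than a single right answer.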
My point here is that I think of AI “reasoning” as searching a space of analogies. Here is what Claude had to say about that:
Say we give a reasoning model this problem: "Why do successful tech startups often have trouble maintaining their innovative culture as they grow larger?"
The model might reason like this:
"Let me think about this like a growing city. Just as a small town has informal, direct communication between residents, a startup has natural person-to-person collaboration.
When a town grows into a city, it needs formal systems like traffic lights and zoning laws. Similarly, a growing company needs formal processes and hierarchy.
Just as a big city's bureaucracy can slow down simple tasks that were easy in a small town, a large company's processes can slow down decisions that were quick in startup phase.
Like how neighborhoods in a big city can become isolated from each other, departments in a large company can become silos..."
This type of reasoning combines logical steps (like in my previous example) with analogical thinking to generate new insights. The model isn't just matching patterns or doing brute force calculations - it's using analogies to transfer understanding from familiar domains to help solve problems in new domains.2
Wow. Claude came up with a great metaphor. I wish I had come up with it myself.
1. When I had my web business, my favorite computer consultant explained to me that the Von Neumann architecture is one in which data and programs are in some sense interchangeable. Whenever I started thinking in terms of program logic, he convinced me to throw everything into a database instead. For example, to vary a web page based on different partnership arrangements, I would have used program logic (“if the partner id is so-and-so, then use the header that includes their logo”). But instead, we put the partner id and header into a database, and had the code for the web page call the database to select the header. That kept the program logic from becoming unwieldy as we obtained new partners.
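As a sketch of what that refactoring looks like (the partner names and header file names here are made up for illustration), compare the branching logic with a database lookup:

```python
import sqlite3

# Version 1: program logic. Every new partner means another branch.
def header_for_v1(partner_id):
    if partner_id == "acme":
        return "acme_header.html"
    elif partner_id == "globex":
        return "globex_header.html"
    return "default_header.html"

# Version 2: the same decision stored as data. New partners are new rows,
# and the page code never changes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE partners (partner_id TEXT PRIMARY KEY, header TEXT)")
db.executemany("INSERT INTO partners VALUES (?, ?)",
               [("acme", "acme_header.html"), ("globex", "globex_header.html")])

def header_for_v2(partner_id):
    row = db.execute("SELECT header FROM partners WHERE partner_id = ?",
                     (partner_id,)).fetchone()
    return row[0] if row else "default_header.html"

print(header_for_v2("globex"))  # globex_header.html
```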
2. https://claude.ai/chat/99e58e50-ca95-423b-8983-f2b3e6836729
"Wow. Claude came up with a great metaphor."
But did it? Or was that analogy already part of its database?
“Our conclusion, therefore, must be that to us mind must remain forever a realm of its own which we can know only through directly experiencing it, but which we shall never be able fully to explain or to ‘reduce’ to something else. Even though we may know that mental events of the kind which we experience can be produced by the same forces which operate in the rest of nature, we shall never be able to say which are the particular physical events which ‘correspond’ to a particular mental event.” (F.A. Hayek, The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology)