"Wow. Claude came up with a great metaphor."
But did it? Or was that analogy already part of its database?
The way you came up with your retort is the way Claude came up with its metaphor. In one way your retort was already in your mind’s “database” and in a different way it wasn’t.
I don't think this is quite right. What I am saying is that Claude didn't come up with it at all: some human at some point in the past applied exactly the same metaphor to the question, and Claude simply copied it. I would be very impressed if Claude had created this thought as a result of reasoning; I just don't buy it, since it sounds like a metaphor any number of economists might have created in the past, one that exists somewhere on a blog or in a book that has been uploaded to the web.
What does "exactly the same" mean? Does it mean that it used exactly the same letters in the same sequence? That should be possible to rule out, and I don't think it is what you had in mind; but if we allow for deviations, then where is the limit? Deviations from the "literal" are actually the centre of what metaphor is.
I mean that the thought comparing a town and a company has already been had by some human; Claude scraped it in its training and regurgitated it, probably rewritten, in response to the query. This is no different from me plagiarizing in the same manner.
My contention is that all thinking is like that: part copied and part new. Metaphors.
Also, it is quite evident that LLMs are very good at making lists and at combining them, and the results are sometimes very surprising. Aren't those something new? Does the mechanistic way in which they were created make them not innovative?
“Our conclusion, therefore, must be that to us mind must remain forever a realm of its own which we can know only through directly experiencing it, but which we shall never be able fully to explain or to 'reduce’ to something else. Even though we may know that mental events of the kind which we experience can be produced by the same forces which operate in the rest of nature, we shall never be able to say which are the particular physical events which 'correspond' to a particular mental event.” (F.A. Hayek, The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology)
"I imagine r1 as having a large database of analogies to work with in processing the user’s query. What we call computer reasoning involves searching the space of analogies, trying out various combinations. Most of these combinations do not seem promising, so they are discarded. I think of this as pruning, the way a chess program would prune its set of possible move sequences to eliminate ones that lead to bad positions. Eventually, r1 arrives at a set of analogies that optimize according to its evaluation criteria, and it provides the result as output."
This is actually a pretty good description of (metaphor for?) human creativity.
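The search-and-prune process described in the quoted passage can be sketched in a few lines. This is my own toy illustration of the idea, not anything from r1's actual architecture: candidate analogies are combined, scored by a stand-in evaluation function, the unpromising combinations pruned away (as a chess engine prunes bad move sequences), and the best surviving set returned.

```python
# Toy sketch of "search the space of analogies, prune, keep the best".
# The analogies, scores, and evaluation function are all hypothetical
# stand-ins; a real model's evaluation would be learned, not a sum.
from itertools import combinations

def evaluate(combo):
    # Stand-in scoring: sum the relevance scores of the analogies used.
    return sum(score for _, score in combo)

def search_analogies(analogies, combo_size=2, beam=3):
    """Try combinations of analogies, prune the unpromising ones,
    and return the highest-scoring surviving combination."""
    candidates = list(combinations(analogies, combo_size))
    # Prune: discard all but the top `beam` combinations by score.
    pruned = sorted(candidates, key=evaluate, reverse=True)[:beam]
    return max(pruned, key=evaluate)

analogies = [("city/company", 0.9), ("organism/market", 0.7),
             ("river/traffic", 0.4), ("orchestra/team", 0.6)]
best = search_analogies(analogies)
```

The pruning step here is the interesting part: most combinations never get a full look, which is what makes the search tractable.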
>> "I think of creativity as coming up with a new synthesis of metaphors."
Agreed — I think that metaphors (along with mental images, and somatic stuff) are close to the core of what "understanding" means. I'll just add that building understanding also requires critiquing and correcting the many things each metaphor gets wrong.
For example, it's taken me years to realize that my problem with understanding the layers of the Earth comes from the term (and metaphor) "crust". "Crust" works well to describe how thin and brittle the crust is, but after that, it steers us wrong. Thinking of the planet as a loaf of bread makes one imagine that the inside layers are less substantial than the outside... when precisely the opposite is the case. The crust doesn't float around on a sea of lava: it floats on a giant super-heavy rock of slow-churning green crystal, which itself floats on an even heavier ball of iron. It's always a surprise to people to discover that lava is melted crust — why wouldn't it be melted mantle, or the already-melted core? But that makes sense when you realize that the Earth's crust is made of the wimpiest atoms that were shoved out of the way by the heavier ones back when the Earth was a glowing ball of magma.
It's our metaphor (now frozen into the official scientific terminology) that makes it hard to understand the planet.
Metaphors may be the smallest units that we can still comprehend and spell out, somewhat. But they are not quanta. 🧩
Their elements are just harder to pinpoint, but they are there. When you hear that rock ballad from your teen years, suppressed memories and emotions swell up without yet forming a metaphor. ⚛️
Jeff Hawkins, in "A Thousand Brains," digs into what he calls "frames of reference," the smallest neural patterns that can be replicated by our system and glued to one another and to memory, typically with emotions and other hormonal triggers.
They are processed en masse by cortical columns (call them "cores"), which may produce different new patterns and behaviors. The cores then "vote."
At the neural level they are just bioelectrical signals. Touch, smell, light, and memory all look the same: series of neural activities.
Jeff would kill me for this oversimplification, though.
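The "voting" among columns mentioned above can be caricatured in code. This is my gloss, not Hawkins's actual model: each column independently proposes an interpretation of the same input, and the consensus interpretation wins.

```python
# Caricature of cortical columns "voting" on an interpretation.
# The proposals are hypothetical; real columns operate on sensory
# patterns, not strings.
from collections import Counter

def column_vote(proposals):
    """Return the interpretation that the most columns agree on."""
    return Counter(proposals).most_common(1)[0][0]

# Five columns, four interpretations, one clear majority.
proposals = ["coffee cup", "coffee cup", "bowl", "coffee cup", "mug"]
winner = column_vote(proposals)
```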
"Patterns-based" is a better and more general word than "metaphors" to describe human cognitive processes around reasoning and language, and it allows more conceptual commensurability between those intuitive processes on the one hand and more rigorous ones like formal logic, set theory, statistical approaches, and so forth, on the other.
We notice and guess at patterns, develop criteria and weightings and refine them over time (see Andy Clark's "Surfing Uncertainty" for an overview of instinctive Bayesian pattern formulation updating), use patterns to generate and use categories and models, extend patterns creatively, and compare (or experiment to probe for novel) particular instances or cases to established patterns to determine similarities and differences, judging whether they are good or poor fits.
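The "instinctive Bayesian pattern updating" mentioned above can be sketched minimally. This is my own toy illustration, not anything from Clark's book: a prior degree of belief in a pattern is revised observation by observation via Bayes' rule, with confirmations strengthening it.

```python
# Toy Bayesian pattern updating: revise belief in a pattern as
# observations come in. The likelihood values are made-up stand-ins.

def bayes_update(prior, p_obs_if_pattern, p_obs_if_not):
    """Posterior probability that the pattern holds, given one observation
    that occurs with probability p_obs_if_pattern when the pattern is real
    and p_obs_if_not when it isn't."""
    numerator = p_obs_if_pattern * prior
    return numerator / (numerator + p_obs_if_not * (1 - prior))

belief = 0.5  # start agnostic about whether the pattern is real
for _ in range(3):  # three observations consistent with the pattern
    belief = bayes_update(belief, p_obs_if_pattern=0.8, p_obs_if_not=0.3)
```

Three consistent observations push the belief from 0.5 to roughly 0.95, which is the "refine over time" part of the comment above.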
A "metaphor" is just an instance of leveraging a partial pattern-match to persuade or accelerate grasping by "loading the conceptual library" already associated with the pattern, and with the implicit assertion that the similarities are more relevant and important than the differences such that they justify an analogy instead of a distinction. In other words, "metaphor" is in the set of "pattern-based approaches" because it fits the pattern.
Most of the vocabulary of any rich and well-developed language emerged as (usually phonetic) extensions from a relatively small number of 'roots' slightly modified to apply to closely related concepts. This is especially evident for modern forms of abjad Semitic languages with their trilateral roots and nonconcatenative morphology. This is often metaphorical - for example in the case of a new name for a structure being a modification of the names of an otherwise unrelated but visually similar object. But I would say not always, as some novel extensions are more "grammatical" than metaphorical, that is, "extensions" being more of the character of applying the language's 'rules' (i.e. pattern) for how to transform cases across categories.
The arguing of cases under law is a good example of using more-general level pattern-based reasoning (Does this follow the pattern / is it a member of the set or category) specifically to argue whether the application of metaphor-based reasoning by analogy - and thus whether the application of the rule applicable to the proposed analogy - would be valid in the particular instance. Is dynamite more like a wild tiger than a toxic substance, or is that choice between legal regimes too limited and dynamite is something new entirely and warrants the invention of a novel regime?
It doesn't seem quite right to say we reason by metaphors but also reason about whether to reason with metaphors by using meta-metaphors. Just better to say we reason with patterns (which is how even the crudest animal brains learn), and metaphors are a common way humans do it, especially to communicate with and persuade each other.
Andy Clark's 2019 "Surfing Uncertainty: Prediction, Action, and the Embodied Mind" is a fine book but technical. His 2024 popularization "The Experience Machine: How Our Minds Predict and Shape Reality" is very good and much better for an ordinary reader (of course, no one here is an ordinary reader :)).
An even shorter (but not short) introduction is Scott Alexander's very good "Book Review: Surfing Uncertainty".
https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/
I know it’s an Econ blog so unlimited city populations are supposed to bring prosperity, even if there are few historical examples of this causation.
But I must push back on this, from Claude:
“Just as a big city's bureaucracy can slow down simple tasks that were easy in a small town, a large company's processes can slow down decisions that were quick in startup phase.”
My own hometown MSA has grown by nearly 6 million people since I made one of them.
It is not the greater complexity of its bureaucratic "processes" that makes the problems of my hometown intractable (if I'm representing the word salad correctly); it's the scale.
It’s the number of people, put there by policy.
No, it's not scale. Sure, scale introduces some problems everywhere, but those are not the ones to which you are referring. If you take a global and historical survey of cities or polities of any particular scale, you will see far too enormous a variance in social and state capacities to conclude there's even a correlation, let alone strong causation. For every American MSA of several million that can't do X, there are 100 East Asian cities of similar scale that do it well all the time and can't even understand why doing X would be a problem.
I’m sorry I misplaced the above comment, which was in response to the other, but I won’t move it now.
I would have to know what X is that they’re doing that you applaud or approve of before I could understand what you’re saying.
All I know is that my hometown is not doing anything that will be of interest to posterity, or indeed to the world at the moment; all that it did that was so interesting (X?) was in the past, largely in the period before I was born, when there were but a million and a half people in the MSA and most of the county was rural.
Those things that made my city transiently notable were owing partly to LBJ and partly to the bonanza that came gushing out of the ground. But all cities benefit from contingencies like that, I suppose, so I still think it's illustrative.
Nothing more complex or interesting is happening there now than ever before. It’s just a big number of people.
I shared the video of Deepseek coming up with Tetris with a friend of mine who was a programmer, and she said it was just how she would have done it. So I was vicariously impressed since that's all I can be in this wholly unfamiliar realm.
Likewise, I think Unhinged Claude, such as Golden Gate Bridge Claude, is very funny and I would love to hear more from him.
But I am underwhelmed with Claude as a generator of triteness, e.g., the exam answer about startups being like growing cities that pleased you.
Hey Claude, Philadelphia was around for 250 years before traffic lights and zoning laws.
Philadelphia was around, but not a Philadelphia full of dangerous automobiles. The lights followed the cars in every developed big city in the world, as surely as the nights followed the days. But a farmer didn't have to erect traffic lights on his land when he got a tractor or truck because, unlike in the city, where it was a clear and present danger at all times, the danger of collision with other tractors was nil. So, another point for Claude's metaphor.
There is a real me, with a real will, who chooses what comments to write. Sometimes as they come to me, like now, more often after some thinking & time & reading the (usually excellent, always at least good) links from Arnold.
The real me includes my thinking, feeling, dreaming/fantasizing internally, as well as my external words and actions. The ai won’t have these internal processes, but will be able to simulate them thru words and actions.
Like actors playing a character (De Niro in Taxi Driver, or New York, New York). The actors are not the real characters, but are simulating them. All ai answers are, to some extent, like actors' simulations of characters, with the ai simulating … a sexless, self-less person who knows it all?
Metaphors, like maps, are simplifications of reality. More or less useful, depending on the goal.
I want an ai to be extremely accurate with facts, all the known knowns, as well as those knowns unknown to me. And to be accurate about the known unknowns, like the 2022 question: Will Russia invade Ukraine? If they do, will they win? No famous American expert got both these questions right, so it's a bit unrealistic to expect an ai, with AGI or merely simulating it, to get them right. But that's what I, and many, want.
Thanks, Arnold, for stimulating my thinking, again.
Great use of the Fox & Hedgehog analogy! Plus I agree about the bunch of easy-to-use ai-augmented apps that work well with humans OR with a person's aiAgent, which you talk to like an assistant: tell it what you want, and it handles it for you.
Everybody gets to be like the rich folk in movies who tell their gofer servant what they want, and the ai gets it done.