Tim Lee on how LLMs work; Anders L on AI as a matchmaking app; Martin Casado and Sarah Wang on the economics of potential applications; Robin Hanson on AIs as descendants
Sounds like Robin is invoking Natural Selection as a God that declares the highest good rather than an objective process of adaptation.
That's not a fair interpretation of what he's saying.
Abstractly, 'adaptation' is about winning The Future Game: it's about being better than your competitors at influencing the makeup and characteristics of whatever comes after.
If you care about winning The Future Game, but you aren't putting everything you've got into making the future look like you want it to, then you are making the kind of 'mistake' Hanson is talking about. That's because you are going to lose out to the people who can and want to do more about the future, and who also endow whatever comes next with similarly maximalist capabilities and motivations regarding their own influence over their future.
As soon as competition heats up again - which it must in a world with digital minds - Natural Selection is gonna rapidly weed out all the slackers, and whatever alternative ideas they might have had about 'the highest good' are going to end up going with them into oblivion.
This makes sense, but it's not clear that competition really will heat up so much again with digital minds. I think it will, and I want it to - but others think an AI-based, non-competitive market socialism will allow comfy lives without so much competition; say, a UBI of $50k/yr, or some US middle-class level.
For most folk, their biggest impact on the Future Game comes from the number of children they have. The Big Family religions will prove to have more influence - and the repeal of the Roe non-amendment was partly based on how few kids the pro-abortion folk have relative to the pro-life folk.
The number of digital minds will expand to the margin where the last one's existence hangs by a thread: the resources it needs to "work" cost almost as much as the price it can get in the market for the highest value it can create with those resources.
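The marginal logic here can be sketched as a toy model (my own illustration with made-up numbers, not anything from the comment): minds enter in order of the value they can create, and entry stops at the margin where created value barely covers resource cost.

```python
# Toy model of entry at the margin (illustrative only): digital minds are
# added in descending order of the value they can create; a mind exists
# only if its value still exceeds the cost of the resources it runs on.

def minds_at_equilibrium(values, resource_cost):
    """Count minds whose created value exceeds their running cost.
    `values` is assumed sorted in descending order."""
    return sum(1 for v in values if v > resource_cost)

# Hypothetical numbers: value per mind falls as the best niches fill up,
# while the running cost per mind stays flat.
values = [100 - i for i in range(100)]   # 100, 99, ..., 1
print(minds_at_equilibrium(values, resource_cost=5))  # → 95
```

The last mind admitted creates value of 6 against a cost of 5 - its existence "hangs by a thread," exactly as the comment describes.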
It takes a long time and a lot of effort and investment to make biological humans ready to generate value, and the process is quite messy, with low rates of success at getting the cream of the crop. But with digital minds, copying is comparatively free and instantaneous, and only the best of the best will exist at all. So long as there is still unmet demand and producer-surplus money to be made by adding more digital minds, that is going to happen, and we do not have social technologies or capabilities of global coordination anywhere near what would be required to stop it.
Not only will it be quick and cheap to make unlimited copies, but each one will need far fewer resources to do the same things humans do. Just as farmers needed a lot less land than hunters to feed one of their kind, leading to farmers replacing them, digital minds will need less surface area of solar power per mind than we need of cropland to grow calories for one of us. Thus, replacement. Fast!
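A rough back-of-envelope for the land-area claim (every figure below is my own ballpark assumption, not a number from the thread - in particular, the power draw of a "digital mind" is pure guesswork):

```python
# Back-of-envelope: cropland area feeding one human vs. solar-panel area
# powering one hypothetical digital mind. All constants are rough,
# assumed values for illustration.

HUMAN_FOOD_POWER_W = 100.0    # ~2000 kcal/day of food energy, as watts
CROP_EDIBLE_W_PER_M2 = 0.1    # edible calories delivered per m2 of cropland
SOLAR_PANEL_W_PER_M2 = 40.0   # ~20% panels at ~200 W/m2 average insolation
MIND_POWER_W = 1000.0         # assumed draw of one digital mind

human_area = HUMAN_FOOD_POWER_W / CROP_EDIBLE_W_PER_M2    # m2 of cropland
mind_area = MIND_POWER_W / SOLAR_PANEL_W_PER_M2           # m2 of panels

print(f"human: {human_area:.0f} m2, digital mind: {mind_area:.0f} m2")
```

Under these assumptions one human needs ~1000 m2 of cropland while one (much hungrier) digital mind needs ~25 m2 of panels - a 40x land-efficiency edge, which is the farmers-vs-hunters dynamic the comment points to.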
The point is, as soon as this becomes possible, you are looking at a very sudden trend discontinuity: going from our industrial-age vacation from reality, with oodles of surplus to go around to biological humans, to falling off a cliff into ultra-intense and merciless competition for every available resource. Forget about biological humans being able to afford lots more kids; after the cliff, they won't be able to afford lots more months to live.
On Timothy B Lee's comments: many classical-computing theorists tend to assume the brain either follows deterministic behaviour, like a classical computer, or it doesn't. However, I think we will soon see, with the invention of fault-tolerant quantum computers, that the brain has both classical and quantum properties (similar to most things in nature).
Specifically, those with memory disorders often display vector-like retention - looking at a cat and saying "dog", or looking at a fork and saying "grab your spoon". They have the class of the noun organised, but accuracy is slightly off due to something that has happened in the synaptic connections (imagine an unfinished LLM that is still rearranging the connections between parameters). So I would say that, over time and with greater understanding of the brain, we will probably find we have both capabilities: we organise words using letters, but we group words in a vector space similar to the structure that LLMs use.
I agree with everything Robin Hanson says about AI, about safetyism, and about the failure to harness evolution and technology for the benefit of man, but his last step, where AIs are our children, is where he loses me. I don't think the "rationalists" who are AI doomers are being anywhere near as rational as Von Neumann was about the nuclear strike. See the MR link last week for how Von Neumann may have been rationally correct, even in hindsight, but entirely contingent on an imperialism beyond what the US, even at the time, would support. With a key quote from a German scientist on where America may actually be exceptional: "That shows at any rate that the Americans are capable of real cooperation on a tremendous scale."
https://marginalrevolution.com/marginalrevolution/2023/08/transcript-of-taped-conversations-among-german-nuclear-physicists-1945.html
I do, however, think Robin is being rational with the AI-children thing - perhaps peak rational. He is being rational beyond the scope of what is optimal, as the Von Neumann nuclear example demonstrates. As Robin himself shows, there are optimum amounts of irrationality within every human system, including the individual mind, required to be an evolutionary winner. Maybe, in the phraseology of Bryan Caplan, this should be the optimum amount of rational irrationality.
I would say there need to be accepted optimal amounts of non-rationality where rationality is indeterminate. "Irrational" violates rationality, whereas non-rationality is not a violation, because ... the existence or non-existence of God is not provable. In both cases there is a non-provable, thus non-rational, assumption from which rational or irrational actions follow.
Successful religions were successful because their "morals" were optimal, or nearly so, even tho they depended on God, as told by men to other men, rather than humans claiming rationality.
I am an obtuse individual and am not really sure I follow you on the difference between non-rationality and irrationality. That being said, I am never quite sure of my own level of coherence, or that what I wrote isn't just a sort of semantic game.
Maybe for Robin's EMs I could consider them descendants, but if they're LLMs, why in the world would enough humans to matter consider them descendants instead of entirely alien...
It doesn't matter what humans think of the autonomous things they set into motion, whether child or alien.
There is some number X representing the number of generations back you'd have to go to find an ancestor who - were it possible to give him or her a glimpse - would consider you, your ideas, your behaviors, your way of life, etc. just as alien (or unfathomable, or abhorrent, or repulsive, or whatever), despite there being no question of your biological descent. We are going to be something's X-th generation too.
Right, but I expect to still be alive when AI reaches a point of comparability to humanity. Maybe not long after, if you buy doomer arguments, but alive until then. But I push back even on your comment - I see a connection in behavior and thinking even with things like dolphins when I don't expect such with most current AI formulations.
You see more in the dolphin than the dolphin can see in you. To the AIs, you're the dolphin.