The Zvi chides economists; Ethan Mollick offers a user's guide; Maya Bodnick cheats at Harvard; My experience with Personal Intelligence (Pi), a chatbot companion
“Generative AI models will fundamentally change our relationship with computers, putting them beside us as coworkers, friends, family members, and even lovers.” I’m sure that’ll turn out great!!!
Really enjoyed the Maya Bodnick piece.
AnomalyUK, who correctly predicted the eventual rise of LLMs back in 2012 [https://www.anomalyblog.co.uk/2012/01/speculations-regarding-limitations-of/] (the following quote is from that post), has interesting arguments against super-intelligent AI doomerism.
---
[W]hat is “human-like intelligence”? It seems to me that it is not all that different from what the likes of Google search or Siri do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.
There must be more to it than that; for one thing, trained humans can sort of do actual proper logic[,] and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in some selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.
If that is true, then a couple of things follow. First, the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.
But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.
[...]
The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong. Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.
---
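To make the "association-plus-statistics" mechanism described in the quote above a bit more concrete, here is a minimal sketch (mine, not AnomalyUK's): absorb word co-occurrence associations from a tiny corpus without any model of what the associations mean, then rank them with a rough PMI-style statistic to pick the "most relevant" ones. The corpus, the sentence-level association window, and the scoring are all illustrative assumptions.

```python
# Toy "association-plus-statistics" retrieval (an illustrative sketch, not code from
# the quoted post): collect co-occurrence associations from a tiny corpus without
# regard to what they mean, then apply a simple statistic to pick the most relevant.
import math
from collections import Counter
from itertools import combinations

# Illustrative stand-in for the "vast amounts" of data a real system would absorb.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "the cat chased the dog",
]

word_counts = Counter()
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())   # one association window per sentence
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

total = sum(word_counts.values())

def associations(query, top_n=3):
    """Rank co-occurring words by a rough PMI-style relevance score."""
    scores = {}
    for pair, n in pair_counts.items():
        if query in pair:
            other = next(w for w in pair if w != query)
            scores[other] = math.log(n * total / (word_counts[query] * word_counts[other]))
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(associations("cat"))   # e.g. [('mat', 1.95), ('chased', 1.95), ('the', 1.54)] (rounded)
```

A three-sentence corpus and raw co-occurrence counts are obviously nothing like an LLM, but the shape is the same as the quote's claim: no systematicity about what the associations mean, just volume plus a relevance statistic.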
This April, at the peak of ChatGPT hype, he wrote [https://www.anomalyblog.co.uk/2023/04/ai-doom-post/]:
---
I think I need to disaggregate my foom-scepticism into two distinct but related propositions, both of which I consider likely to be true.

Strong Foom-Scepticism: the most intelligent humans are close to the maximum intelligence that can exist. This is the “could really be true” one.

But there is also Weak Foom-Scepticism: intelligence at or above the observed human extreme is not useful; it becomes self-sabotaging and chaotic. That is also something I claim in my prior writing, but I have considerably more confidence in it being true.

I have trouble imagining a super-intelligence that pursues some specific goal with determination. I find it more likely it will keep changing its mind, or play pointless games, or commit suicide. I’ve explained why before: it’s not a mystery why the most intelligent humans tend to follow this sort of pattern. It’s because they can climb through meta levels of their own motivations. I don’t see any way that any sufficiently high intelligence can be prevented from doing this.
Where the LLMs can probably make the most truly productive impact is in medicine, law, and teaching. All three of those have enormously powerful political organizations to protect their members’ jobs. I think the economic upside is likely to be limited by this fact.