training call centers? LLMs near asymptote? Jacob Buckman says FOOM is far; Frederick R. Prete agrees; Tim B. Lee and Razib Khan; Sam Altman and Bari Weiss; Freddie deBoer on hype; Lee Bressler
Dance craze and gang war seem likeliest.
And that was a sentence I never expected to write.
Re AGI: I tend to want the answer to a slightly different question: what purely mental task can someone with an IQ of 80 do that AI won’t do in the next decade?
Already, it’s quite difficult to find something for that list *right now*. Of course there are certainly mental things people with IQs of 150 can do that AI can’t, and may not do for a while. And there are physical things – and things that require specific biological components, like smelling or feeling the effect of drugs – that people with an IQ of 80 can do that AI can’t.
But GPT-4-class machines are already “smarter” (in whatever way you want to define that term) than the lower 1/3 of the population *in every possible way*. They’re also “smarter” than people with IQs of 150 in *some* ways. But by focusing entirely on the things geniuses can do that AI can’t, we’re missing the most important development of the past six months: fully human-level intelligence.
Playing against itself for learning purposes turned out to be very effective for AlphaGo Zero, which resoundingly beat both the human-beating AlphaGo and AlphaGo Master.
https://www.deepmind.com/blog/alphago-zero-starting-from-scratch
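For anyone curious what “playing against itself” amounts to in code, here is a minimal sketch of the self-play idea, nothing like DeepMind’s actual system: tic-tac-toe instead of Go, a lookup table instead of a deep network and tree search, but the same basic loop of the current policy generating its own training games by playing both sides and then updating from the outcomes.

```python
# A minimal, illustrative self-play loop: tabular values on tic-tac-toe,
# standing in for AlphaGo Zero's neural network + tree search on Go.
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # (board_after_move, mover) -> estimated value for the mover
EPSILON, ALPHA = 0.1, 0.5    # exploration rate, learning rate

def choose_move(board, player):
    """Pick a move: mostly greedy w.r.t. current value estimates, sometimes random."""
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: values[(board[:m] + player + board[m + 1:], player)])

def play_one_game():
    """The current policy plays both sides; return the visited states and the winner."""
    board, player, history = " " * 9, "X", []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move + 1:]
        history.append((board, player))
        w = winner(board)
        if w is not None or " " not in board:
            return history, w
        player = "O" if player == "X" else "X"

def train(games=20000):
    """Generate games by self-play and nudge each visited state's value toward the outcome."""
    for _ in range(games):
        history, w = play_one_game()
        for state, mover in history:
            target = 0.0 if w is None else (1.0 if w == mover else -1.0)
            values[(state, mover)] += ALPHA * (target - values[(state, mover)])

if __name__ == "__main__":
    train()
    print("value estimates learned for", len(values), "positions")
```

The real system replaces the table with a deep network and the greedy move choice with Monte Carlo tree search, but the data-generation story is the same: the model’s opponent is always its own current self.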
“if you can’t adequately model a phenomenon mathematically, you can’t duplicate it with AI.” I wonder if they can adequately model the current LLMs mathematically.
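To be fair, the generation step itself has a perfectly clean mathematical description; it’s the learned function inside it that nobody can characterize. In the usual notation, an autoregressive LLM defines

p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}), \qquad p_\theta(x_t \mid x_{<t}) = \mathrm{softmax}\!\left(f_\theta(x_{<t})\right)_{x_t}

where f_\theta is the trained network. The hard part is saying anything mathematically precise about what f_\theta computes, which is presumably what the quoted claim is really about.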
I'm a data scientist and I wrote a blog post last week as a way to explain my understanding of AI to an acquaintance: https://ipsherman.substack.com/p/ai-for-seminarians
In addition to the admitted self-promotion, I post this here because I kind of want to test out one of the ideas I came up with while writing the post.
tl;dr: In my post I try to separate two groups that look really similar from the outside. I (probably inelegantly) call the groups "nerds debating" and "academics researching." My hypothesis is that a lot of the confusion comes from the fact that nerds love debating the nature of reality for fun, and that there have been some recent scientific breakthroughs in a field that happens to be *named* after the science-fiction ideas that the nerds love debating. What's confusing is that to outsiders (and increasingly to the nerds-debating insiders too), these groups look like the same people, and since this combined group appears to be the authority in this space (because it includes the academics-researching subgroup), maybe we should all be as freaked out as they appear to be?
Among the links above, the best example of my hypothesis is Max Tegmark. I'm pretty sure I had never heard of him until this morning, but he's such a perfect example of what I describe in my piece that I almost can't believe it. If you want to see why, I encourage you to read my whole piece first and then check out Max Tegmark's MIT website, specifically the first paragraph in the section he calls "Crazy."
Finally, I kind of want to try out my new framework to categorize these links, based on my very first glance at them. No idea if it's going to work or not, or if anyone is going to agree with me, but here's how I categorize the links above:
Nerds Debating: Max Tegmark (big time!), Jacob Buckman, Zvi (though not quoted directly in the list above)
Academics Researching: Sam Altman in the Lewis White piece (though White is an Outsider), Frederick R. Prete
Outsiders: Brynjolfsson et al., Sam Altman and Bari Weiss in the Weiss piece, Freddie deBoer, Lee Bressler
(I left out Timothy B. Lee because I don't have time to listen to a podcast right now, and Arnold didn't give much information on what he said)
P.S. I deleted a previous comment with this exact text because the formatting got all weird when I posted it, and then when I tried to fix the formatting, the comment got weirdly truncated. Here's hoping that this comment works!
re: tasks AI will never do, long-time AI robotics pioneer Rodney Brooks has a list: https://rodneybrooks.com/predictions-scorecard-2023-january-01/. e.g. "A robot that has any real idea about its own existence, or the existence of humans in the way that a six year old understands humans."
A young relative of mine, a recent college graduate, is half Filipino, and sometime in the recent past this relative was wearing a hoodie (maybe it was a tee shirt) with the word 'FilipinX' in large letters. So the answer is yes, apparently there is such a thing. This relative is also half Jewish from my side of the family. I thought of cracking a joke about whether there is such a thing as a JewX (JewessX?), but I censored myself.