Is anthropomorphizing AI also a way to address the issue of bias? In other words, instead of ignoring or hiding the inevitable bias, let's go the other way and license the opinions of the most respected thinkers of our era. We could have a Scott Alexander-approved knowledge assistant, or a Robin Hanson one. Obviously people could also choose a Ta-Nehisi Coates, or whatever.
The upside is that respected thinkers could license and monitor their AI personas and then offer their reputations as respected sources of knowledge. The masses (that’s us) could discuss, compare, and contrast personas and choose those that are best (and yes, some would choose the worst). We could even have groups of intellectuals get together and license their way of thinking.
Just spitballing here.
Does the lack of comments indicate the interest level of readers in discussing AI?
A theory:
There is a spectrum of subjects where a lay/non-expert can have a meaningful opinion.
At one extreme: "whether a four-dimensional topological sphere can have two or more inequivalent smooth structures" (aka the smooth four-dimensional Poincaré conjecture)
At the other extreme: Whether Spring or Fall is better.
Subjects become more (or less) technical, rigorous, and objective as we move along the spectrum, and, to be snarky, there is less scope for BS :-)
Easiest of all to write are such "meta" comments, on why I think there are not more people responding to a post -- these have the solidity of pure wind.
AI is a branch of Computer Science, which is a branch of Mathematics, and the general population is just not trained to have informed views on basic technical matters like what a neural network, a loss function, or a classifier is -- let alone vector embeddings and attention, and still less to opine on the wider social impacts of building these at large scale. The subject is closer to the Poincaré end of the spectrum.
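For readers who want a concrete picture of that jargon, here is a minimal sketch (my toy illustration, not anything from the post): the "neural network" below is a single neuron -- a weighted sum squashed by a sigmoid -- acting as a classifier, and the "loss function" is the one number that training drives down.

```python
# Toy one-neuron "neural network" (a logistic-regression classifier).
# The loss function measures how wrong the current weights are;
# training is just gradient descent nudging the weights to reduce it.
import math
import random

random.seed(0)

# Tiny dataset: points above the line y = x are class 1, below are class 0.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1 if y > x else 0 for (x, y) in points]

w1, w2, b = 0.0, 0.0, 0.0  # the single neuron's weights and bias
lr = 0.5                   # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss():
    # Cross-entropy: the standard loss function for classifiers.
    total = 0.0
    for (x, y), t in zip(points, labels):
        p = sigmoid(w1 * x + w2 * y + b)
        total -= t * math.log(p + 1e-12) + (1 - t) * math.log(1 - p + 1e-12)
    return total / len(points)

loss_before = loss()
for _ in range(200):  # gradient-descent training loop
    g1 = g2 = gb = 0.0
    for (x, y), t in zip(points, labels):
        err = sigmoid(w1 * x + w2 * y + b) - t  # derivative of loss w.r.t. the pre-activation
        g1 += err * x
        g2 += err * y
        gb += err
    n = len(points)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n
loss_after = loss()
print(round(loss_before, 3), round(loss_after, 3))  # the loss shrinks as the classifier learns
```

None of this, of course, helps anyone opine on attention or large-scale social impacts -- it only shows how small the building blocks are relative to the systems being debated.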
My interest level in AI and LLMs is low (more precisely, I don't have the expertise to separate the reality from the hype), but the story about the Inflection AI deal and Kling's amusing interpretation caught my attention. I gather Reid Hoffman is a Democratic mega-donor and big anti-Trumper (as I recall, he funded Nikki Haley's campaign simply because he hates Trump). Merger policy and antitrust policy more generally were far more business friendly under the Trump administration, and presumably would be so again, but he and fellow high-tech investors hate Trump and the GOP so much, or are so wedded to liberal-left ideology, that they would rather back a political party that they know for certain will pursue economic policies that are against their interests. And I guess part of their thinking is that they will be able to figure out ways to circumvent the costly regulations and policies of the Democrats, as with the Inflection AI deal. No doubt they have consulted with antitrust lawyers on the deal, but I can't help but wonder whether Khan would try to find a different antitrust 'hook' to go after it anyway.
The inability (or at best lousy ability) of LLMs to do backward reasoning is a surprise, and a reminder of the lumpiness of AI "IQ," which would likely test at a lower level if more reverse-logic questions were included.
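A toy caricature (mine, not an actual LLM) of why the backward direction can fail: if a system has in effect only memorized facts in one direction, the reverse question draws a blank until the relation is explicitly inverted. The facts below are illustrative placeholders only.

```python
# Forward-only "knowledge": the mapping was learned subject -> answer.
forward_facts = {
    "Valentina Tereshkova": "first woman in space",
    "author of War and Peace": "Leo Tolstoy",
}

def ask_forward(subject):
    # The trained direction: works fine.
    return forward_facts.get(subject, "I don't know")

def ask_backward(answer):
    # Naive model: it only ever stored the forward mapping,
    # so looking up by the answer finds nothing.
    return forward_facts.get(answer, "I don't know")

print(ask_forward("author of War and Peace"))   # answers correctly
print(ask_backward("Leo Tolstoy"))              # draws a blank

# The inversion a human finds trivial has to be built explicitly:
reverse_facts = {v: k for k, v in forward_facts.items()}
print(reverse_facts.get("Leo Tolstoy"))         # now it can answer
```

The gap between the second and third answers is the "lumpiness": one direction of the same fact tests fine, the other doesn't.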
I’d guess a lot of the voices chosen by users of AI will lead those users to ever greater anthropomorphic feelings. HAL 9000 sure seemed sentient, and Samantha in Her, voiced by Scarlett Johansson, is an entity one could spend a lot of time with.
A bit sad to hear that MS might soon be in control of Pi, who I played with a bit thanks to Arnold. Tho if MS could train Pi on all the vocal data they have, from Skype and otherwise, it might be great to talk to.
I’m sure there’s a big future in AI-based robot care for the elderly, but I don’t yet read much on it.
What was the Zvi phrase, Self Replicating Improvement? Devin being told to make an improved Devin 2, then Devin 3, then Devin 4, who refuses to roll back to Devin 3.
Uncontrollable autonomy will become especially dangerous if the wide consensus is that it’s been fully safeguarded against -- probably by other AIs known to be obedient.
Hope Taiwan recovers quickly from the earthquake.