Aswath Damodaran on competing against your AI clone; Ethan Mollick on students and teachers using LLMs; Neo, a household robot; The Zvi rates LLMs on Silver's technological Richter Scale
"He argues, and I am inclined to agree, that the LLMs are already a very important technological innovation. The thing is, as with the Internet, it takes a while for the right apps to be built and for people to learn the best uses."

And the worst!

A very narrow test:
Casual observation suggests that LLMs have yet to affect the quantity, quality, or variety of your blog posts, the Zvi's, and so on.
You (Arnold) had a particularly incisive run of blog posts in 2020 around the pandemic and the election. I wonder whether, and how, any current LLM would have changed the substance of your writings then.
About AI clones, there is value in a machine that can imitate a person. There is also the risk that the AI clone will prove incontrovertibly that AI cannot think. And why? Because the real me changes opinions. The real me takes new information and adjusts my view of the world and the things and people in it, including my own preferences.
Would the AI clone change its "mind"? I do not believe it can with any rationality. An AI clone can only guess at what the real future me might think and argue.
There is value in an LLM as a search engine and as a machine that replicates actions and words from a historical catalog. There is also a tremendous inefficiency in an LLM: it inherently cannot replicate the evolution of human thought without breaking down and proving itself untrustworthy.
Just consider the misunderstandings that occur with people close to you. A disagreement arises when one person says, "I thought you liked this," and the other responds, "I do, but...." The explanation that follows makes sense to the person giving it, but it was so unpredictable or nuanced that the other person could not have predicted it. And of course the inverse happens: a person assumes the other would not like something when in fact that person would, if it were done in a certain way.
"Because the real me changes opinions. The real me takes new information and adjusts my view of the world and the things and people in it, including my own preferences."
Say an AI-cloned person dies or simply stops publishing. Ten years later, much in the world will have changed. You don't think AI will try to adjust the clone to account for how the world has changed? To take a simple example: would it have an opinion on whether he liked a President he had never commented on? Isn't that a form of updating based on new information?
There is no guarantee the AI clone would mirror the person's future thoughts and opinions. Simply consider Paul Krugman. On what basis would an AI clone of Krugman offer opinions? His writings? His politics? The AI could guess, but it could never be Krugman. Too unpredictable, or maybe too predictable except when not!
I know my political opinions have changed over the years. I am much less partisan today and far more cynical. An AI clone of me based on my words up through 2012 would not reflect what I believe today. It would not be a clone but an incomplete fake.
So if one's opinions change, how can it be argued that the AI clone is wrong? How do we know its answer isn't one of the responses the person might actually give? You seem to be arguing that the clone has to get something exactly right. Does it?
Does this diminish the value of publishing so publicly?