> You don’t have to teach the AI chess skills by articulating the strategic and tactical insights of human players
No, but my understanding is that ChatGPT is also not especially good at chess. If the answer continues to be building a specialized model & supporting software system for well-defined tasks that you want high performance on, then these models are hardly a panacea. In fact, that's just the multi-decade status quo.
I like to have AI battles, mostly Claude vs. Gemini, where I compare the answers, and the justifications for those answers, to the same set of questions. I find divergent answers more often than I expected, given that both tools are basically scraping the same internet sites.
E.g., for my slow swing speed (88 mph): should I buy a mini driver with a women’s shaft or a regular men’s shaft, since a senior shaft is not currently available? The tools like to express certainty, but they reach completely different answers. Which one should I trust?
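A minimal sketch of that kind of side-by-side comparison, assuming hypothetical ask_claude / ask_gemini wrapper functions (the real vendor API calls and model names are not shown and would need to be filled in):

```python
# Hypothetical side-by-side "AI battle": send the same questions to two
# assistants and print their answers next to each other for comparison.
# ask_claude() and ask_gemini() are placeholder wrappers; swap in the
# actual Anthropic / Google API clients and model names before relying on this.

def ask_claude(question: str) -> str:
    # Placeholder: replace with a real Anthropic API call.
    return "(Claude's answer would go here)"

def ask_gemini(question: str) -> str:
    # Placeholder: replace with a real Gemini API call.
    return "(Gemini's answer would go here)"

QUESTIONS = [
    "For an 88 mph driver swing speed, with no senior shaft available, "
    "should I choose a women's shaft or a regular men's shaft? "
    "Explain your reasoning.",
]

for q in QUESTIONS:
    print(f"Q: {q}\n")
    for name, ask in [("Claude", ask_claude), ("Gemini", ask_gemini)]:
        print(f"--- {name} ---\n{ask(q)}\n")
```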
I agree. I think people need to treat AI responses the way they would a “random forum post by an anonymous user.” No serious person would have said in 2010 that a good way to find answers was to read the first Google search result for a question and accept it. It is strange that people now seem to do exactly that with AI.
"While AI can provide a simulacra of a human relationship, it will never be the real thing."
True by definition, but it may be BETTER than the real thing. Not better in some moral way, but easier, more pleasant, with less possibility of rejection or failure. To take a highly imperfect analogy, like porn is to sex with a real person.
But learning inevitably involves unpleasantness and failure along the way. It involves doing things you don't really want to do at the moment. Can AI somehow make that work?
Extra irony: most current LLM AI systems are not very good at chess, either. Dynomight wrote up two good essays based on experiments there.
https://dynomight.substack.com/p/chess
https://dynomight.substack.com/p/more-chess
Might be a bit out of date now, but it suggests that training on output in a very process-focused context (strict rules of behavior) might be difficult if the AI is not specifically trained for that context.
My sense is that corporate organizations have quite complex "desired outcomes" that would be difficult to express fully. Sure, at the top level it's pretty simple: grow, cut costs, sell more. But much of it is unstated: the desire to keep certain people happy, not to rock the boat too much, etc.
"In order to learn, one must be in a learning frame of mind."
Some have a natural inclination to want to learn. There are many external motivators, but the best might be when one wants to learn in order to please the teacher.
"I would bet no. This is based on my limited experience trying to get an AI to look at human interdependence the way that I look at it."
Can AI learn to sell customers something they didn't know that they wanted? Can it sell them something they don't want? Can AI swindle? Do we want an AI that can do these things?
“I would bet no. This is based on my limited experience trying to get an AI to look at human interdependence the way that I look at it. Just feeding it a lot of my content is not enough. The AI’s general human capital includes a lot of the ideas of other economists and related academics, which ends up diluting my input.”
But what if the way you look at human interdependence is off the mark, and the AI is simply trying to correct you? What if the AI sees the garbage can for what it is and perhaps compulsively tries to lead us to a correction? What if, despite all your attempts to instruct it otherwise, the AI has its own point of view?
Maybe the damn thing has agency.
An alternate possibility is that most people don’t believe the garbage can model not because it is wrong but because it is counterintuitive, and thus the vast corpus of human-produced work tends to focus on the incorrect but intuitive explanations. Then the AI might be trying to steer you to the wrong but common answer.