Some AI Links
Michael Strong on how humans learn best (they need motivation); Zvi Mowshowitz on AIs as motivators; Ethan Mollick on how AIs learn best
I heartily recommend reading each of the long posts linked below.
We evolved over millions of years to value authentic human relationships. Whether another human being respects us, loves us, ignores us, and so forth is everything. While AI can provide a simulacrum of a human relationship, it will never be the real thing.
In order to learn, one must be in a learning frame of mind. Strong argues that we need to acquire the right social norms in order to get into that frame of mind. If he is correct, and if AI cannot motivate, then AI in education will be just another dud, and the hopes and dreams of AI boosters will be disappointed.
Zvi Mowshowitz looks at the evidence that AI can motivate. He reviews a study of political persuasion by AIs. The study finds that post-training for persuasion is the most effective margin on which to increase persuasiveness.
He also looks at trends in personalization. For AIs to be personal tutors and assistants, they must remember things about you. This reminds me of Hollis Robbins’ observation that we seem to want AIs to remember everything (so that they give us personal service) and to remember nothing (because of privacy concerns).
He also discusses studies of how young people use AI companions (it’s mostly not what you think). The way that they are treating AIs as people makes me inclined to dispute Strong’s claim that AIs will never be able to motivate students.
One thing you learn studying (or working in) organizations is that they are all actually a bit of a mess. In fact, one classic organizational theory is actually called the Garbage Can Model. This views organizations as chaotic "garbage cans" where problems, solutions, and decision-makers are dumped in together, and decisions often happen when these elements collide randomly, rather than through a fully rational process. Of course, it is easy to take this view too far - organizations do have structures, decision-makers, and processes that actually matter. It is just that these structures often evolved and were negotiated among people, rather than being carefully designed and well-recorded.
I like to say that anyone who feels intimidated by big corporations has never worked for one. Mollick draws out the implications for trying to bring AI into large organizations.
The Garbage Can represents a world where unwritten rules, bespoke knowledge, and complex and undocumented processes are critical. It is this situation that makes AI adoption in organizations difficult, because even though 43% of American workers have used AI at work, they are mostly doing it in informal ways, solving their own work problems. Scaling AI across the enterprise is hard because traditional automation requires clear rules and defined processes, the very things Garbage Can organizations lack.
But maybe you can skip over the problem of explaining all this to the AI.
Our human understanding of problems, built from a lifetime of experience, is not that important in solving a problem with AI. Decades of researchers’ careful work encoding human expertise was ultimately less effective than just throwing more computation at the problem. We are soon going to see whether the Bitter Lesson applies widely to the world of work.
You don’t have to teach the AI chess skills by articulating the strategic and tactical insights of human players. Just show it transcripts of a zillion games, and by relating the positions and moves to the eventual game outcomes, the AI will figure out the best moves to make.
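To make the outcome-driven idea concrete, here is a minimal sketch in Python. It is not how a real system such as AlphaZero works (those rely on self-play and deep reinforcement learning); it only illustrates the principle of scoring moves by the outcomes of the games in which they appeared, with no hand-coded chess knowledge. All of the positions, moves, and games below are made up for illustration.

```python
# A toy sketch of learning from outcomes alone: no chess strategy is encoded,
# only (position, move) pairs from game transcripts and who eventually won.
# The positions, moves, and "games" here are fabricated for illustration.
from collections import defaultdict
from typing import Dict, List, Tuple

# A transcript is the sequence of (position, move) pairs one player made,
# plus the final result from that player's point of view: +1 win, -1 loss.
Transcript = Tuple[List[Tuple[str, str]], int]

def learn_move_values(transcripts: List[Transcript]) -> Dict[Tuple[str, str], float]:
    """Average the final outcome over every time a move was played in a position."""
    totals: Dict[Tuple[str, str], float] = defaultdict(float)
    counts: Dict[Tuple[str, str], int] = defaultdict(int)
    for moves, result in transcripts:
        for position, move in moves:
            totals[(position, move)] += result
            counts[(position, move)] += 1
    return {key: totals[key] / counts[key] for key in totals}

def best_move(values: Dict[Tuple[str, str], float], position: str) -> str:
    """Pick the move with the highest average outcome in the given position."""
    candidates = {m: v for (p, m), v in values.items() if p == position}
    return max(candidates, key=candidates.get)

# A stand-in for the "zillion games" -- just enough data to show the mechanics.
games: List[Transcript] = [
    ([("start", "e4"), ("middlegame", "Nf3")], +1),
    ([("start", "e4"), ("middlegame", "h4")], -1),
    ([("start", "a3"), ("middlegame", "Nf3")], -1),
]

values = learn_move_values(games)
print(best_move(values, "start"))  # prints "e4", the move associated with better outcomes
```

Scaled up by many orders of magnitude, with neural networks in place of the lookup table, that is the spirit of the Bitter Lesson: raw data and computation substitute for articulated expertise.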
Mollick sees an analogous approach in the way OpenAI trains ChatGPT’s “agent” function.
ChatGPT agent represents a fundamental shift. It is not trained on the process of doing work; instead, OpenAI used reinforcement learning to train their AI on the actual final outcomes.
Mollick suggests that corporations may not need to explain their processes to an AI. Instead, they can give it the desired outcome and let the AI figure things out.
The effort companies spent refining processes, building institutional knowledge, and creating competitive moats through operational excellence might matter less than they think. If AI agents can train on outputs alone, any organization that can define quality and provide enough examples might achieve similar results, whether they understand their own processes or not.
He cautions,
Or it might be that the Garbage Can wins, that human complexity and those messy, evolved processes are too intricate for AI to navigate without understanding them.
Think of the AI’s abilities as general human capital. And think of the knowledge inside a firm (the “Garbage Can”) as specific human capital. Can the AI bypass the specific human capital and just apply general human capital to the particular needs of the firm?
I would bet no. This is based on my limited experience trying to get an AI to look at human interdependence the way that I look at it. Just feeding it a lot of my content is not enough. The AI’s general human capital includes a lot of the ideas of other economists and related academics, which ends up diluting my input. Instead, the more explicitly I prompt the AI to incorporate my views and not behave like a generic economist or academic, the more satisfying the result. To me, the moral is that you cannot expect the AI to use sheer exposure to data to overcome a lack of specific human capital. It is not like learning chess.
substacks referenced above: @
> You don’t have to teach the AI chess skills by articulating the strategic and tactical insights of human players
No, but my understanding is that ChatGPT is also not especially good at chess. If the answer continues to be building a specialized model & supporting software system for well-defined tasks that you want to have high performance on, then these models are hardly a panacea. In fact, that's just the multi-decade status quo.
I like to have AI battles, mostly Claude vs. Gemini, where I compare the answers and the justifications for those answers to the same set of questions. I find divergent answers more often than I expected, since both tools are basically scraping all of the same internet sites.
E.g., for my slow swing speed (88 mph), should I buy a mini golf driver with a women’s shaft or a regular men’s shaft, since a senior shaft is not currently available? The tools like to express certainty, but reach completely different answers. Which one should I trust?