LLM Links, 10/8/2024
Tim B. Lee on o1; David Deming on adoption rates; Ben Thompson offers his vision; Allison Schrager on AI personal coaches
it breaks the problem down into smaller problems and then solves those problems one by one.
In a nutshell, that is the Chain-of-Thought process that OpenAI's o1 uses to solve a complex problem. The complex problem is not in its training data. But the simpler problems are. So if its process of breaking down the complex problem works, it gets the right answer.
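As a toy illustration of the decompose-and-solve idea (my sketch, not a description of o1's internals), consider a multiplication that is "too big" to answer in one step but falls apart into sub-steps whose answers are easy:

```python
# Toy illustration of chain-of-thought decomposition (my sketch, not OpenAI's method).
# The "complex" problem 23 * 47 is broken into simpler subproblems,
# each of which is trivial on its own, then the pieces are recombined.
def solve_step_by_step() -> int:
    # Step 1: rewrite 23 * 47 as 23 * 40 + 23 * 7
    part1 = 23 * 40   # simpler subproblem: 920
    part2 = 23 * 7    # simpler subproblem: 161
    # Step 2: combine the partial answers
    return part1 + part2

print(solve_step_by_step())  # 1081
```

If any single decomposition step is wrong, the final answer is wrong too, which is why Lee's caveat below about "canned" problems matters.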
Lee goes into how o1 was trained.
With o1, OpenAI focused on reasoning problems in math and computer programming. Not only do these problems have objective right answers, it’s often possible to automate the generation of new problems along with an answer key.
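The property Lee highlights, that new problems can be generated together with an answer key, is what makes reinforcement learning tractable here: every model answer can be graded automatically. A minimal sketch of that idea (my illustration, not OpenAI's pipeline):

```python
# Sketch of auto-generating training problems with an objective answer key
# (my illustration of the idea in the quote, not OpenAI's actual pipeline).
import random

def make_problem(rng: random.Random) -> tuple[str, int]:
    """Return a problem statement and its objectively correct answer."""
    a, b = rng.randint(10, 99), rng.randint(10, 99)
    return f"What is {a} * {b}?", a * b

rng = random.Random(0)  # seeded for reproducibility
problems = [make_problem(rng) for _ in range(3)]
for question, answer in problems:
    # In an RL loop, a model's response would be scored against `answer`.
    print(question, "->", answer)
```

Because the key is generated alongside the problem, a reward signal ("did the model match the key?") comes for free, with no human graders in the loop.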
It is easier to train via reinforcement learning when there are objective answers. Lee concludes,
while I’m impressed by how good LLMs have gotten at solving canned reasoning problems, I think it’s important for people not to confuse this with the kind of cognition required to effectively navigate the messiness of the real world. These models are still quite far from human-level intelligence
David Deming cites research he co-authored.
Our data show that generative AI adoption has been faster (39.4% after 2 years) than PCs and the Internet (20% and 30% after three years respectively). For PCs and the internet, usage more than doubled between year 3 and year 15, which if the trend holds implies that generative AI usage would exceed 80% by 2036.
Plus all of the people who will be using it without knowing that they are using it.
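The arithmetic behind Deming's extrapolation is straightforward, and worth making explicit (my back-of-the-envelope restatement, not his model):

```python
# Back-of-the-envelope check of the extrapolation in the quote
# (my restatement, not Deming's actual model).
genai_year2 = 0.394       # generative AI adoption after 2 years
growth_factor = 2.0       # PC/internet usage "more than doubled" from year 3 to year 15;
                          # 2.0 is therefore a lower bound on the factor
projected = genai_year2 * growth_factor
print(f"Projected adoption twelve years on: {projected:.1%}")
# Exactly 2x gives 78.8%; any factor above ~2.03 clears the "exceed 80%" threshold.
```

So the "exceed 80% by 2036" claim rests on generative AI at least matching the more-than-doubling that PCs and the internet managed over the same stretch.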
Executives, however, want the benefit of AI now, and I think that benefit will, like the first wave of computing, come from replacing humans, not making them more efficient. And that, by extension, will mean top-down years-long initiatives that are justified by the massive business results that will follow.
…My core contention here, however, is that AI truly is a new way of computing, and that means the better analogies are to computing itself. Transformers are the transistor, and mainframes are today’s models. The GUI is, arguably, still TBD.
To the extent that is right, then, the biggest opportunity is in top-down enterprise implementations.
I think that I get what he is saying, but I don’t really get why he is saying it. Read the essay for yourself.
Good, fee-only advisors tend to focus on high-net-worth clients, but everyone needs good advice. In fact, those with less money arguably need better advice, as they have less margin for error. With AI and current robo-advisors, everyone can get pretty good portfolio construction, arguably as good as from a high-end advisor. But portfolio construction is only part of an advisor’s job. Advisors also act as therapists, having difficult conversations—like telling you that you can’t afford to keep subsidizing your 40-year-old son’s music career, helping with end-of-life financial decisions, or guiding you when your spouse dies, and you have to manage household finances for the first time.
As she points out, there are other personal coaching applications for LLMs. Indeed, the typical “clone” at Delphi is a personal coach of some sort, not an economist blogger.
My sense is that people are very unmotivated to get personal coaching. It would be interesting to speculate on why that is. For example, is it too easy for a bad coach to masquerade as a good one, so that your chances of finding a good coach are slim?
On the Ben Thompson essay: my impression is that he is trying to draw a distinction between the so-called 'bottoms up' customer acquisition strategy and the 'top down' customer acquisition strategy.
I'm not sure why he doesn't mention it in his essay, but the canonical example of a company which acquired its earliest customers from the bottoms up strategy is Stripe. Stripe got developers at all sorts of companies to use its simple code, they built stuff with it, told their managers that it was amazing, those managers told their managers, etc., and soon enough, Stripe penetrated thousands of organizations.
Compare that, on the other hand, to, say, Oracle's or Salesforce's customer acquisition strategy, in which the sale was made at the top, ideally to the CxO of a large enterprise. The actual users of the product (the front-line employees) didn't have much choice in the matter of which software they were to use to do their jobs; the software, often awkwardly designed, was foisted on them from on high. From the perspective of CxO types, this was great. From the perspective of the ultimate users of the software, this was terrible.
The question with AI is whether customer acquisition will follow the bottoms up or top down path. Ethan Mollick has written frequently in favor of the bottoms up approach, in which individual employees experiment with ChatGPT or similar, and figure out how to make it a productivity-enhancing complement. He has also noted that many corporate liability policies militate against employee adoption of AI tech.
It makes sense that AI has much faster adoption than PCs or the internet. AI is free, requires no expertise, and requires no setup. You had to buy and install a modem and pay for a service to get the internet. You had to buy and learn to use a PC. AI just works. It's kind of like a new show on TV: it requires nothing on the user's part other than knowing it exists and is good.