LLM Links, 6/25
Andrew Chen on the entertainment industry; Nat Friedman on the lack of applications; Mark McNeilly on working applications; Ethan Mollick on the value of experience
Andrew Chen writes,

I often speak to prominent folks in the entertainment and video game industries. The good news is that these executives are very sharp and are following all the technology evolutions. They are hugely optimistic, and they acknowledge that AI will reinvent their industries. But there is no will to proactively embrace the technology. After all, when you have a series of franchises and delivery channels that work, why take the risk on new tech that might end up requiring a lot of rethinking of business models and workflows, might cause rebellion and dissension among customers and employees, and might not even work?
What the Internet did to the newspaper industry, AI will do to the big-production movie and game industries.
In a podcast with Ben Thompson (and also Daniel Gross), Nat Friedman says,
There’s this big debate in the AI field about what are the rate-limiters on progress, and the scaling purists think we need more scale, more compute. There are people who believe we need algorithmic breakthroughs, and so we are AI researcher limited, and then there are folks who believe we’re hitting a data wall, and what we’re actually gated on is high-quality data and maybe labeled data, maybe raw data, maybe video can provide it.
But at least in terms of felt progress, I think it is UI and products. There’s still a massive capability overhang where we are still learning how to make these models useful to people. It’s really shocking to me how little has happened.
+1.
But most of the podcast consists of gushing over Apple and its “Apple Intelligence” product announcement, which I thought was lame. Go figure. Pointer from Tyler Cowen.
Mark McNeilly writes,

In one Fintech company, an AI chatbot replacing human customer service reps cut costs by 95%, reduced resolution time from 45 minutes to 60 seconds, and increased median customer satisfaction from 55% to 69%. Examples such as these are why many predict customer service is prime territory for Gen AI automation.
Ethan Mollick writes,

As one of my PhD advisors, Eric von Hippel, pointed out: R&D is very expensive because it involves lots of trial and error, but when you are doing a task all the time, trial and error is cheap and easy. That is why a surprisingly large percentage of important innovation comes, not from formal R&D labs, but rather from people figuring out how to solve their own problems. Learning by doing is cheap if you are already doing.
He points out that the people best placed to figure out how to use LLMs in an organization are the people already doing jobs in the business. But management may be tempted to hand the job of integrating LLMs into the corporation over to the Information Technology department (something that the IT department will certainly desire, and perhaps insist upon).
LLMs' input is the Internet. What is on the Internet will increasingly be generated by LLMs. What happens when LLM input is largely LLM output? Will that create a feedback loop where inaccuracies and biases are reinforced and amplified? Can LLMs be trained to reduce the feedback effect by, for example, identifying and rejecting bad or biased information?
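One way to see the worry is with a toy model (my own illustration, not from the post): treat the training corpus as having some fraction of erroneous or biased text, assume each model generation reproduces the errors it inherits and introduces a few new ones, and compare what happens with and without a filter that rejects some of the bad text.

```python
def error_fraction(p0: float, e: float, generations: int) -> float:
    """Toy feedback loop: p is the fraction of the corpus that is
    erroneous/biased; each generation reproduces inherited errors
    and introduces new ones at rate e among the clean remainder."""
    p = p0
    for _ in range(generations):
        p = p + e * (1 - p)
    return p

def filtered_error_fraction(p0: float, e: float, f: float,
                            generations: int) -> float:
    """Same loop, but each generation a filter rejects a share f of
    the erroneous text before it re-enters the training corpus."""
    p = p0
    for _ in range(generations):
        p = p + e * (1 - p)            # new errors introduced
        p = p * (1 - f) / (1 - p * f)  # erroneous docs removed, corpus renormalized
    return p

# Starting at 5% bad text with 2% new errors per generation:
unfiltered = error_fraction(0.05, 0.02, 10)       # compounds upward
filtered = filtered_error_fraction(0.05, 0.02, 0.5, 10)  # held in check
```

In this toy model, errors compound toward saturation without filtering, while even an imperfect filter (here rejecting half the bad text per generation, an assumed number) keeps the error level bounded. This is only an arithmetic sketch of the feedback worry, not a claim about how real training pipelines behave.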
Saw a tweet about Edison's response to the critique that he failed over 1,000 times before getting the lightbulb right: each failure successfully excluded an alternative.
When (not if, but also not yet) AI can test multiple possibilities and pick the best one, it will be helpful. This requires a problem with a clear way to evaluate which of the alternative answers is best.
All digital games are like this, but most of life isn't. Still, a lot of life might be amenable to trial-and-error testing.
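The "test multiple possibilities" pattern is essentially best-of-N search: generate candidates, score each with an evaluator, keep the winner. A minimal sketch (the names and the example scoring function are illustrative assumptions, not from the post):

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

def best_of_n(candidates: Iterable[T], score: Callable[[T], float]) -> T:
    """Return the candidate with the highest evaluator score."""
    return max(candidates, key=score)

# Illustrative: candidate answers to 'sum of 1..100', scored by
# closeness to a checkable ground truth -- the 'clear way to
# evaluate' that the text says this pattern requires.
candidates = [4950, 5050, 5150]
truth = sum(range(1, 101))
best = best_of_n(candidates, lambda c: -abs(c - truth))
print(best)  # 5050
```

The search loop is trivial; the hard part is `score`. Digital games supply it for free (points, wins), which is the text's point about why most of life doesn't fit the pattern yet.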
Accuracy remains key for business usefulness, except maybe in predicting the future.