LLM Links
James O’Malley on the future of music; Ethan Mollick on different business assumptions; The Zvi on Leopold Aschenbrenner; Dominic Cummings on the same.
If you need background music for your corporate health and safety training video, or you need a theme tune for your podcast, then it is a no-brainer to use AI instead of paying an expensive musician.
For better or worse then, AI will become the ubiquitous source of, essentially, “elevator music” for the entire world, and it will have terrible consequences for many people working in the music industry today.
There used to be a lot of money in writing jingles for commercials for insurance companies, theme songs for TV shows, background music for movies, and so on. Probably not so lucrative going forward.
Ethan Mollick reacts to “Apple Intelligence” by stepping back and looking at how different schools of thought are emerging concerning the use of LLMs.
the most advanced generalist AI models often outperform specialized models, even in the specific domains those specialized models were designed for.
…smaller, weaker models give Apple a lot of control over AI use on their systems and offload a lot of work to the phone or computer. But they still don’t have a frontier model, so they are working with OpenAI to send GPT-4 the questions that are too hard for Apple’s models to answer.
…what we are seeing from Apple is a clear and practical vision of how AI can help most users, without a lot of effort, today. In doing so, they are hiding much of the power, and quirks, of LLMs from their users.
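As an aside, the division of labor Mollick describes (a small on-device model for everyday requests, with the hard questions shipped off to a frontier model) is easy to picture as a simple router. The sketch below is purely hypothetical: the function names, the confidence heuristic, and the threshold are stand-ins, not Apple’s or OpenAI’s actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    source: str  # "on-device" or "frontier"

def local_confidence(prompt: str) -> float:
    """Toy stand-in for the on-device model's own estimate of whether it can cope."""
    return 0.9 if len(prompt) < 200 else 0.3

def answer_on_device(prompt: str) -> str:
    # Placeholder for the small local model running on the phone or laptop.
    return f"[local model answer to {prompt!r}]"

def answer_with_frontier(prompt: str) -> str:
    # Placeholder for a network call to a large hosted model (e.g. GPT-4),
    # presumably gated behind a user-consent step.
    return f"[frontier model answer to {prompt!r}]"

def route(prompt: str, threshold: float = 0.5) -> Answer:
    """Prefer the local model; escalate only when it is unlikely to cope."""
    if local_confidence(prompt) >= threshold:
        return Answer(answer_on_device(prompt), "on-device")
    return Answer(answer_with_frontier(prompt), "frontier")

print(route("Set a timer for ten minutes"))
print(route("Compare these three forty-page insurance policies and draft a summary. " * 10))
```

Most of the interesting product decisions hide inside that confidence heuristic and in what the user is shown when the hand-off happens.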
Personally, I thought that the Apple announcement had no compelling use cases.
I want a robot that will cook dinner and fold the laundry, a tutor that will inspire my grandchildren, and an intellectual clone of myself that you can converse with.
Siri is just a waste of battery.
Meanwhile,
There is a specter haunting all AI development, the specter of AGI - Artificial General Intelligence, the hypothetical machine better than humans at every intellectual task. This is the explicit goal of OpenAI and Anthropic, and it is something they hope to achieve in the near term. For people who genuinely believe they are building AGI soon, almost nothing else is important. The AI models along the way to AGI are mere stepping stones, not anything you want to build a business around, because they will be replaced by better models soon. OpenAI’s systems may feel unpolished because the company believes that future models will significantly advance AI capabilities. As a result, they may not be investing heavily in refining systems that will likely be outdated as new models are released.
I personally have not bought into the AGI target. I think of the LLM as an important advance in the human-computer interface. But that’s it.
Leopold Aschenbrenner sees AGI coming soon. Zvi Mowshowitz examines Aschenbrenner’s assumptions.
One of the first automated jobs will be AI research. Then things get very fast. Decades of work in a year. One to a few years for much smarter than human things.
AIs doing AI research does indeed seem like a path to a major take-off of artificial intelligence. I do not know enough to have an informed opinion on the likelihood of this, but if I had to guess, I would doubt that we will see large language models doing important AI research.
I think that A) beliefs about imminent capabilities of AI, and possibilities for the PRC and Russia, will soon freak out much more of the National Security network across the world, B) it will destabilise (already destabilised and worsening) WMD deterrence as states start to fear scenarios of pre-emptive action that now seem highly implausible (e.g ‘is it possible new AI-enabled technologies will allow part X of the nuclear weapon system to be neutralised?’), C) it will become clear that cyber-defence and cyber-attack, and intelligence services more broadly, will be transformed much more rapidly than most senior Whitehall Nat-Sec types think is likely.
Substacks referenced above:
https://www.persuasion.community/p/music-just-changed-foreve
"I think that A) beliefs about imminent capabilities of AI, and possibilities for the PRC and Russia, will soon freak out much more of the National Security network across the world"
This is already happening, but the freaking is still behind the reality, and NatSec types have not fully "priced in" reasonable expectations of future capabilities in terms of their attitudes, planning, urgency, etc. That’s not atypical. Usually there is a period of under-freaking, followed by a panicky overreaction, and then an easing back into equilibrium, especially as new mitigations and control measures prove stable and durable.
But that’s because previous changes were more stepwise in nature - big leaps followed by slow and steady improvements or increases in quantities. This time may be different. Things are changing so fast, so much, and perhaps for long enough, that the actual freaking will never catch up to the appropriate level of freaking for most establishment types.
O’Malley and Cummings deliver two versions of the same message: today’s AI, however interesting and however much it hints at the future, is still far from the pinnacle; what we see now will look like tinkertoys next to what is coming. Whether the endpoint is AGI or not, the advancement of AI is rapid and ongoing. Buckle your seat belts. This is not necessarily an optimistic vision.