GPT links, 4/16
Tim B. Lee on language models; Tom Morgan and the virtuals vs. the physicals; Francisco Toro on the speed of change; Robin Hanson on which AI fears are reasonable
What makes a transformer model powerful is its ability to “pay attention” to multiple parts of its input at the same time. When Play.ht’s model generates the audio for a new word, it isn’t just “thinking about” the current word or the one that came before it, it’s taking into account the structure of the sentence as a whole. This allows it to vary the speed, emphasis, and other characteristics of speech in a way that mirrors the speech patterns of the person whose voice is being cloned.
His new substack on AI is worth checking out.
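For readers who want to see what "paying attention to multiple parts of the input at the same time" actually computes, here is a minimal sketch of scaled dot-product attention, the standard mechanism in transformer models. This is purely illustrative, not Play.ht's actual system; the array shapes and names are made up for the example.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position attends to every other."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
    # Softmax over the sequence: each row becomes a set of attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                # each output is a weighted mix of the whole sequence

# Five "words", each a 4-dimensional embedding (random stand-ins for the example).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
out = attention(x, x, x)  # self-attention: each word's output blends all five words
print(out.shape)          # (5, 4)
```

The key point is in the last line of the function: each word's representation is rebuilt as a weighted combination of every word in the sentence, which is what lets the model vary pacing and emphasis based on sentence structure rather than just the previous word.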
Later in his post, he warns,
Imagine someone leaking fake audio of a political candidate saying something embarrassing, or circulating a fake radio or television broadcast on social media.
Jason Manning describes a different frightening use for fake audio: blackmail.
Last year I was frankly embarrassed when I read Vaclav Smil’s latest book “How The World Really Works.” As a fully paid-up member of the coastal virtuals, I realized how utterly I had failed to grasp the scale and energy intensity of the modern physical world.
Think of people as earning a living either in the digital economy or the physical economy. For several decades, the virtuals have been winning (“software eats everything”). But Morgan explores whether this may be changing.
My thought is that AI is creating an “average is over” moment for the virtuals. Instead of needing large numbers of competent symbol analysts, the economy will be taken over by a relative few of the most creative adopters of GPT-type technology.
Nine of the top ten chess players in the world at the moment were born after 1987: their critical learning period for chess came post-Deep Blue, with computer training baked into their routines from the start.
His point is that AI may have a similar effect in other fields. I can imagine that the top knowledge workers a decade from now will be the relative few who are fluent in their use of AI.
the most likely AI scenario looks like lawful capitalism, with mostly gradual (albeit rapid) change overall. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives. Yes, sometimes competition causes firms to cheat customers in ways they can’t see, or to hurt us all a little via things like pollution, but such cases are rare. The best AIs in each area have many similarly able competitors. Eventually, AIs will become very capable and valuable.
It’s a long essay, not easy to excerpt.
Substacks referenced above:
Tim B. Lee
Tom Morgan
Francisco Toro
Robin Hanson
"gradual (albeit rapid)". What the heck is that supposed to mean?
“Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives.”
It seems to me that a huge part of economics is the study of consequences of laws that were not intended. Sometimes those consequences are good, but not usually for modern statutes. So behaving in a lawful way seems more likely to result in behaving in an uncivil way.