GPT/LLM links, 8/22
Ian Bremmer and Mustafa Suleyman on world power relationships; Freddie deBoer is not impressed with the latest AI; Tyler Cowen on limits to growth; The Zvi disagrees with Tyler
Ian Bremmer and Mustafa Suleyman write,
Soon, AI developers will likely succeed in creating systems with self-improving capabilities—a critical juncture in the trajectory of this technology that should give everyone pause.
This assumption is at the core of doomer scenarios.
There is little use in regulating AI in some countries if it remains unregulated in others. Because AI can proliferate so easily, its governance can have no gaps.
To me, this sounds like they are saying that regulation of AI must be totalitarian in order to succeed. That in itself is a doomer scenario. But
at least for the next few years, AI’s trajectory will be largely determined by the decisions of a handful of private businesses, regardless of what policymakers in Brussels or Washington do. In other words, technologists, not policymakers or bureaucrats, will exercise authority over a force that could profoundly alter both the power of nation-states and how they relate to each other.
The authors wish to conjure up regulatory bodies that are worldwide, wise, and adaptable. Predictably, Robert Wright likes the notion. If they are correct that AI necessitates this manner of regulation, then count me as a doomer.
Freddie deBoer writes,

it feels like people have been told so relentlessly by the media that what we are choosing to call artificial intelligence is currently, right now, already amazing that they feel compelled to go along with it. But this isn’t amazing. It’s a demonstration of the profound limitations of these systems that people are choosing to see as a representation of their strengths.
He makes it sound like a naked emperor.
Tyler Cowen writes,

as one set of constraints is relaxed — in this case access to intelligence — the remaining constraints will matter all the more. Regulatory delays will be more frustrating, for instance, as they will be holding back a greater amount of cognitive horsepower than in times past. Or as AI improves at finding new and better medical hypotheses to test, the lags and delays in systems for clinical trials will become all the more painful. In fact they may worsen as the system is flooded with conjectures.
And if we get the sort of world government that folks like Bremmer, Suleyman, and Robert Wright hope for, all bets are off.
Tyler expects “advanced artificial intelligence will boost the annual US growth rate by one-quarter to one-half of a percentage point.” That sounds right to me. It is a significant change, but not an earth-shattering one. The problem is that there are so many other adaptations needed to foster the economic impact of AI.
The Zvi disagrees:

I think what Tyler predicts here is on the extreme low end, even if we got no further substantial foundational advances from AI beyond the GPT-4 level, and even if rather harsh restrictions are put in place. The comparisons to the Industrial Revolution continue to point if anything to far faster growth, since you would then have a metaphorical power law ordering of speed of impact from something like humans, then agriculture, then industry.
Let’s say that the industrial revolution increased the annual growth rate from 0.2 percent to 2 percent. Does a comparable increase mean going from 2 percent to 4 percent, or from 2 percent to 20 percent? That is, should “comparable” be read additively (add 1.8 points) or multiplicatively (multiply by 10)?
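A minimal back-of-the-envelope sketch of the two readings, compounded over a generation. The 0.2 and 2 percent figures are the ones stipulated above, not historical estimates; the additive reading works out to 3.8 percent, which rounds to the “about 4” in the question:

```python
# Two readings of a "comparable increase" to the Industrial Revolution.
# The growth figures are the stipulated ones above, not historical data.

pre_ir, post_ir = 0.002, 0.02   # annual growth before/after the IR: 0.2% -> 2%
baseline = 0.02                  # today's baseline growth rate: 2%

additive = baseline + (post_ir - pre_ir)        # +1.8 points -> 3.8% ("~4%")
multiplicative = baseline * (post_ir / pre_ir)  # x10 -> 20%

for label, g in [("additive", additive), ("multiplicative", multiplicative)]:
    factor = (1 + g) ** 30  # cumulative GDP multiple over 30 years
    print(f"{label}: {g:.1%} annual growth, about {factor:.0f}x GDP in 30 years")
```

Compounded over thirty years, the additive reading roughly triples GDP, while the multiplicative reading multiplies it a couple of hundredfold. Tyler’s quarter-to-half point boost sits well below even the additive reading; the multiplicative reading is the territory of full takeoff scenarios.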
How do we expect AI to improve the world economy? Having it replace certain jobs is an incremental change; technology has long been destroying old jobs and creating new ones. How will AI replace jobs in a far superior way?
Take software programming. If AI is used to build software, do humans need to understand what the AI coded? The big leap would be for humans to turn software coding and testing over to the computers entirely. That could be a huge productivity jump. But at what risk? Who validates the AI? Who is liable for what the AI codes?
There is a great looming problem with AI regulation: modern America has embraced a regulatory culture in which it is OK to do bad things because (1) you might get away with it, and (2) if you are caught, the punishment still makes the bad thing worth it. That history tells me that companies will use AI in unethical ways, and that this will lead to large systemic and social failures.
I sense that AI proponents are ignoring the cost of these failures in their projections. I wonder whether they even imagine that AI can produce great failures. Such is the hubris of technocrats. My forecast is that we will see no great improvement in social welfare, but we will see increasing inequality as ever more gains are privatized and losses socialized.
The industrial revolution greatly increased the population, which is itself an important factor of production. Likely the increase in population did more for GDP than the industrial revolution did otherwise.

I don't see AI adding to the population, so I think its contribution to GDP will be much smaller than the industrial revolution's, even if you think it's a game-changing technology.
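That argument is an instance of the standard growth-accounting identity: GDP growth decomposes into population growth plus per-capita output growth. A minimal sketch of the mechanics, with deliberately round, hypothetical rates rather than historical estimates:

```python
# Growth accounting: (1 + g_gdp) = (1 + g_pop) * (1 + g_per_capita).
# All rates below are round, hypothetical illustrations, not estimates.

def gdp_growth(g_pop: float, g_per_capita: float) -> float:
    """Combine population and per-capita growth into total GDP growth."""
    return (1 + g_pop) * (1 + g_per_capita) - 1

# Industrial-Revolution-style case: both channels operating.
both = gdp_growth(g_pop=0.01, g_per_capita=0.01)              # ~2.01%

# The commenter's AI case: productivity rises, population does not.
productivity_only = gdp_growth(g_pop=0.0, g_per_capita=0.01)  # 1.00%

print(f"population + productivity: {both:.2%}")
print(f"productivity only:         {productivity_only:.2%}")
```

On this decomposition, if the population channel supplied half of the industrial revolution's headline GDP growth, then an AI boom working only through per-capita output starts from half the headline number, which is the commenter's point.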