GPT/LLM links
Jim Fan on robotics; The Zvi on the copyright infringement issue; Krzysztof Tyszka-Drozdowski on AI and lawyers; Glenn Reynolds on AI and "symbolic analysts"
I've been asked what's the biggest thing in 2024 other than LLMs. It's Robotics. Period. We are ~3 years away from the ChatGPT moment for physical AI agents.
Pointer from Alexander Kruel.
As is usually the case, the critics misunderstand the technology involved, complain about infringements that inflict no substantial damages, engineer many of the complaints being made and make cringeworthy accusations.
That does not, however, mean that The New York Times case is baseless. There are still very real copyright issues at the heart of Generative AI. This suit is a serious effort by top lawyers. It has strong legal merit. They are likely to win if the case is not settled.
If the LLMs were merely using NYT content to learn English, I think that would be fine. But when you use ChatGPT as a search tool and it comes back with a verbatim article from the NYT, with no attribution, that is a problem. Another problem is when ChatGPT hallucinates and falsely attributes something to the NYT.
Krzysztof Tyszka-Drozdowski writes,
Critics portray lawyers as the main agents of rent-seeking tendencies in society. Their role amounts largely to the transfer of wealth; they do not contribute to its creation, acting as a seedbed for what Jonathan Rauch called the ‘parasite economy.’ Every regulatory battle, every new lawsuit, every struggle for redistribution results in profit for them. The transfers they enforce, along with subsidies and court awards, are quantifiable. In contrast, the magnitude of wasted material and human resources they entail, not to mention systemic inefficiencies they induce, is difficult to measure.
He is upbeat about
the numerous benefits that will come from AI’s revolutionizing lawyer jobs – reducing the parasite economy, improving talent allocation and democratizing the consumption of legal services
Meanwhile, we have the NYT lawsuit.
ChatGPT can write code, and sometimes it’s pretty good code. (Sometimes it’s not, but then again, you can say that about the code that people write, too.) ChatGPT can write news stories, and essays, and speeches, and again, they’re not always gems, but neither are the actual human products in those areas, either. And the AI programs get better from one year to the next, while human beings stay pretty much the same. That being the case, we can expect them to become a serious threat to jobs in the near future.
The point he makes about the pace of improvement is very important. It took decades for computers to be able to play chess at the highest level, but then they very quickly had humans in the rearview mirror. I think that whenever an AI can handle a task at a human level, it will quickly achieve superiority.
Computers have armies of scientists and engineers enhancing them with more powerful chips and better software. Human brains enjoy no comparable resources dedicated to their improvement.
Reynolds sees AI replacing jobs among the elite that Robert Reich calls "symbolic analysts." I myself am hesitant to predict where the AI hurricane will first make landfall in the job market. Nor do we know which occupational groups will succeed in using legal and political means to protect themselves. Did the Hollywood writers save themselves, or are they just going to take the major firms down with them?
Highly attention-grabbing topics tend to attract a lot of commentary from people who have no special interest in or knowledge of the thing at hand. Unsurprisingly, this commentary is often quite bad. This was quite obvious with the war in Ukraine, and now also with the violence in Gaza. I observe the same trend with AI.

To simplify a bit, let's just consider the effect on writers. Generative AI is a tool which decreases the cost of producing text. One could hypothesize that this will lead to less employment in this field. On the other hand, you could suggest that writers will become more productive, and hence overall employment in the field will increase. My understanding is that what actually happens will depend on the relevant elasticities, which you would have to estimate from the empirical economics literature. Reynolds doesn't seem to have considered the possibility of doing that, preferring just to write some empty speculation instead.

Obviously, empty speculation is allowed and, indeed, a treasured human pastime. Nevertheless, I would like to see a bit lower status for people who habitually do this, especially those who don't seem to care or even recognize when their half-baked ideas are later proved wrong. Again, such people were a particularly big problem in the case of the Ukraine war. As far as AI goes, I expect the situation to only get worse, since the technology will only draw more attention from this point on.
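The elasticity point can be made concrete with a toy calculation. All the numbers below are illustrative assumptions, not estimates: suppose AI halves the labor a writer needs per article, and that this cost saving is passed through as a 50% price cut. Whether total writer employment rises or falls then depends on how elastic demand for articles is.

```python
def writer_employment(base_articles, labor_per_article, price_change_pct, elasticity):
    """Toy constant-elasticity model: a price change shifts the quantity of
    articles demanded; total employment = articles demanded * labor per article."""
    quantity_change_pct = elasticity * price_change_pct      # % change in articles demanded
    new_articles = base_articles * (1 + quantity_change_pct / 100)
    return new_articles * labor_per_article

# Baseline (no AI): 1000 articles at 1.0 unit of labor each = 1000 units of employment.
# With AI, labor per article falls to 0.5 and the price falls 50%.

# Very elastic demand (e = -2.5): quantity rises 125%, employment rises to 1125.
print(writer_employment(1000, 0.5, -50, -2.5))

# Inelastic demand (e = -0.5): quantity rises only 25%, employment falls to 625.
print(writer_employment(1000, 0.5, -50, -0.5))
```

The crossover in this toy setup is an elasticity of magnitude 2: only if a 50% price cut more than doubles the quantity of articles demanded does extra demand outweigh the halved labor input. That is exactly the kind of magnitude one would want the empirical literature to pin down.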
"democratizing the consumption of legal services" => Lowering the price of legal services to plaintiffs so that defendants are forced to consume more legal services?