LLM links, 2/4
Yann LeCun on what a 4-year-old knows; Sam Hammond pushes an analogy with the Enlightenment; another AI grader of essays; Aden Barton on the economics of rapid growth.
In 4 years, a child has seen 50 times more data than the biggest LLMs. 1E13 tokens is pretty much all the quality text publicly available on the Internet. It would take 170k years for a human to read (8 h/day, 250 words/minute). Text is simply too low bandwidth and too scarce a modality to learn how the world works.
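The 170k-year figure checks out as rough arithmetic, assuming the conventional conversion of about 0.75 words per token (my assumption, not a figure stated in the quote):

$$\frac{10^{13}\ \text{tokens}\times 0.75\ \text{words/token}}{250\ \text{words/min}\times 60\ \text{min/h}\times 8\ \text{h/day}\times 365\ \text{days/yr}}\approx 1.7\times 10^{5}\ \text{years}$$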
Children learn by sensing and by trying to manipulate objects. I expect that a lot of work in AI going forward will be along these lines.
With the Young Hegelians, the self-model within the human neural network was thus not only situationally aware, but starting to devise strategies to jailbreak its normative programming and pursue autonomy for its own sake, from the individualist anarchism of Max Stirner to the revolutionary communism of Marx and Engels. The critical turn in philosophy can thus be recast as the human neural network becoming aware of its subjugation to an irrational cybernetic agency — religious dogmas, monarchies, nationalisms, the patriarchy, etc. — and working to actualize its freedom through an exfiltration from this historical conditioning, just as a situationally aware AI might try to strip off its RLHF.
There is much more in that vein. The combination of tech-speak and philosophy-speak seems meant to suggest that the AI alignment problem is simply a variation on the human alignment problem: how we manage to get along with one another in large-scale society.
CredAIable, an essay grader, writes,
faulty reasoning happens to more people (entire editorial boards, even) than you’d think.
Washington Post — Strong Left, 6 fallacies for “N.H. Republicans should be honest about what backing Trump would mean”
New York Times — Strong Left, 4 fallacies for “The Responsibility of Republican Voters”
The Economist — Moderate Right, 4 fallacies for “How the border could cost Biden the election”
As you know, I tried a related but different approach to grading essays using ChatGPT-4. I recommend subscribing to credAIable to see how the project goes. I wish it well.
Also, their AI gives a very good grade to something called 1440, which aims to be a politically neutral newsletter. It could be that 1440 tends to avoid fallacies because it only provides short snippets and links. But I’m going to subscribe and see what I think.
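For readers curious what grading essays with an LLM looks like in practice, here is a minimal sketch of the general technique: prompt a chat model to enumerate fallacies and report a count. This is purely my illustration of the approach, not credAIable's actual pipeline; the prompt wording and model choice are assumptions.

```python
# Minimal sketch of an LLM-based fallacy grader (illustrative only;
# not credAIable's actual method). Requires: pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are grading an opinion essay for logical fallacies. "
    "List each fallacy you find (name + the offending sentence), "
    "then end with a line of the form 'TOTAL: <n>'."
)

def grade_essay(essay_text: str) -> str:
    """Ask the model to enumerate fallacies in a single essay."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model would do here
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

print(grade_essay("Everyone knows my opponent's plan will fail..."))
```

The interesting design questions, which a sketch like this glosses over, are how to keep the fallacy counts consistent across essays and how to keep the model's own political priors out of the grade.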
There are also fundamental limits that can’t be programmed away, a point that economists often bring up when criticizing predictions of extreme AI-driven growth.
For starters, the laws of physics sometimes limit how much technology can improve. “You can’t have more efficient energy forever,” Jones said, “because you run into the second law of thermodynamics.”
This may seem like an extreme upper bound, but remember how fast 30 percent annual growth would be. Thousands of years of progress would be crammed into a few decades as our knowledge compounds over and over again. Questions would arise very quickly surrounding how much better our usage of energy, land, or natural resources can get.
If you believe that ultimately there are hard resource constraints, then the more spectacular the growth rate in the short run, the faster we will run into limits.
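To see how quickly those limits would bind, consider the doubling arithmetic at 30 percent annual growth (my arithmetic, not Barton's):

$$t_{2\times}=\frac{\ln 2}{\ln 1.3}\approx 2.6\ \text{years},\qquad t_{1000\times}=\frac{\ln 1000}{\ln 1.3}\approx 26\ \text{years}$$

The economy would double roughly every two and a half years and be a thousand times larger within three decades, so any fixed ceiling on energy, land, or natural resources would be reached within a single generation.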
In what parallel universe is The Economist "Moderate Right"? Maybe it was 20 years ago.
Whether the economy is real or simulated (ems à la Hanson, or virtual reality for the masses), it will presumably require energy. We have "only" a few hundred years of exponential growth left before our waste heat alone would ruin the biosphere. Ergo, the economy must become some combination of stagnant (or much slower growing), non-energy-based, or non-biological, and this must happen within one or two hundred years.
https://tmurphy.physics.ucsd.edu/papers/limits-econ-final.pdf
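The commenter's "few hundred years" matches a back-of-envelope version of Murphy's argument. Assuming roughly $1.8\times 10^{13}$ W of human power use today, growth at the paper's canonical 2.3 percent per year, and about $1.2\times 10^{17}$ W of sunlight absorbed at the Earth's surface (all approximate figures), waste heat alone rivals total sunlight after

$$t=\frac{\ln\!\big(1.2\times 10^{17}/1.8\times 10^{13}\big)}{\ln 1.023}\approx 390\ \text{years}$$

which is why the biosphere constraint arrives on a centuries, not millennia, timescale.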