23 Comments

In what parallel universe is The Economist "Moderate Right"? Maybe it was 20 years ago.

To be fair, it was grading a single op-ed, but I read that op-ed the other day, and it isn't moderate right either; it just warns the Biden Administration and the Democrats of the risks they run in supporting the open border the way they are.

I fail to see why it matters whether what you describe is labeled moderate right or moderate left. Either way, there are many on the right who favor well-regulated immigration.

The essay grader rated it moderate right, not me. The essay itself isn't, though; the grader misjudged it, probably because the algorithm simply assumes that anyone who opposes open-border policies, or writes about them critically, has to be on the right.

I don't know anything about the article other than what you've said, that it warns of the risks of open borders. That doesn't sound "campus-rag 'radical'" to me. How does a position become "campus-rag 'radical'" without being something akin to open borders? And what's the difference between moderate right and moderate left on this?

Correct. For two decades I kept up a gift subscription I got in my youth, and I used to look forward to the print edition and read the whole thing. There was a turn from moderate right to center around 2006, but it went "moderate left" during Obama's candidacy and never looked back.

I stopped getting The Economist through work shortly after I retired in 2019. I guess it and I were both moderate left on most social issues (still far from woke), but I never saw it as left on economic and financial issues. Maybe it had a more favorable view of government than I do, but I honestly don't remember on that one.

Guessing you just skipped over every little bit they put in there on "climate change" etc.?

I don't remember exactly where they stand on that but am sure it's a little left of me. They surely see more reason to be proactive than me. That said, I don't remember them assuming the worst or misrepresenting the likely consequences. Unless one sees climate change as a non-issue, they don't seem too far off.

The Economist has come to have something of a Jekyll & Hyde syndrome. For a well-written, rational discussion of various things it is still a go-to publication, but when it comes to any objective, critical analysis of our current 'social liberal' orthodoxies, it suddenly loses all perspective and becomes mere campus-rag 'radical'. Sad to say.

What do you think it is now?

Whether the economy is real or simulated (ems a la Hanson, or virtual reality for the masses), presumably it will require energy. We have "only" a few hundred years of exponential growth left before our waste heat alone would ruin the biosphere. Ergo, the economy must become some combination of stagnant (or much slower growing), non-energy-based, or non-biological, and this must happen within one or two hundred years.

https://tmurphy.physics.ucsd.edu/papers/limits-econ-final.pdf
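
For a sense of the arithmetic, here is a back-of-the-envelope sketch in the spirit of the linked paper. The starting power, the 2.3%/yr growth rate (the figure discussed downthread), and the blackbody treatment with the greenhouse effect ignored are simplifying assumptions, not a climate model.

```python
# Back-of-the-envelope: treat all human energy use as waste heat added to the
# sunlight Earth already absorbs, and ask what equilibrium temperature follows.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
EARTH_AREA = 5.1e14      # Earth's surface area, m^2
SOLAR_ABSORBED = 1.2e17  # rough solar power absorbed by Earth, W (assumption)
P0 = 1.8e13              # present human power use, ~18 TW (assumption)
GROWTH = 0.023           # 2.3%/yr, the rate discussed in this thread

def human_power(years: float) -> float:
    """Human energy use after `years` of compounding at GROWTH per year."""
    return P0 * (1 + GROWTH) ** years

def equilibrium_temp(extra_power: float) -> float:
    """Blackbody equilibrium temperature (kelvin) when human waste heat is
    added to absorbed sunlight; greenhouse warming is ignored, so the
    absolute numbers run low, but the trend is the point."""
    return ((SOLAR_ABSORBED + extra_power) / (SIGMA * EARTH_AREA)) ** 0.25

for years in (0, 100, 200, 300, 400, 500):
    p = human_power(years)
    print(f"+{years:3d} yr: human power {p:.2e} W, "
          f"equilibrium temp {equilibrium_temp(p) - 273.15:6.1f} C")
```

At 2.3% per year, power use grows roughly tenfold per century, so within four or five centuries waste heat alone rivals the sunlight Earth absorbs, which is the crossover the paper is pointing at.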

Yes. If one makes absurd assumptions, one can get a prediction. Doesn't make the prediction right.

I don't think either I or the author is making a prediction so much as saying that 2.3% energy consumption growth continuing for much longer is indeed an absurd assumption.

There we go.

Why did you pick 2.3% energy consumption growth?

What assumption(s) tie that to economic growth?

You lost me in the first paragraph:

"...the biggest LLMs. 1E13 tokens is pretty much all the quality text publicly available on the Internet."

What is an LLM? What is a 1E13 token, let alone a 1E13?

When I read texts in Chinese I expect to look up maybe two or three characters, but English is my native language.

Sorry! It's probably a great post....

https://nypost.com/2024/02/04/lifestyle/inside-cybrothel-the-worlds-first-ai-brothel-using-sex-dolls/

Looks like porn will beat edu for making money with combined VR, sex dolls, and AI.

Once a person is enjoying as many orgasms daily as he or she can, it’s hard to see 30% “growth in orgasms”. The Aden Barton note seems similarly click-bait-ish, but it raises a good question about limits to growth. Real-world limits, not to mention aging populations, will ensure it’s less than 30%, yet much higher than 2% seems likely. UBI and 20- or even 10-hour work weeks will likely become irresistible to ask for. That’s a reason to get practice with a government Job Guarantee instead, even if only 20 hours a week.

"The Economist — Moderate Right"

It has endorsed the Democratic presidential candidate at every US election since 2004. It is pro gay marriage, pro open borders, pro abortion, and pro virtually every US foreign intervention. It had Trump derangement syndrome that would put any histrionic NYT columnist to shame.

"moderate right" . sure. ok. have fun with that.

The CredAlable post mentions the difficulty they have in identifying libertarian writings. I left the following suggestion:

A more inclusive approach to representing political ideologies, especially for incorporating libertarian perspectives, could involve transitioning from a traditional left-right continuum to a two-dimensional x/y scatter plot. This model plots beliefs about the government's role in both economic and social freedoms. The vertical axis represents the degree of support for social freedoms, like free speech and gay marriage, while the horizontal axis gauges belief in economic freedom, encompassing aspects like property rights and tax policies. In this framework, editorial positions in the upper left quadrant would be categorized as 'liberal,' signifying strong social freedoms but more economic regulation. Conversely, the lower right quadrant represents 'conservative' views, favoring economic freedom but more social regulation. Significantly, the upper right quadrant highlights 'libertarian' stances, advocating high degrees of both economic and social freedoms.

This revision aims to make the explanation more concise and clear, emphasizing the distinctiveness of each quadrant in relation to libertarianism.
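
Here is a minimal sketch of the quadrant logic described in that suggestion, assuming 0-10 scores and a 5.0 cutoff on each axis; the scales, the cutoff, and the label for the unnamed lower-left quadrant are hypothetical choices of mine, not anything the grader actually implements.

```python
def classify(economic_freedom: float, social_freedom: float) -> str:
    """Map a position on the two-axis grid to a quadrant label.
    Horizontal axis = economic freedom, vertical axis = social freedom,
    both scored 0-10 with 5.0 splitting 'low' from 'high'."""
    econ_high = economic_freedom >= 5.0
    social_high = social_freedom >= 5.0
    if econ_high and social_high:
        return "libertarian"    # upper right: high on both freedoms
    if social_high:
        return "liberal"        # upper left: social freedom, economic regulation
    if econ_high:
        return "conservative"   # lower right: economic freedom, social regulation
    return "statist"            # lower left: not named in the comment above

print(classify(economic_freedom=8.0, social_freedom=7.5))  # -> libertarian
```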

Yann LeCun is no idiot, but he makes an important observation without following the train of thought far enough, perhaps.

So, here's my thinking: does 'how the world works' scale from small physical objects to large organizations in the same way? What would it take for an intelligence, artificial or otherwise, to learn by fumbling experience (trial and error) at the large scales that matter, like managing a corporation? If we train an intelligence at smaller scales, the lessons don't necessarily carry over faithfully; but learning by trial and error at very large scales could be catastrophic, no? Too big to fail and all that? Seems like AI should be trapped in the same place as human executives, bureaucrats, and so on... or am I just completely off base?

Your prediction that “Children learn by sensing and by trying to manipulate objects. I expect that a lot of work in AI going forward will be along these lines” seems very reasonable, at least with respect to object manipulation. Children are born social, however, and their ability to sense, watch, and interact with people would seem impossible for AI to replicate through programming, which suggests some curtailment of AI development in the human-interactions domain.

In the object manipulation domain, for example, a recent article describes how an AI can learn object manipulation:

“Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box.

Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To the robot, each spot where the box could touch any point on the carrier’s fingers, arms, and torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.

Now MIT researchers found a way to simplify this process, known as contact-rich manipulation planning. They use an AI technique called smoothing, which summarizes many contact events into a smaller number of decisions, to enable even a simple algorithm to quickly identify an effective manipulation plan for the robot.”

This AI smoothing depends in part upon the design of feedback loops:

“Reinforcement learning performs smoothing implicitly by trying many contact points and then computing a weighted average of the results. Drawing on this insight, the MIT researchers designed a simple model that performs a similar type of smoothing, enabling it to focus on core robot-object interactions and predict long-term behavior. They showed that this approach could be just as effective as reinforcement learning at generating complex plans.”

https://news.mit.edu/2023/ai-technique-robots-manipulate-objects-whole-bodies-0824
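
As a toy illustration of that "weighted average" idea, the sketch below smooths a discontinuous contact model by averaging it over randomly perturbed contact events; the force model, the noise scale, and the sample count are invented for illustration and are not the MIT planner.

```python
import numpy as np

def contact_force(gap):
    """Toy contact model: zero force until the gripper touches the box,
    then a discontinuous jump -- the kind of function planners choke on."""
    return np.where(gap <= 0.0, 10.0, 0.0)

def smoothed_contact_force(gap, sigma=0.05, n_samples=2000, seed=0):
    """Approximate a smoothed force by averaging the discontinuous model over
    many randomly perturbed contact events (an equal-weight average),
    analogous to the implicit smoothing reinforcement learning performs."""
    rng = np.random.default_rng(seed)
    perturbed_gaps = gap + sigma * rng.standard_normal(n_samples)
    return float(contact_force(perturbed_gaps).mean())

# Near the contact boundary the smoothed model varies gradually instead of
# jumping, so even a simple gradient-based planner can reason about it.
for g in (-0.10, -0.02, 0.0, 0.02, 0.10):
    print(f"gap={g:+.2f}  raw={float(contact_force(g)):5.1f}  "
          f"smoothed={smoothed_contact_force(g):5.2f}")
```

The smoothed force rises gradually from 0 toward 10 as the gap closes, which is the property the quoted passage says makes contact-rich planning tractable.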

Carrying a box up some steps is one thing. The “Self-Supervised Learning” and text interpretation that Yann LeCun mentions seem like a whole different ballgame. In the human-interactions domain, much of the learning that children do in early childhood is crucially dependent upon watching and listening to both family members and other people (see, for example, a recent piece on the shutdowns’ impact on early child development: “Pandemic-associated social isolation may have impacted on the social communication skills in babies born during the pandemic compared with a historical cohort. Babies are resilient and inquisitive by nature, and it is hoped that with societal re-emergence and increase in social circles, their social communication skills will improve.” https://adc.bmj.com/content/108/1/20.abstract ).

This aspect of early childhood development would seem to be a very important building block for adult human text interpretation, a self-supervised learning project that develops and changes over an entire human lifetime. LeCun’s tweets reminded me of Dominic Cummings’ classic substack on reading Tolstoy (https://dominiccummings.substack.com/p/tolstoy ) and the powerful developmental feedback loops he experienced. Cummings recounts his personal history of reading War and Peace:

“I first read War and Peace at the age of about 17-18. I loved it and was captivated by the famous scenes [… …] I’ve re-read it after each big political project I’ve done: in 2005 after being involved in politics for the first time over the euro and the North East referendum, in 2014 after I resigned from the Department for Education, in 2017 after the referendum, and in 2021 after resigning from No10.

Each time I see and feel more of the extraordinary depth that I didn’t see before. [… …] Although my mind has never recovered from 2015-16 — I’m slower, I stumble over words, in many ways I aged years and didn’t recover, then deteriorated further in 2020 — it feels like I understand it better because I’m older, I’m married, I have a child and so on. Primitive emotional pattern-matching feels more important for appreciating at least some parts of great art than ‘cognitive function’.”

I don’t see AI ever being able to acquire a wisdom comparable to that revealed by Cummings or Tolstoy, and acquiring such wisdom would seem to be among the more important goals of self-supervised learning and text interpretation. So, even though if I had a do-over I would study engineering rather than poetry, AI is never going to seem to be much more than a neat trick for getting boxes up stairs faster, rather than something capable of developing the human faculties of understanding.

"...the AI alignment problem is simply a variation on the human alignment problem" - bingo. Hit the nail on the head. And how have we been doing on this human alignment problem for the last 50,000 years?

Moderate Left (at least)
