Feeling Honored
Thanks to Zvi Mowshowitz
This newsletter now has 10,000 subscribers.1 That is more than enough to make me feel honored. And honor matters to me. I learned that by listening to the Coleman Hughes podcast with Arthur Brooks. I originally found it on Free Press.
Brooks asks Hughes which he would most like to have more of than the average person: power, money, pleasure, or honor. They go through a process of elimination. What does Hughes care least about having in excess of the average person? Power. Then what? Money. Then what? Pleasure. So Hughes cares most about honor. He wants to be respected by people who are important to him. I also would have eliminated power first and ended up caring most about honor, but I care more about money than pleasure. Money to me represents security for my wife and family.
What does one do with what is learned from this exercise? In Brooks’ philosophy (which we used to call New Age—at my age it reminds me of Ram Dass), you notice that you are attached to a desire (honor, in my case), so watch out! You might go overboard in pursuit of that, neglecting other things that you know matter, including family time.
Anyway, I am pretty sure that the most recent 50 subscribers, who put this newsletter over the 10,000 mark, came from Zvi Mowshowitz, who wrote,
There is something highly refreshing about the way Kling offers his own consistent old school economic libertarian perspective, takes his time responding to anything, and generally tries to understand everything including developments in AI from that perspective. He knows a few important things and it is good to get reminders of them. He often offers links, the curation of which is good enough that they are worth considering.
I appreciated reading that. It is particularly kind considering that I am not with him on one of his important causes, which is AI safety. My intuition is that the ability of AI to autonomously do great harm to the human race is a long way off. But even more important, my intuition is that the ability of other humans to do great harm to the human race is a bigger danger by a factor of a million. I predict that if we suffer a mass casualty or mass extinction event in the next twenty years, it will come from bad humans (what do I win if that prediction comes true?).
Granted, those bad humans could make use of AI. But then, borrowing from the old gun-rights trope, I would say that if Palantir is restricted, only ISIS will have unrestricted AI. Anyway, The Zvi has put more thought into the issue of AI safety than I have, so don't assume I'm right and he's wrong.
I also feel honored by James Cham’s note.
It is so much fun watching Arnold Kling work in public.
He is referring to my posts on The Social Code. And I would be honored if anyone would spend time with that immersive seminar and provide me with feedback in the comment section here.
1. Only 300 pay. But that is ok with me.
Take a bow, Mr. Kling, as you write with honesty, consistency, and, yes, honor.
Some men never seem to grow old. Always active in thought, always ready to adopt new ideas, they are never chargeable with fogyism. Satisfied, yet ever dissatisfied, settled, yet ever unsettled, they always enjoy the best of what is, and are the first to find the best of what will be.
--William Shakespeare