Links to Consider, 10/2
Rob Henderson on the Dark Triad; Peter Diamandis on exciting breakthroughs; The Zvi on the Nate Silver book; Andrew Chen on bureaucracy
Researchers who study the Dark Triad are interested in grandiose as opposed to vulnerable narcissism.
Grandiosity is associated with low neuroticism, high extraversion, greater levels of energy and happiness, overconfidence, an imperviousness to insults or setbacks, and a greater willingness to self-promote, brag, and name-drop. I say grandiose narcissists are a wolf in wolf’s clothing. You can spot these guys from a mile away. I say guys, because men generally score higher on grandiose narcissism than women.
My main worry about personality psychology is its reliance on self-reporting through surveys. My guess is that once psychologists start applying AI to people’s actual speech patterns and behavior, we will separate the wheat from the chaff in personality psych.
Commenter Scott requested links to some good news for a change. You can always count on Peter Diamandis for that.
What It Is: Elon Musk's Neuralink has achieved a notable milestone: their Blindsight implant received the FDA's "breakthrough device" designation. This chip aims to restore vision by directly stimulating the visual cortex, potentially benefiting those who have lost sight or were born blind. Elon envisions evolving capabilities, from basic visual input to enhanced perception. While ambitious, the technology faces substantial challenges, particularly for congenitally blind individuals. A planned three-patient trial will evaluate its efficacy and safety. This development could revolutionize treatment for certain types of blindness and advance our understanding of neural interfaces.
Why It Matters / What I Think: This is “biblical” in nature—curing blindness. But beyond the obvious uses, I’m excited about a MUCH more exciting future. Imagine being able to use your Blindsight implant to see through the eyes of an Optimus robot on the other side of the planet. Or being able to enhance your vision, seeing the world in infrared, ultraviolet or in zoom-mode, based on special wearables. Just as interesting will be “picture-in-picture” mode on your visual field... Reading your texts and emails, super-imposed on whatever you are watching at the moment. The future is going to be stranger than we can imagine.
I used to say that I read MIT Technology Review to get excited and Cato’s Regulation to get depressed. Now Diamandis is more reliably in the excitement category.
The key is that you accept that risk is part of life and you look to make the most of it, including understanding that sometimes the greatest risk is not taking one.
The Zvi is reviewing Nate Silver’s On The Edge. It sounds like a book that strikes a cultural chord, sort of the way that The Black Swan did. It seems like everyone is reading it. Maybe that means that I have to. Or maybe it means that I don’t have to.
The way I see it, the wrong way to think about risk is as something you have to regulate out of existence. Is that mindset part of what Silver calls The Village? The mindset Silver prefers is the River.
A True Riverian learns to inherently love a correct play and hate a mistake, in all contexts, from all sides that are not their active opponents. They want those good decisions and valuable actions to be rewarded, the bad decisions and destructive actions punished. They want that to be what matters, not who you know or who you are or how you play some political game.
Back in 1978, I took my general exams a semester early. Ken Rogoff remarked, “You’ve probably doubled your chances of failing, but increased your expected utility.” Riverian of him to say, and Riverian of me to get his point. Fortunately, I didn’t fail.
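To make Rogoff’s point concrete, here is a rough sketch of the arithmetic with purely illustrative symbols and numbers of my own (nothing Rogoff or I actually computed): suppose the baseline probability of failing is p, going early doubles it to 2p, passing is worth U_pass, failing costs F, and finishing a semester early adds a bonus B.

```latex
% Illustrative only: p, U_pass, F, and B are hypothetical, not Rogoff's numbers.
\[
\mathrm{EU}_{\text{wait}} = (1-p)\,U_{\text{pass}} - p\,F,
\qquad
\mathrm{EU}_{\text{early}} = (1-2p)\,\bigl(U_{\text{pass}} + B\bigr) - 2p\,F .
\]
\[
\mathrm{EU}_{\text{early}} > \mathrm{EU}_{\text{wait}}
\iff (1-2p)\,B > p\,\bigl(U_{\text{pass}} + F\bigr).
\]
% With a small baseline p and a meaningful bonus B for finishing early,
% doubling the chance of failing can still raise expected utility.
```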
The Zvi writes,
our civilization handled Covid rather badly. Nate identifies correctly one of the two core mistakes, which was that it was a raise-or-fold situation, and we called, trying to muddle through without a plan.
Remember when I said we should try a two-week military-enforced lockdown? That was “raise.” Lots of problems with trying to execute it, and probably “fold” (Sweden) was the best practical approach, but at least I wasn’t supporting muddling through.
If you create an organization where “impact” is measured by how much your team is outputting — and thus, it correlates with the size of your team — then you are going to create a massive incentive to pitch all sorts of large scale projects that require hiring. If people see that other people getting promoted requires them to manage people, so that their responsibilities and scope are vast, rather than the success of their output — well, you are going to create an incentive to hire a ton of folks. If big visible projects (“Project XYZ!”) end up being what’s required to drive internal visibility, and thus promotions, small impactful things will be ignored and big grandstanding projects will end up being encouraged. Committees will be formed for reasons other than building consensus.
This creates the phenomenon of self-replicating bureaucrats:
If winners hire winners, and losers hire losers, what do bureaucrats hire? More bureaucrats of course.
Think of everyone within an organization as having a game-theoretic choice:
cooperate, meaning act in the interest of the organization; or
defect, meaning act in a self-interested way that may go against the interest of the organization
In a small organization (I use the Dunbar number of about 150 people to mark the dividing line), this game is dealt with informally, through direct observation, gossip, and the threat of excommunication. When I try to exercise initiative, everyone can see what I am doing and decide whether to be supportive or throw me out.
In a large organization, informal mechanisms do not work. It’s like a big city, where conflicts of interest are complex and the legal system is necessary for order. In the large organization, my initiative may not be visible to everyone affected until its adverse consequences show up. To minimize both intentional and unintentional acts of defection, a large organization needs formal mechanisms: written rules, well-defined roles and responsibilities, training processes, and formulaic compensation systems.
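To make the cooperate/defect logic a bit more concrete, here is a minimal sketch in Python, assuming invented payoffs and a detection probability that falls off once the organization grows past the Dunbar number (the numbers and function names are my own illustration, not anything from the post):

```python
# Illustrative sketch: expected payoff to a would-be defector when the only
# deterrent is informal detection (observation, gossip, excommunication).
# All numbers are invented for illustration.

def expected_defection_payoff(org_size: int,
                              private_gain: float = 10.0,
                              penalty_if_caught: float = 50.0,
                              dunbar: int = 150) -> float:
    """Expected payoff from defecting, given informal monitoring only.

    Assume everyone can watch everyone else up to roughly the Dunbar number,
    so the chance of being caught falls as the organization grows past it.
    """
    detection_prob = min(1.0, dunbar / org_size)
    return (1 - detection_prob) * private_gain - detection_prob * penalty_if_caught


for size in (50, 150, 1_500, 15_000):
    payoff = expected_defection_payoff(size)
    verdict = "defection deterred" if payoff < 0 else "defection pays"
    print(f"org of {size:>6}: expected defection payoff = {payoff:6.1f} -> {verdict}")

# In the small organization, informal mechanisms make defection a losing bet;
# in the large one, the same mechanisms no longer deter it, which is why
# formal rules, roles, and compensation systems become necessary.
```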
Formal systems are never perfect. They have many failure modes. “Self-replicating bureaucrats” is a common one. But an organization can also have the opposite problem: losses due to weak internal controls; plans not meshing; failure to complete crucial projects, while resources are wasted on projects that should never have been started in the first place.
Middle managers often complain about excess bureaucracy, and sometimes they are right. But sometimes they are wrong, because they do not have enough of an overview of the organization to see all of the costs and risks that their pet ideas would entail.
COVID policies were generally bad policy because they were based on the massive error of getting the actual risk of harm wrong. For example, knowing how low the risk was for young adults, it made no sense to close universities and send students home. And knowing how low the risk was for children, it made no sense to close schools.

The political problem was that, for the panic-stricken media and politicians, an approach to Covid that did not create universal alarm was unacceptable! So everyone had to be inconvenienced and play stupid in order to pretend this would protect grandma.
Most often, muddling through is the best approach; don’t commit yourself until it is clearly necessary, by which time you may have more relevant information.