Noah Smith reads books on China and talks with Tyler Cowen; Glenn Loury, Ed West, and Robert Wright worry about tribalism; Freddie deBoer reviews Julia Galef
"Loser's consent is a vital component of democracy". True of succession. But civil disobedience may be crucial to counter tyranny of the majority in policy-making, as well as tyranny of experts.
Individual liberties are vital components of constitutional democracy. A wise polity greatly restricts the scope of majority rule and of rule by experts, and second-guesses majorities and experts in rule-making, always by checks and balances, and occasionally by civil disobedience.
Beware paternalistic or authoritarian tyranny of the majority and rule by experts: The minority gets the government that the majority and/or experts think the minority deserves.
Political decentralization is a vital component of constitutional democracy. It enables local policy experimentation. A minority can exit a local tyranny of majority or expert rule by migrating to a more favorable jurisdiction.
What are the optimal scale and optimal scope of majority rule and of rule by experts? These are contested issues that depend partly on changes in technology and social density, complexity, diversity. A presumption of liberty should carry great weight in the debate.
I considered the things done by the government in the last two years to be delegitimizing.
I also don't think "wait for the next election" is a serious reply when we are talking about fundamental rights. Elections are often far away, and they are messy things with lots of issues that often don't resolve particular rights violations.
'Consent' wasn't the best word*, though we all know what West was getting at.
First, it's about election results, not any possible abuse of power by the winners to crush the losers by stomping all over their rights. Reducing the stakes of elections in this regard would help a lot, but alas, we are not headed in that direction.
There is no way to define a precise point at which people would be within their rights to either disobey peacefully or resist violently, but the thresholds should be high or you get a failed state where a functional and harmonious civil society is impossible. A lot of people ignored covid rules, but then again, a lot of states ignored them back.
The point is that losers of elections need to suck it up, accept the situation stoically and without protest like mature adults, and encourage others to do so. For the sake of the greater good, they should do this *even when* they suspect that vote fraud flipped the election result**.
Maybe acquiescence, resignation, endurance, tolerance - that kind of thing. "You can grumble, but this fight's over, take your lumps, save it for the next fight, and move on."
*Then again, the public understanding of the meaning of 'consent' is changing with various modifiers that have been introduced for sexual contexts. It used to connote voluntary agreement or willing acceptance, but now it is treated as insufficient even as a minimum, since it allowed for mistakes due to unexpressed internal rejections. So now we need "affirmative, enthusiastic, continuous, and verbal" consent, to be distinguished from the legally classical, 'mere' consent.
**You could make an exception for really extreme cases of truly huge amounts of fraud and slam-dunk, smoking gun, red-handed evidence of the sort that is so brazen, immediately obvious, and egregious that most courts wouldn't hesitate to take the case and judge in your favor. But while there is and has always been a margin of fraud in US elections (it used to be much, much worse) it is unlikely to get that bad in the near-term precisely because there is still a strong incentive to avoid those kinds of judgments. And there just wasn't anything like that in favor of either Gore or Trump.
Despite being incredibly heterodox himself, Taleb enforces one of the tightest and most arbitrarily enforced orthodoxies I've ever seen around his own set of views. If you deviate from his perspective one iota on certain topics (GMOs, IQ, Covid) he becomes absolutely unglued.
"the Democratic Party will want to try to arrest this development and instead encourage ethnic identity politics", as they've been doing for at least a decade.
The children's books of Kendi and slogans such as BLM actively discourage non-racial politics and encourage an extremely race-centric view of politics. Kling speaks about this as some future direction the Democrats might explore, but that has been the Democrats' dominant platform and messaging.
I'm still reasonably optimistic for the future. But any good outcomes won't be due to good behavior or intentions on the part of Democrats.
"Freddie reviews Julia. Self-recommending, as Tyler would say."
Eh, maybe one of those cases when a self-recommendation is bad advice.
Seemed to me as if Freddie had reluctantly accepted a crappy assignment to write the only review out there that would be both prominent and even mildly critical.
And then he really couldn't think of anything critical to say, so he phoned it in and praised with faint-damning and petty complaints buried at least halfway in about 'tone' and the need for some sharper editing, and about the 'execution style', and then rambled on about tangents unrelated to the book's contents, I guess for the purpose of 'contextualization', but who knows.
He made a good (albeit trivial) point about her overdoing it to 'prove' how 'soldiery' our mindsets tend to be in terms of characterizing too many of the metaphors we tend to use for ideas and arguments as 'martial'. But then he fails to go the final inch and point out the obvious which is that metaphor usage doesn't prove anything*. Disagreement is rivalrous, so people will naturally use the language of rivalry to describe it. So what? There is a whole line of cynical and abusive accusation out there making too much of metaphors, in which your use of metaphors is dangerous violence whereas our use of the same metaphors is totally innocent and just how normal people talk.
But again, the metaphor thing is totally minor and since not essential to anything else, not even really a criticism of Galef or the book at all.
I've read the book and there is nothing objectionable or controversial, which is why she gets invited to talk to all sorts of people. People may learn new things, but they don't have to believe different things. Indictments are general and balanced, we are all sinners, after all. It doesn't ask anyone to change a currently-held strong opinion, just to 'try to be better'. And there's nothing bad to say about it, which is why no one - not even Freddie who kinda tried - says anything bad about it.
Nobody, except me.
And I have two objections.
My first objection is that I believe Galef seriously underrates the value - even necessity - of argument for truth-discovery and knowledge generation. I think this is because she - and her audience - have spent a lot of their lives on the internet, or consuming media, or paying attention to politics, and those are the areas where things are the worst. With very rare exceptions, the internet is just not a place where productive, high quality debate happens.
Maybe had she spent more time in trial courts or in markets, she might think, "You know, it's good that both sides had an opportunity to make their case, and to poke holes in the other side's case, like soldiers, but with rules of the game," or, "it's better when businesses face open competition than if they can find excuses to get the other guy shut down."
And second, I object to its very unobjectionability.
Because, as David Brooks will be happy to tell you, what always happens when an author appeals to unobjectionable virtues, e.g., "try to overcome bias and be a fair judge", and creates terms that substitute for good and bad with stark contrast and zero moral ambiguity, is that it just creates a language the negative side of which becomes immediately corrupted and weaponized by partisans.
Because no one even applies these terms to themselves or to their own side, they end up using these weapons not to become better thinkers or be open-minded and expose themselves to and engage with different ideas, but *to psychologize* the explanation for the very existence of opposition.
This always ends up just providing an excuse to dismiss, dispose of, and suppress anything the opposition says, because the cynical grunts of mere soldiers are not just suspect but fundamentally *illegitimate and thus unworthy* of consideration, being mere manifestations of reptile-brained uncivil impulses and tribal warrior instincts, or some kind of barely intellectualized rationalization for genetically-determined moral foundations or arbitrary preferences, or whatever.
So many ways to say the same thing, which is that we think good, they don't, and what they say is at best nonsense, and at worst, an existential danger. So, you don't have to listen to those guys, they don't deserve a 'platform' or a 'signal boost', and you can and should safely ignore them, and also, as you know, we are totally in the right in silencing their harmful, evil idiocy.
Like Galef, I've been on the internet a lot too. I've seen 'debate' go bad - very, very bad, you have no idea how bad - but I've also seen plenty of the above as well, over and over, and at the level of the elite commentators that are all giving Galef interviews. When they think about themselves, they think they are already scouts, and like everyone can resolve to try to be more athletically fit, they can all resolve to try to be better scouts too, but it's on the margin. Maybe they recognize that some of their friends are letting themselves go and are out of shape, and so have a little further to go on the road to fitness, but they're still more or less 'healthy'. On the other hand, when they think of their opponents, soldiers one and all. So unfit, it's a wonder they're even alive.
My point is, while Galef is full of good intentions, that's how the road to hell is paved, and a book about better thinking which just ends up helping these people further rationalize their bad behavior is counterproductive. No one is going to change how they think, they are just going to use her terms to change the words they use to dunk on people who think differently. This always happens. If you give people boo-words for bad-thinkers, this is what they do. It's not her fault, but that's how it is.
My position is that things have gone way, way too far in that direction, and what would be more helpful are books and terms that encourage these same elite commentators to raise the status of opposition (ETA: AND TO GET OFF TWITTER, which is an ocean of scout-killing poison, bad for them, and everybody).
Terms like 'good sportsmanship' or 'fair play' would be better to emphasize the point that there is nothing wrong with arguing - on the contrary, we should argue, and sometimes we must, to get to the truth - but that there is a civil and honorable way to go about it, and also dishonorable ways, and that one should try one's best to be virtuous and responsible.
*I will accept it proves something if someone can show me how frequency of use of the metaphors correlates with being better soldiers or worse scouts or whatever.
One good question is what specific features distinguish, say, trial courts, such that argument there remains relatively productive and within bounds. Perhaps it is reputation mechanisms - lawyers and other participants knowing they may end up on the opposite side of the bar if not next week, then in 10 years when they advance their careers? This doesn't seem to describe typical career paths of public intellectuals: they specialize. On this hypothesis, courtroom argument norms should be worse off where participants are more specialized.
We need to be able to trust experts again. The only way we get there is with institutions that ensure expert trustworthiness. We had a number of these, but nothing lasts forever, and most don't work anymore. People tried to break them, and they did.
So we need new institutions to perform this function, ones that are harder to break. And a good place to start to learn about how it can be done is to see which institutions still do this well and weren't broken. The general answer is to force people to put 'skin in the game'. If they say something false, then they are likely to pay a steep price.
Sure, when one has the ability and time to gather and assess information for oneself, it helps to be good at overcoming bias and being a fair judge. But that's pretty rare, and also, people are often terrible at being fair judges of their own fairness and judiciousness.
So, there is just no alternative for most lay people, most of the time, to learning from and deferring to people who know more. This is the instinctive, common sense approach, and a lot of important things rely on this common sense working for common people.
For this to work, experts need to be trustworthy. Which many, alas, are not.
Lay people are now often justified in distrusting experts, because, it has turned out many times recently, that many purported experts do not actually know what they're talking about and/or have abused the trust lay people placed in them in order to feed them lies and serve ulterior motives.
So, we need ways to make experts better and more trustworthy, and to help lay people identify when someone is actually a good and trustworthy expert who is safe to trust.
This is an "epistemic-security problem", very analogous to cyber-security problems in the technological context, and somewhat analogous to immune system problems in the biological context, in which we can catch infections from other people.
The problems of how to know who can be trusted when acquiring information from a social network are similar to the problems which arise regarding how to know which systems and messages can be trusted in a digital network. In particular, there is the chain-of-trust issue. If I rely on X to tell me I can trust Y, then how do I know if I can trust X?
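To make the chain-of-trust regress concrete, here is a minimal sketch (all the party names and the vouching table are invented for illustration): trust in Y is only as good as trust in X, and the chain proves nothing unless it terminates in a root you trust by fiat, which is exactly how certificate-style systems handle it.

```python
# Minimal sketch of the chain-of-trust regress: trust in a party is derived
# from a voucher, whose trust is derived in turn, and the chain only settles
# anything if it terminates at a root trusted by fiat. Names are hypothetical.

VOUCHES_FOR = {
    "Y": "X",      # X tells me I can trust Y...
    "X": "W",      # ...but who tells me I can trust X?
    "W": "root",   # the chain has to bottom out somewhere
}

TRUST_ANCHORS = {"root"}  # trusted axiomatically, not by derivation


def is_trusted(party, seen=None):
    """Walk the voucher chain; trust holds only if it reaches an anchor."""
    seen = set() if seen is None else seen
    if party in TRUST_ANCHORS:
        return True
    if party in seen or party not in VOUCHES_FOR:
        return False  # circular vouching or a dangling chain proves nothing
    return is_trusted(VOUCHES_FOR[party], seen | {party})
```

Here `is_trusted("Y")` walks Y → X → W → root and succeeds, while a party with no chain to an anchor, or a circular one, fails. The lay-person's epistemic problem is that he has no agreed-upon "root" to terminate the regress.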
In the cyber realm, a large number of the world's most intelligent people have spent a huge amount of time trying to figure out how to do this in a general, practical, economical, and reliable way. They are motivated by circumstances in which the stakes are incredibly high, including the protection of secrets crucial to national security, and commercial contexts with billions - perhaps trillions - of dollars on the line.
Nevertheless, as anyone can see from all the reports of recent hacking events, one cannot escape the conclusion that the cyber problem has not been 'solved' and that if not altogether impossible, it must be incredibly hard to solve, at least, if one is trying to do it by tweaking typical systems and software. The trouble is, there's a lot to be gained from hacking, and as hard as you try to prevent it, someone is trying just as hard to overcome your effort, in an endless arms race.
Life is also an endless arms race. In the biological context, mother nature - who is even better at this than the most intelligent humans - has been doing her very best over literally billions of years to solve the issue in the wet, biochemical context with immune systems. The stakes are as high as they get: literally life and death for individuals, and even extinction for entire species. Unfortunately, she plays both sides, and she has also been doing her very best to defeat those solutions, which is why we keep getting sick with new infections.
Still, a lot of approaches to security and immunity which have been refined over a long time work ok a lot of the time and for a while until a new hack is discovered. When that happens, until there is a fix, everyone is vulnerable if exposed to the wider world and not isolating, and so just asking to get immediately pwned. But - and this is key - even after a fix is discovered, there is still no way to ever go back to using unfixed systems. You can't just 'reset' to a previous condition or try to 'clean' malware out of your system. There is no going backward.
As good as they may have been, they are irreversibly *ruined forever*. This is really psychologically hard to accept, especially for people who have a lot invested in those systems, but that's the way it is, and there's no use denying it. The susceptible systems will always have that known failure mode and be vulnerable to being compromised and exploited in that manner, and if you ever try to use them again, you are just setting yourself up to get hacked again in exactly the same way. The mature thing to do is accept reality, accept your losses, accept the necessity and cost of fixing things, and move on.
For the social way we get information, the epistemic security problem is also very, very hard. In the past, we used a variety of approaches, including reputation systems like prestige and credentials, and rigorous procedures and protocols for weighing claims in special contexts or venues, all backed up by broader cultural approaches to increasing the trustworthiness of individuals in general, leading to higher-trust social equilibria. If you travel around, it's really clear that some places are much better at this than others.
But, for us, sadly, all that has broken down beyond repair in most areas. What we were doing is no longer good enough, in fact, not really good at all. When you see areas using such approaches that are still producing impressive results, the reason is not because of those approaches, but often despite them, because those areas get to benefit from some supplementary advantage.
Sometimes you get lucky and people working in some area tend to be of good character and internally motivated to do good work, and they are left alone to be that way without getting corrupted, because the field is not seen as politically consequential and thus there are no active efforts by bad actors to hack it to smithereens.
But outside of that rare and precious circumstance, you need discipline. Sometimes that discipline is imposed by reality, for instance, your bridges had better not collapse, or people aren't going to pay you to make bridges.
Of course you want experts to be "disciplined by the truth", but that just raises the question of how to tell what that truth is, which is what you needed expert help to do in the first place.
I confess that I am really abusing these terms, but to me, the question of truth for a lay-person outside his area of competency is like an "EXPTIME-complete problem" that would take him forever to figure out. The only answer for a lay person is to force experts to submit themselves to mechanisms which provide something like an 'NP-complete' easy-to-verify way for the lay-person to do this. So you need institutions which do "EXP-to-NP conversion". (The math and compsci people are going to kill me for that one, and they have justice on their side, but I think the terms, misused as they are, still help one to see what I'm getting at.)
It is usually very hard for a lay-person to solve problems or discern the truth for himself. But it is very easy for a lay-person to observe or determine whether or not an expert has won or lost a bet, and if he has an impressive track record, which are indicia of reliability. A lay person can't design a good bridge or tell a good design from a bad one, but he *can* easily tell if some guy's bridges stay up or collapse. Liability is a legal institution that relies on this EXP-to-NP problem conversion that "keeps bridge experts honest".
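As a toy illustration of the easy-to-verify side, assuming made-up bet records: scoring a public betting track record is trivial arithmetic for a lay person, even when evaluating the underlying claims directly would be hopelessly beyond him.

```python
# A lay person can't evaluate an expert's claims directly, but checking a
# public bet record is trivial arithmetic. Records here are invented for
# illustration: (stated probability, whether the event happened).

def brier_score(record):
    """Mean squared error of stated probabilities vs. outcomes; lower is better."""
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in record) / len(record)

calibrated_expert = [(0.9, True), (0.8, True), (0.2, False), (0.7, True)]
blowhard = [(0.95, False), (0.9, False), (0.99, True), (0.9, False)]

# The confident-but-wrong record scores far worse than the calibrated one.
assert brier_score(calibrated_expert) < brier_score(blowhard)
```

The point of the analogy: computing this score is a cheap, mechanical verification, whereas forming an independent judgment on each underlying question is the intractable problem the lay person was trying to avoid in the first place.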
But discipline also comes from facing serious competition, like in games, sports, and, usually, war. And in the realm of ideas, one epistemic security measure that still works tolerably successfully to supplement the above approaches is ordered adversarialism, for example, as used in a common law trial. People can make a wide variety of claims, but a motivated and skilled opponent will also get a turn to speak and a fair chance to show why those claims are false.
Again, it is a hard problem for a lay-person looking over, say, a single investigator's report, to know whether they can trust the claims and conclusions which they have no way to challenge, verify, or scrutinize. It is a much easier problem when the hard part is outsourced, so that a skilled professional can do those things for them, and when that person is motivated to detect errors, point them out, and explain why those claims are wrong.
So, we need to be able to trust experts again, and because the current ones are broken and can't be salvaged, we need to replace our broken epistemic security systems with new, better ones. Those new systems will have to reliably discipline experts by holding them accountable to the truth, in a way that has a mechanism to convert that hard problem into a simpler problem of claims which are easy for lay-people to verify.
The way this is done is by norms rejecting the claims of any expert who is unwilling to accept accountability by exposing themselves to risk for false claims and putting their skin in the game. That can be done via some test against reality, making big public bets, or in open, fair competition with motivated opponents according to the time-tested rules and processes of our traditional adversarial institutions. Maybe there are other or better ways to get it done, but I doubt it. Where things are broken, we should be patching them with these fixes with utmost urgency.
"in open, fair competition with motivated opponents according to the time-tested rules and processes of our traditional adversarial institutions."
Alas, while these traditional adversarial institutions once deserved trust that they would apply time-tested rules and processes, they (esp. academia and the legacy media) no longer deserve such trust.
So, the layman must wing it, in assessing each expert's track record.
It's not a good question. It's a great question. An *excellent* question!
It is *the* question we should be investigating in order to implement the insights gained thereby.
It's much more important than the question of how to be better thinker, at least, when presented as if that is actually a superior alternative to trial-like debate. Let me explain.
Let's say there are two general "epistemic security system" categories: individual and institutional. You need both. We can encourage and teach people how to be better on an individual level, and we should.
But also, "no man is an island" and most people pick up ideas socially in the "roundabout production of knowledge". It's not feasible for most people to think through most things on their own, or even to figure out who can be trusted, so they need structural supports that are hard to pwn.
Additionally, "If men were angels, no government would be necessary." Men face all kinds of temptations to act (and think) badly, so you need to balance those temptations by policing bad behavior with stronger counter-incentives, and this is a primary function of institutions, which also need to be hard to pwn - a lot harder than they proved to be lately.
To me, Galef's book is like noticing people can be selfish and greedy which causes there to be a lot of fraud out there, thus implicitly lowering the status of transactions as if they should all be inherently suspect, and then just encouraging people to be more selfless and generous.
But that's throwing the baby out with the bathwater. Deal with the problem of fraud, sure, but greed and selfishness are the human condition, and the greater good - including a much more effective way to help the poor - is better served when we channel those impulses towards pro-social ends: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our own necessities, but of their advantages."
In the law, you see how the individual and institutional approaches work as essential complements to each other. There are two 'soldiers' who are making the best possible case for their side within rules of the game which have been refined over millennia, which means they can't shut the other side up or play personally harmful dirty tricks against them, but must focus on the arguments. They know that if they say something that is easy to prove false, the other side is going to notice and pounce on it and get a fair opportunity in exclusive command of the decision-maker's attention to show it's wrong. That keeps people on their toes, and maintains the level of quality and rigor a lot higher than it would otherwise be. You play a dumb move in chess, the other guy turns it against you, so you are playing as well as you can, with every move you make. You could try to flip the board each turn and play against yourself, and perhaps there are supermen who can do that, but for us normal mortals, it never works nearly as well, and you absolutely need that other guy there to be your best.
Those are institutional, structural forces. On the individual level, you hope the decision-maker is well-trained and deeply experienced in "thinking good" so he can "overcome bias and be a fair judge" when presented with these rival cases.
Now, it's always kind of petty to go after someone's use of a metaphor too much. "It's just a metaphor, it's not supposed to be some exact parallel in every detail!" I get it. Still, I'm going to indulge in a little pettiness and say that Galef gets 'soldier' wrong here.
What she is actually trying to get at is the "riskless, costless trash-talking" that is 99.9% of twitter. We get 'fan' from 'fanatic' and you can be a bad fan of a sports team, and the bad fans of each side yell crass stupidities at each other all the time and drown everybody else out because it's fun for them and nothing happens so why not. But bad fans are not the actual players on the field, who reliably pursue excellence to the limit of their capacity because if they slip up their opponents will take advantage and crush them.
Remember how, because covid, there were excellent, high-quality games in empty stadiums with zero noise or trash-talking? That's the baby without the bathwater.
Soldiers are not like bad-fan trash-talkers, *because there are real risks and enormous costs to fighting*. The status quo for soldiers is not constant insults and baiting but silent restraint and staying in the barracks. Even mutually-tense nations not named Russia tend to stay quiet and avoid saying things or making moves with their armed forces that might provoke the other side and escalate to the point of a war they did not want. And then, when war comes, it isn't trivial "owning the libs" or whatever, but actually murdering as many of the people on the other side as possible, as quickly as possible, and maybe getting yourself and all your friends murdered in the process.
So, unlike the trash-talking bad-fans on twitter, soldiers don't attack every possible thing the other guys say, or defend every possible thing their own guys say, because they can't. They can't attack places where they are overwhelmingly outnumbered, because they are just going to get themselves killed. Likewise, they rarely even try to defend the indefensible, so they 'stipulate'. Pleading guilty is a form of surrender.
Competition is good. It's good in sports, it's good for countries, it's good for businesses, it's good for law, and it's good for ideas too. The status of opposition and idea-competition has fallen way, way too low because of the trash-talkers, and it needs to go back up, and figuring out how to arrange and use adversarialism in our institutions to complement individual good-thinking is key to crawling out of this terrible hole we're in.
Now, as to your question about what makes law work like that, it's a combination of a few things which would take a while to explain, so I'll do that some other time. Or maybe ask Galef to write the complementary follow-up book. But to answer your speculation in the negative, none of them is specialization.
Most trial lawyers end up quite specialized these days. My impression is that, if anything, that trend has been intensifying for a long time, due to economic reasons (economies of scale leading to centralization of firms / big-law, and also to carve out niches), technological reasons (the way people search for lawyers on the internet for their particular issue, also 'LegalZoom'), and also because of the difficulty of remaining competent up to minimum professional standards in the face of the increasing complexification of law.
The scenario of a new lawyer (or a slightly older one who didn't survive the night of long knives after seven years at a firm) hanging up a shingle in some under-served place who then starts handling a wide variety of cases is practically extinct.
Similar logic applies to law professors too. Legal and academic professionals don't want to be fungible thus interchangeable thus dispensable, so they need to differentiate, and they want to be seen as *the* expert / go-to-guy about some particular issue. This is much less important for ordinary general-level doctors or dentists who more or less all know and do the same things and successfully restrict their numbers to make a good living out of it.
A more likely feature is that a judge can rule a line of questioning to be Out Of Order, knowing that, if s/he drops the ball on this, s/he can be overturned and, in rare cases, chastised by the appellate level.
"Loser's consent is a vital component of democracy". True of succession. But civil disobedience may be crucial to counter tyranny of the majority in policy-making, as well as tyranny of experts.
Individual liberties are vital components of constitutional democracy. A wise polity greatly restricts the scope of majority rule and of rule by experts, and second-guesses majorities and experts in rule-making, always by checks and balances, and occasionally by civil disobedience.
Beware paternalistic or authoritarian tyranny of the majority and rule by experts: The minority gets the government that the majority and/or experts think the minority deserves.
Political decentralization is a vital component of constitutional democracy. It enables local policy experimentation. A minority can exit a local tyranny of majority or expert rule by migrating to a more favorable jurisdiction.
What are the optimal scale and optimal scope of majority rule and of rule by experts? These are contested issues that depend partly on changes in technology and social density, complexity, diversity. A presumption of liberty should carry great weight in the debate.
I considered the things done by the government in the last two years to be delegitimizing.
I also don't think "wait for the next election" is a serious reply when we are talking about fundamental rights. Elections are often far away, and they are messy things with lots of issues that often don't resolve particular rights violations.
'Consent' wasn't the best word*, though we all know what West was getting at.
First, it's about election results, not any possible abuse of power by the winners to crush the losers by stomping all over their rights. Reducing the stakes of elections in this regard would help a lot, but alas, we are not headed in that direction.
There is no way to define a precise point at which people would be within their rights to either disobey peacefully or resist violently, but the thresholds should be high or you get a failed-state where functional and harmonious civil society is totally impossible. A lot of people ignored covid rules, but then again, a lot of states ignored them back.
The point is that losers of elections need to suck it up, accept the situation stoically and without protest like mature adults, and encourage others to do so. For the sake of the greater good, they should do this *even when* they suspect that vote fraud flipped the election result**.
Maybe acquiescence, resignation, endurance, tolerance - that kind of thing. "You can grumble, but this fight's over, take your lumps, save it for the next fight, and move on."
*Then again, the public understanding of the meaning of 'consent' is changing with various modifiers that have been introduced for sexual contexts. It used to connote voluntary agreement or willing acceptance, but now it is insufficient even as a minimum since it allowed for mistakes due to unexpressed internal rejections. So now we need "affirmative, enthusiastic, continuous, and verbal" consent, to be distinguished from the legally classical, 'mere' consent.
**You could make an exception for really extreme cases of truly huge amounts of fraud and slam-dunk, smoking gun, red-handed evidence of the sort that is so brazen, immediately obvious, and egregious that most courts wouldn't hesitate to take the case and judge in your favor. But while there is and has always been a margin of fraud in US elections (it used to be much, much worse) it is unlikely to get that bad in the near-term precisely because there is still a strong incentive to avoid those kinds of judgments. And there just wasn't anything like that in favor of either Gore or Trump.
"there is still a strong incentive to avoid those kind of judgments."
Really? Where?
To the contrary, the MSM specializes in demonizing all legit questions about 2020.
To raise any questions, no matter how reasonable, gets one caricatured as akin to QAnonists.
And not just by the MSM, but by "world-class" intellectuals of the rank of Pinker, see https://richardhanania.substack.com/p/rationality-requires-incentives-an .
Despite being incredibly heterodox himself, Taleb enforces one of the tightest and most arbitrary orthodoxies I've ever seen around his own set of views. If you deviate from his perspective one iota on certain topics (GMOs, IQ, Covid), he becomes absolutely unglued.
"the Democratic Party will want to try to arrest this development and instead encourage ethnic identity politics", as they've been doing for at least a decade.
This is not a bug but a feature.
Kendi's children's books and slogans such as BLM actively discourage non-racial politics and encourage an extremely race-centric view of politics. Kling speaks about this as some future direction the Democrats might explore, but that has been the Democrats' dominant platform and messaging.
I'm still reasonably optimistic for the future. But any good outcomes won't be due to good behavior or intentions on the part of Democrats.
"Freddie reviews Julia. Self-recommending, as Tyler would say."
Eh, maybe one of those cases when a self-recommendation is bad advice.
Seemed to me as if Freddie had reluctantly accepted a crappy assignment to write the only review out there that would be both prominent and even mildly critical.
And then he really couldn't think of anything critical to say, so he phoned it in and praised with faint damning: petty complaints, buried at least halfway in, about 'tone', the need for some sharper editing, and the 'execution style'. Then he rambled on about tangents unrelated to the book's contents, I guess for the purpose of 'contextualization', but who knows.
He made a good (albeit trivial) point about her overdoing it to 'prove' how 'soldiery' our mindsets tend to be, in terms of characterizing too many of the metaphors we tend to use for ideas and arguments as 'martial'. But then he fails to go the final inch and point out the obvious, which is that metaphor usage doesn't prove anything*. Disagreement is rivalrous, so people will naturally use the language of rivalry to describe it. So what? There is a whole line of cynical and abusive accusation out there making too much of metaphors, in which your use of metaphors is dangerous violence whereas our use of the same metaphors is totally innocent and just how normal people talk.
But again, the metaphor thing is totally minor and since not essential to anything else, not even really a criticism of Galef or the book at all.
I've read the book and there is nothing objectionable or controversial in it, which is why she gets invited to talk to all sorts of people. People may learn new things, but they don't have to believe different things. Indictments are general and balanced; we are all sinners, after all. It doesn't ask anyone to change a currently-held strong opinion, just to 'try to be better'. And there's nothing bad to say about it, which is why no one - not even Freddie, who kinda tried - says anything bad about it.
Nobody, except me.
And I have two objections.
My first objection is that I believe Galef seriously underrates the value - even necessity - of argument for truth-discovery and knowledge generation. I think this is because she - and her audience - have spent a lot of their lives on the internet, or consuming media, or paying attention to politics, and those are the areas where things are the worst. With very rare exceptions, the internet is just not a place where productive, high-quality debate happens.
Maybe had she spent more time in trial courts or in markets, she might think, "You know, it's good that both sides had an opportunity to make their case, and to poke holes in the other side's case, like soldiers, but with rules of the game," or, "it's better when businesses face open competition than if they can find excuses to get the other guy shut down."
And second, I object to its very unobjectionability.
Because, as David Brooks will be happy to tell you, what always happens when an author appeals to unobjectionable virtues, e.g., "try to overcome bias and be a fair judge", and creates terms that substitute for good and bad with stark contrast and zero moral ambiguity, is that it just creates a language the negative side of which becomes immediately corrupted and weaponized by partisans.
Because no one ever applies these terms to themselves or to their own side, they end up using these weapons not to become better thinkers or to be open-minded and expose themselves to and engage with different ideas, but *to psychologize* the explanation for the very existence of opposition.
This always ends up just providing an excuse to dismiss, dispose of, and suppress anything the opposition says, because the cynical grunts of mere soldiers are not just suspect but fundamentally *illegitimate and thus unworthy* of consideration, being mere manifestations of reptile-brained uncivil impulses and tribal warrior instincts, or some kind of barely intellectualized rationalization for genetically-determined moral foundations or arbitrary preferences, or whatever.
So many ways to say the same thing, which is that we think good, they don't, and what they say is at best nonsense, and at worst, an existential danger. So, you don't have to listen to those guys, they don't deserve a 'platform' or a 'signal boost', and you can and should safely ignore them, and also, as you know, we are totally in the right in silencing their harmful, evil idiocy.
Like Galef, I've been on the internet a lot too. I've seen 'debate' go bad - very, very bad, you have no idea how bad - but I've also seen plenty of the above as well, over and over, and at the level of the elite commentators that are all giving Galef interviews. When they think about themselves, they think they are already scouts, and like everyone can resolve to try to be more athletically fit, they can all resolve to try to be better scouts too, but it's on the margin. Maybe they recognize that some of their friends are letting themselves go and are out of shape, and so have a little further to go on the road to fitness, but they're still more or less 'healthy'. On the other hand, when they think of their opponents, soldiers one and all. So unfit, it's a wonder they're even alive.
My point is, while Galef is full of good intentions, that's how the road to hell is paved, and a book about better thinking which just ends up helping these people further rationalize their bad behavior is counterproductive. No one is going to change how they think, they are just going to use her terms to change the words they use to dunk on people who think differently. This always happens. If you give people boo-words for bad-thinkers, this is what they do. It's not her fault, but that's how it is.
My position is that things have gone way, way too far in that direction, and what would be more helpful are books and terms that encourage these same elite commentators to raise the status of opposition (ETA: AND TO GET OFF TWITTER, which is an ocean of scout-killing poison, bad for them, and everybody).
Terms like 'good sportsmanship' or 'fair play' would be better to emphasize the point that there is nothing wrong with arguing, on the contrary, we should and sometimes we must to get to the truth, but that there is a civil and honorable way to go about it, and also dishonorable ways, and that one should try one's best to be virtuous and responsible.
*I will accept it proves something if someone can show me how frequency of use of the metaphors correlates with being better soldiers or worse scouts or whatever.
One good question is what specific features distinguish, say, trial courts, such that argument there remains relatively productive and within bounds. Perhaps it is reputation mechanisms - lawyers and other participants knowing they may end up on the opposite side of the bar if not next week, then in 10 years when they advance their careers? This doesn't seem to describe typical career paths of public intellectuals: they specialize. On this hypothesis, courtroom argument norms should be worse off where participants are more specialized.
As a brief follow up to my earlier response:
A succinct way to put my argument:
We need to be able to trust experts again. The only way we get there is with institutions that ensure expert trustworthiness. We had a number of these, but nothing lasts forever, and most don't work anymore. People tried to break them, and they did.
So we need new institutions to perform this function, ones that are harder to break. And a good place to start to learn about how it can be done is to see which institutions still do this well and weren't broken. The general answer is to force people to put 'skin in the game'. If they say something false, then they are likely to pay a steep price.
Sure, when one has the ability and time to gather and assess information for oneself, it helps to be good at overcoming bias and being a fair judge. But that's pretty rare, and also, people are often terrible at being fair judges of their own fairness and judiciousness.
So, there is just no alternative for most lay people, most of the time, to learning from and deferring to people who know more. This is the instinctive, common sense approach, and a lot of important things rely on this common sense working for common people.
For this to work, experts need to be trustworthy. Which many, alas, are not.
Lay people are now often justified in distrusting experts because, as it has turned out many times recently, many purported experts do not actually know what they're talking about and/or have abused the trust lay people placed in them in order to feed them lies and serve ulterior motives.
So, we need ways to make experts better and more trustworthy, and to help lay people identify when someone is actually a good and trustworthy expert who is safe to trust.
This is an "epistemic-security problem", very analogous to cyber-security problems in the technological context, and somewhat analogous to immune system problems in the biological context, in which we can catch infections from other people.
The problems of how to know who can be trusted when acquiring information from a social network are similar to the problems which arise regarding how to know which systems and messages can be trusted in a digital network. In particular, there is the chain-of-trust issue. If I rely on X to tell me I can trust Y, then how do I know if I can trust X?
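The chain-of-trust regress can be made concrete with a toy sketch (all names here are hypothetical, and this is a deliberately simplified model, not how real PKI validation works): trust is only ever derived from vouching, so every chain has to bottom out in a "root" you accept axiomatically, and mutual vouching among untrusted parties proves nothing.

```python
# Toy chain-of-trust check: a party is trusted only if some chain of
# vouching leads back to a root that is trusted on faith.

def is_trusted(party, vouches, roots, seen=None):
    """Return True if `party` is reachable from a trust root.

    `vouches` maps each party to the set of parties who vouch for it.
    `roots` are the parties trusted axiomatically (the regress stopper).
    """
    if seen is None:
        seen = set()
    if party in roots:
        return True
    if party in seen:  # cycle: X and Y vouching for each other proves nothing
        return False
    seen.add(party)
    return any(is_trusted(v, vouches, roots, seen)
               for v in vouches.get(party, ()))

vouches = {"Y": {"X"}, "X": {"RootCA"}, "Z": {"Y"}, "Q": {"Q2"}, "Q2": {"Q"}}
roots = {"RootCA"}

print(is_trusted("Z", vouches, roots))  # True: Z <- Y <- X <- RootCA
print(is_trusted("Q", vouches, roots))  # False: Q and Q2 only vouch for each other
```

Note what the sketch makes vivid: the answer for "Q" is not that Q is a liar, just that no chain connects him to anything you already trust, and the whole scheme stands or falls on the unexamined trustworthiness of the roots.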
In the cyber realm, a large number of the world's most intelligent people have spent a huge amount of time trying to figure out how to do this in a general, practical, economical, and reliable way. They are motivated by circumstances in which the stakes are incredibly high, including the protection of secrets crucial to national security, and in commercial contexts with billions - perhaps trillions - of dollars on the line.
Nevertheless, as anyone can see from all the reports of recent hacking events, one cannot escape the conclusion that the cyber problem has not been 'solved' and that if not altogether impossible, it must be incredibly hard to solve, at least, if one is trying to do it by tweaking typical systems and software. The trouble is, there's a lot to be gained from hacking, and as hard as you try to prevent it, someone is trying just as hard to overcome your effort, in an endless arms race.
Life is also an endless arms race. In the biological context, mother nature - who is even better at this than the most intelligent humans - has been doing her very best over literally billions of years to solve the issue in the wet, biochemical context with immune systems. The stakes are as high as they get: literally life and death for individuals, and even extinction for entire species. Unfortunately, she plays both sides, and she has also been doing her very best to defeat those solutions, which is why we keep getting sick with new infections.
Still, a lot of approaches to security and immunity which have been refined over a long time work ok a lot of the time and for a while until a new hack is discovered. When that happens, until there is a fix, everyone is vulnerable if exposed to the wider world and not isolating, and so just asking to get immediately pwned. But - and this is key - even after a fix is discovered, there is still no way to ever go back to using unfixed systems. You can't just 'reset' to a previous condition or try to 'clean' malware out of your system. There is no going backward.
As good as they may have been, they are irreversibly *ruined forever*. This is really psychologically hard to accept, especially for people who have a lot invested in those systems, but that's the way it is, and there's no use denying it. The susceptible systems will always have that known failure mode and be vulnerable to being compromised and exploited in that manner, and if you ever try to use them again, you are just setting yourself up to get hacked again in exactly the same way. The mature thing to do is accept reality, accept your losses, accept the necessity and cost of fixing things, and move on.
For the social way we get information, the epistemic security problem is also very, very hard. In the past, we used a variety of approaches, including reputation systems like prestige and credentials, and rigorous procedures and protocols for weighing claims in special contexts or venues, all backed up by broader cultural approaches to increasing the trustworthiness of individuals in general, leading to higher-trust social equilibria. If you travel around, it's really clear that some places are much better at this than others.
But, for us, sadly, all that has broken down beyond repair in most areas. What we were doing is no longer good enough, in fact, not really good at all. When you see areas using such approaches that are still producing impressive results, the reason is not because of those approaches, but often despite them, because those areas get to benefit from some supplementary advantage.
Sometimes you get lucky and people working in some area tend to be of good character and internally motivated to do good work, and they are left alone to be that way without getting corrupted, because the field is not seen as politically consequential and thus there are no active efforts by bad actors to hack it to smithereens.
But outside of that rare and precious circumstance, you need discipline. Sometimes that discipline is imposed by reality, for instance, your bridges had better not collapse, or people aren't going to pay you to make bridges.
Of course you want experts to be "disciplined by the truth", but that just raises the question of how to tell what that truth is, which is what you needed expert help to do in the first place.
I confess that I am really abusing these terms, but to me, the question of truth for a lay-person outside his area of competency is like an "EXPTIME-complete problem" that would take him forever to figure out. The only answer for a lay-person is to force experts to submit themselves to mechanisms which provide something like an 'NP-complete' easy-to-verify way for the lay-person to do this. So you need institutions which do "EXP-to-NP conversion". (The math and compsci people are going to kill me for that one, and they have justice on their side, but I think the terms, misused as they are, still help one to see what I'm getting at.)
It is usually very hard for a lay-person to solve problems or discern the truth for himself. But it is very easy for a lay-person to observe or determine whether or not an expert has won or lost a bet, and if he has an impressive track record, which are indicia of reliability. A lay person can't design a good bridge or tell a good design from a bad one, but he *can* easily tell if some guy's bridges stay up or collapse. Liability is a legal institution that relies on this EXP-to-NP problem conversion that "keeps bridge experts honest".
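To make the "easy to verify" half concrete, here is a minimal sketch of scoring a public track record of probabilistic bets. Making good forecasts is the hard part; checking the record is trivial arithmetic any lay-person (or script) can do. The Brier score is just the mean squared gap between the stated probability and what actually happened (0 is perfect, 0.25 is what blind coin-flipping earns); the two "experts" and their records are made up for illustration.

```python
# Toy "EXP-to-NP conversion": forecasting is hard, but scoring a public
# track record of probabilistic predictions is easy to verify.

def brier_score(predictions):
    """predictions: list of (stated_probability, outcome), outcome 0 or 1.

    Lower is better: 0.0 is a perfect record, 0.25 matches always
    guessing 50/50.
    """
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Hypothetical records: a careful expert vs. a confident pundit who is
# usually certain and usually wrong.
careful_expert = [(0.9, 1), (0.2, 0), (0.7, 1), (0.6, 0)]
confident_pundit = [(0.99, 0), (0.95, 1), (0.9, 0), (0.99, 0)]

print(round(brier_score(careful_expert), 3))    # 0.125
print(round(brier_score(confident_pundit), 3))  # 0.693
```

The point of the sketch is the asymmetry: the lay-person never has to evaluate the substance of any single forecast, only the cheap-to-check aggregate, which is exactly the kind of verification the bridge example relies on.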
But discipline also comes from facing serious competition, like in games, sports, and, usually, war. And in the realm of ideas, one epistemic security measure that still works tolerably successfully to supplement the above approaches is ordered adversarialism, for example, as used in a common law trial. People can make a wide variety of claims, but a motivated and skilled opponent will also get a turn to speak and a fair chance to show why those claims are false.
Again, it is a hard problem for a lay-person looking over, say, a single investigator's report to know whether they can trust claims and conclusions which they have no way to challenge, verify, or scrutinize. It is a much easier problem when the hard part is outsourced: a skilled professional can do those things for them, and that person is motivated to detect errors, point them out, and explain why those claims are wrong.
So, we need to be able to trust experts again, and because the current epistemic security systems are broken and can't be salvaged, we need to replace them with new, better ones. Those new systems will have to reliably discipline experts by holding them accountable to the truth, in a way that converts that hard problem into a simpler one: claims which are easy for lay-people to verify.
The way this is done is by norms rejecting the claims of any expert who is unwilling to accept accountability by exposing themselves to risk for false claims and putting their skin in the game. That can be done via some test against reality, making big public bets, or in open, fair competition with motivated opponents according to the time-tested rules and processes of our traditional adversarial institutions. Maybe there are other or better ways to get it done, but I doubt it. Where things are broken, we should be applying these fixes with the utmost urgency.
"in open, fair competition with motivated opponents according to the time-tested rules and processes of our traditional adversarial institutions."
Alas, while these traditional adversarial institutions once deserved trust that they would apply time-tested rules and processes, they (esp. academia and the legacy media) no longer deserve such trust.
So, the layman must wing it, in assessing each expert's track record.
Without access to a large range of web sites, the layman's job here would be all-but impossible.
It's not a good question. It's a great question. An *excellent* question!
It is *the* question we should be investigating in order to implement the insights gained thereby.
It's much more important than the question of how to be a better thinker, at least when the latter is presented as if it were actually a superior alternative to trial-like debate. Let me explain.
Let's say there are two general "epistemic security system" categories: individual and institutional. You need both. We can encourage and teach people how to be better on an individual level, and we should.
But also, "no man is an island," and most people pick up ideas socially in the "roundabout production of knowledge". It's not feasible for most people to think through most things on their own, or even to figure out who can be trusted, so they need structural supports that are hard to pwn.
Additionally, "If men were angels, they wouldn't need government." Men face all kinds of temptations to act (and think) badly, so you need to balance those incentives by policing bad behavior with stronger counter-incentives, and this is a primary function of institutions, which also need to be hard to pwn, a lot harder than they proved to be lately.
To me, Galef's book is like noticing that people can be selfish and greedy, which causes there to be a lot of fraud out there, thus implicitly lowering the status of transactions as if they should all be inherently suspect, and then just encouraging people to be more selfless and generous.
But that's throwing the baby out with the bathwater. Deal with the problem of fraud, sure, but greed and selfishness are the human condition, and the greater good (including a much more effective way to help the poor) is better served when we channel those impulses towards pro-social ends: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our own necessities, but of their advantages."
In the law, you see how the individual and institutional approaches work as essential complements to each other. There are two 'soldiers' who are making the best possible case for their side within rules of the game which have been refined over millennia, which means they can't shut the other side up or play personally harmful dirty tricks against them, but must focus on the arguments. They know that if they say something that is easy to prove false, the other side is going to notice and pounce on it and get a fair opportunity, in exclusive command of the decision-maker's attention, to show it's wrong. That keeps people on their toes, and maintains the level of quality and rigor a lot higher than it would otherwise be. You play a dumb move in chess, the other guy turns it against you, so you are playing as well as you can, with every move you make. You could try to flip the board each turn and play against yourself, and perhaps there are supermen who can do that, but for us normal mortals, it never works nearly as well, and you absolutely need that other guy there to be your best.
Those are institutional, structural forces. On the individual level, you hope the decision-maker is well-trained and deeply experienced in "thinking good" so he can "overcome bias and be a fair judge" when presented with these rival cases.
Now, it's always kind of petty to go after someone's use of a metaphor too much. "It's just a metaphor, it's not supposed to be some exact parallel in every detail!" I get it. Still, I'm going to indulge in a little pettiness and say that Galef gets 'soldier' wrong here.
What she is actually trying to get at is the "riskless, costless trash-talking" that is 99.9% of twitter. We get 'fan' from 'fanatic' and you can be a bad fan of a sports team, and the bad fans of each side yell crass stupidities at each other all the time and drown everybody else out because it's fun for them and nothing happens so why not. But bad fans are not the actual players on the field, who reliably pursue excellence to the limit of their capacity because if they slip up their opponents will take advantage and crush them.
Remember how, because covid, there were excellent, high-quality games in empty stadiums with zero noise or trash-talking? That's the baby without the bathwater.
Soldiers are not like bad-fan trash-talkers, *because there are real risks and enormous costs to fighting*. The status quo for soldiers is not constant insults and baiting but silent restraint and staying in the barracks. Even mutually-tense nations not named Russia tend to stay quiet and avoid saying things or making moves with their armed forces that might provoke the other side and escalate to the point of a war they did not want. And then, when war comes, it isn't trivial "owning the libs" or whatever, but actually murdering as many of the people on the other side as possible, as quickly as possible, and maybe getting yourself and all your friends murdered in the process.
So, unlike the trash-talking bad-fans on twitter, soldiers don't attack every possible thing the other guys say, or defend every possible thing their own guys say, because they can't. They can't attack places where they are overwhelmingly outnumbered, because they are just going to get themselves killed. Likewise, they rarely even try to defend the indefensible, so they 'stipulate'. Pleading guilty is a form of surrender.
Competition is good. It's good in sports, it's good for countries, it's good for businesses, it's good for law, and it's good for ideas too. The status of opposition and idea-competition has fallen way, way too low because of the trash-talkers, and it needs to go back up, and figuring out how to arrange and use adversarialism in our institutions to complement individual good-thinking is key to crawling out of this terrible hole we're in.
Now, as to your question about what makes law work like that, it's a combination of a few things which would take a while to explain, so I'll do that some other time. Or maybe ask Galef to write the complementary follow-up book. But to answer your speculation in the negative, none of them is specialization.
Most trial lawyers end up quite specialized these days. My impression is that, if anything, that trend has been intensifying for a long time, due to economic reasons (economies of scale leading to centralization of firms / big-law, and also to carve out niches), technological reasons (the way people search for lawyers on the internet for their particular issue, also 'LegalZoom'), and also because of the difficulty of remaining competent up to minimum professional standards in the face of the increasing complexification of law.
The scenario of a new lawyer (or a slightly older one who didn't survive the night of long knives after seven years at a firm) hanging up a shingle in some under-served place who then starts handling a wide variety of cases is practically extinct.
Similar logic applies to law professors too. Legal and academic professionals don't want to be fungible thus interchangeable thus dispensable, so they need to differentiate, and they want to be seen as *the* expert / go-to-guy about some particular issue. This is much less important for ordinary general-level doctors or dentists who more or less all know and do the same things and successfully restrict their numbers to make a good living out of it.
A more likely feature is that a judge can rule a line of questioning to be Out Of Order, knowing that, if s/he drops the ball on this, s/he can be overturned and, in rare cases, chastised at the appellate level.
I was happy to see Jane Coaston get a bigger stage at the NYTimes. I think she is a pretty heterodox black (biracial) woman thinker.
One of the nice things about Omicron is that once leftists get it and realize it's a nothingburger, at least some of them chill out a little bit.