I'm sure there are people who aren't going to want to hear this, but this is why, throughout history, people have used governments and religious organizations to supply public goods, not individual philanthropists. All governments, even the most totalitarian, must have at least the grudging consent of the governed, providing the feedback you talked about above. Religious organizations generally have the same need or desire to build wide popular support. In the absence of a profit motive or a need for a broad base of support to keep the lights on, what gets built are totalitarian fiefs. The EAs are not proposing some new form of enlightened gifting; they are providing a justification for keeping in place something we can already see is largely not working for society, but does work to give ultra-wealthy donors outsized influence. It probably bought a Presidential election.
Actually, the use of religious and other organizations as charity intermediaries illustrates a solution to the very problem the 'impact markets' proposal is trying to solve, and I think Arnold is somewhat missing the issue because he is not focused on that specific context and difficulty; instead he is looking at a much bigger picture that folds a lot of other distinct issues into one big omelet.
For example, the thing about rich people giving """philanthropically""" to universities as a way of purchasing a certain kind of fame, prestige, and public relations reputation insurance (not to mention the tax benefit), is a really different problem from the one Scott is trying to tackle.
The problem is easy to explain: there are people who have accumulated a lot of wealth and want to spend it on "doing good," "assisting the deserving poor," or "helping good people who had bad luck." But they don't know people personally, and don't have the time or skills to discern who is and isn't worthy of the most subsidy, how best and most fairly to allocate their limited contributions, what makes sense for the community at large, etc. They aren't 'experts,' and they need the help of an 'expert' whom they can trust to use good knowledge, experience, judgment, and character to take the money and do more-or-less the best thing with it.
As an example, a few centuries ago in small villages or shtetls it was quite common (and socially expected and socially rewarded) for older, prosperous men to give such donations to the local priest or rabbi, in just such an expectation: that he was indeed such an expert in 'local charity needs' and, having intimate knowledge of the private details of individuals in the local scene, would be able to dole out the money more-or-less in a manner consistent with the goals stated above. The rabbi would know who was more or less deserving, more or less desperate, more or less likely to benefit from or squander any subsidy, etc. Furthermore, having everybody try to stay on the good side of the priest or rabbi, just in case they ever needed to benefit from such subsidy, was also, on net, a good cultural equilibrium for the community.*
Well, fast forward to today: for a large number of reasons we don't have any such reliable intermediaries, but we still have very rich people with a lot of wealth they want to give away to the 'best' causes. They aren't good at discerning which causes are better or worse, so they need 'expert' help.
Scott has personal experience trying to play the role of such an expert, getting to decide or influence how to allocate a lot of money across a large number and variety of proposals, and he found it really, really hard to do.
And one of the reasons it is really, really hard is that once you leave the context of getting to know people in need personally and intimately and try to scale up to a global or cosmic perspective, you run into an information problem: you have to know things - especially predictions about the future - that turn out to be beyond the grasp of even the best individual experts.
So, as Arnold says, your path starts to fill with difficult obstacles related to uncertainty. And the question is whether one should accept those obstacles as fundamentally insurmountable - or at least too difficult to be worth the effort of overcoming - or whether one should take the optimistic view: that clever market mechanisms can aggregate the missing information from people willing to put real money where their mouths are, and that clever people can harness such 'finance'-esque capacities, techniques, and incentives to somehow meaningfully overcome the uncertainties, raise capital, and allocate scarce resources in the best way.
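To make "aggregating information from people willing to put real money where their mouths are" concrete, here is a minimal sketch of the simplest such mechanism I can think of - a parimutuel-style pool on a binary question - offered purely as an illustration, not as anything the 'impact markets' proposal actually specifies:

```python
# Minimal illustrative sketch (not the actual 'impact markets' design):
# a parimutuel-style pool on a binary question. Each participant stakes
# real money on yes or no, and the pooled estimate is simply the yes-share
# of total money staked. The point is only to show how willingness to bet
# can be aggregated into a single number.

def pooled_probability(stakes):
    """stakes: iterable of (dollars_staked, bets_yes) pairs."""
    yes_total = sum(amount for amount, bets_yes in stakes if bets_yes)
    no_total = sum(amount for amount, bets_yes in stakes if not bets_yes)
    total = yes_total + no_total
    if total == 0:
        return None  # nothing staked, nothing to aggregate
    return yes_total / total

# Three hypothetical bettors on "this grant will hit its stated benchmark":
print(pooled_probability([(500, True), (200, False), (300, True)]))  # 0.8
```

Whether prices from a mechanism like this carry enough information to guide allocation at a global or cosmic scale is, of course, exactly what the rest of this comment doubts.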
I think the problem with this approach to philanthropy is not so much the general problem with contemporary philanthropy (which Arnold points out), but that people are too optimistic about the capacity to solve a problem which is fundamentally intractable: specifically, the fundamental intractability of performing universalist consequentialist calculations without setting cut-offs which cannot be anything but arbitrary in nature. It is an illusion to imagine one can just hand-wave this problem away with market prices, because the best that can be done still isn't good enough.
*I personally do my own charitable giving through just such an intermediary, and for similar reasons. This, however, is not a 'universalist egalitarian' approach that treats the concerns of all living humans equally, and forget about putting any weight on "all future sentient consciousnesses for the rest of time," or whatever. This gets into questions of moral fundamentals, and one of the major problems with that recent "Criticize Us And Win!" EA essay contest was precisely that they framed things in such a way as to put that deeper discussion out of bounds.
I would agree with most of this. Scott Alexander and the EA movement are focused on “solving” problems, because that enhances their status, rather than merely “helping” people.
Think of the comparison between Mother Teresa and Bill Gates. We will always have the poor with us; poverty is not something that will ever be “solved.”
There are plenty of charities out there that do tremendous work with the more humble goal of helping people, and I’d challenge Scott Alexander to spend a week volunteering at one and perhaps it will make his aims for charity more modest.
One need not be humble in goal, and indeed the fundamental meta-problem is the question of which goal and how to decide. But putting that issue to the side: even if one settles on a particular goal, and even if it is particularly ambitious, one should still be humble about one's own cognitive and epistemic limitations, i.e., about whether one can actually derive the answers implied by one's preferred framework with the accuracy and precision required to achieve confidence above randomness - whether one actually has, or can have, a kind of cosmic-scale 'alpha'.
My point is that, in my view, this is simply impossible, owing to "a butterfly flaps its wings," "unintended consequences," and similar considerations about chaotic processes and uncertainty about the future.
Obviously, if one is going to make decisions, one has to try to estimate and weigh the desirability of the consequences in some way if one wants to avoid randomness or dogma. But the scope, scale, and time horizon of such consequences must be narrow enough to lend themselves to such analysis. My view of the EA framework (as I understand it, and I could easily be wrong) is that, as an intellectual or philosophical matter, they have set themselves up to play a game that can't be won.
The Gates Foundation has done great good by focusing on public health. That foundation has probably saved millions of lives.
No doubt. I’m not saying these organizations don’t do good work. But when the mission is to solve world health, you are at risk of introducing a whole slew of different problems. For example, spending millions trying to get to the “right” solution to malaria, curing it or distributing mosquito nets, while not also funding the use of DDT.
The net result may be decidedly more mixed, because helping people can become a secondary consideration to getting the “right” or efficient solution.
‘ All governments, even the most totalitarian, must have at least the grudging consent of the governed providing the feedback you talked about above. Religious organizations generally have the same need or desire to build wide popular support.’
You mean fear, not consent.
Religion frightens with eternal damnation, the government with the threat of imprisonment or violence.
Try not paying your taxes and see what happens.
Once your charitable work tries to expand beyond your own locality, it is far more likely to fail than to succeed. The information problem is unsolvable beyond that level. Donate to your local food banks and goods banks, but do it with actual food and goods, not cash. Contact your local private schools and donate to them specifically to pay the tuition of students by economic class. Do the same with your local colleges and universities. Contact your local medical facilities and offer to pay for necessary medical interventions for people without insurance. Or start your very own food banks, schools, and medical facilities that give away goods and services for free based on demonstrated economic need.
Keep it simple, keep it local.
It seems likely to me that there are many cognitive biases operant when you're trying to figure out what's best for your local community. Very hard to reason objectively about that question, given how high your personal stake in it is. Also, most people these days aren't interested in reading the local news and learning the details of how their community works (it sounds like you may be the exception). Point being, it's not clear to me that going local is likely to work better than going global for the average philanthropist.
Well, I just disagree. A global operation will draw the parasites and rent-seekers, and you won't be able to sort them out because you are completely disconnected from them from beginning to end. I will simply assert that you do better and more effective good directing a billion dollars at a 100K town/county than directing a million or a billion dollars at a world of 7 billion people.
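For scale, the per-capita arithmetic implicit in that comparison, using the commenter's hypothetical figures:

```python
# Back-of-the-envelope per-capita comparison implied by the comment above.
# The figures are the hypothetical amounts mentioned there, not real budgets.
budget = 1_000_000_000               # $1 billion
town_population = 100_000            # a "100K town/county"
world_population = 7_000_000_000     # roughly 7 billion people

print(budget / town_population)      # 10000.0 -> $10,000 per resident
print(budget / world_population)     # ~0.14   -> about 14 cents per person
```

Whether concentration or breadth does more good per dollar is, of course, exactly what the exchange below disputes.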
It's possible that you're right, but it seems to me that at a minimum, we can't assume that local knowledge is superior in the absence of systematic evidence on that question. At best the arguments are tied, I think, and then it seems to me that the tie is broken by the greater bang for your buck you get by spending overseas.
Not sure those biases don’t exist equally at the global level, and in any event the global level probably introduces more of them.
Your average wealthy philanthropist doesn’t get a lot of utility (read: status points) from donating a coat to a local food bank, but gets a ton from donating to the Met or to curing cancer.
The net impact is probably the same at best.
My impression of effective altruism is that it was a good faith and worthwhile attempt to bring some quantitative rigor to philanthropical activities. It has since become something like a fantasy baseball league minus the baseball and the chance of actually winning anything; you just pore over numbers and charts and reports to attempt to outdo the other guys with your limited budget of charity dollars. The payoff is the smug sense of superiority you feel towards those who were so crass and clueless as to just donate money to their alma mater or local food bank or something and call it a day. LOL, don't these people know how many utils they're leaving on the table?
As it happens - not exactly by synchronicity, since the EA folks triggered it with the announcement of their essay contest - Scott has a new meta-level 'defense' (of a sort) of the EA folks and their perspective against the modal criticisms they get, which I think relates directly to the subject of Arnold's lead blog post.
One interesting argument he makes uses the concept of paradigm shifts. Now, to be fair, the EA folks kind of set the reference frame to this particular perspective as if EA was the default consensus paradigm with the very structure of "assume these things and now criticize".
And I think people generally in their orbit have so thoroughly abandoned, and demoted in status, the traditional or classical approaches to moral ideology (not to mention the 19th-century flavors) in favor of the enlightenment-modernist-like attempt to reconstruct a new system from first principles, that this probably feels totally natural and unproblematic to them, like the line about fish not noticing water.
But, to someone with my perspective, it's the whole EA approach which is the 'new paradigm,' and the criticisms it makes of the prior perspectives ring just as unhappy and intellectually unsatisfying in my ears as the anti-EA criticisms ring in Scott's. The big shift to the new moral paradigm is not like the big shifts in the hard sciences, but more like the big shifts in Macroeconomics or Sociology, that is, more a matter of fashion than 'justification'.
The statement:
"Businesses are bad at producing public goods like helping poor people without much money, solving existential risks, and promoting forms of research that can't internalize profits." is apparently true. However, that being true doesn't make other government or philanthropic organizations/institutions competent at helping poor people or "people with problems".
It is very difficult to build an organization (public or private) that can handle individual problems, especially when trying to help individuals who are non-mentorable, whose behavior (culture) can't be changed. Over two generations and more than 70 years, our family has tried to help a lot of "people with problems" who fell through the cracks of our society. Addiction, mental health issues, poor education, and poor cultural values are the main issues.
One-on-one private and family help will be the best you can get, but even that doesn't work a lot of the time. Trying to scale that to large institutional sizes is near impossible. More than once I have bailed out people hitting hard times, when the credit card companies or government taxes got them into deep water with all the "fees" and outrageous interest rates. Without the drag of the parasitic companies they could swim to shallow water, but very few avoided returning to deep water, or avoided screwing me out of a few thousand dollars.
Your thinking about this topic is radical in a very interesting way, and I've learned a lot from putting myself inside your perspective a bit. That said, I really think you're thinking about this topic in a mistaken way. I don't know if a comment on your Substack is going to convince you of that, but here goes.
The overarching position you're taking is that we're never going to be in a position to believe that not-for-profit solutions are better than for-profit solutions. But there's no guarantee that the world needs to be arranged so that this never happens. It's an empirical question--a completely empirical question--whether there might be a handful of easily identifiable situations where nonprofit solutions are best.
I think you would agree with me that the structure of the family is one such case. You don't cut deals with your children or your spouse with a profit motive in mind; you act *altruistically* because you know (especially in the case of the kids) some things you could do with *your* resources that would benefit these people. You are in a position to know this, despite the fact that feeding your kids isn't profitable.
The reason you are in a position to know this is that your kids are so lacking in agency that there is lots of obvious low-hanging fruit where other people can use their resources to benefit the kids.
Now think of people living on $2 a day overseas, or very mentally ill people. Isn't it plausible that there might be lots of low-hanging fruit for philanthropists to help them with, for the exact same reason?
But George Soros and Peter Thiel suck, and they're philanthropists too, you say. Point taken. But we're not going to make philanthropy illegal. So I can't do anything to stop GS and PT. The question that faces me is whether I have means at my disposal to be philanthropic in a way that brings more good into the world than spending that money on further expanding my action figure collection. And I think it's pretty obvious that I know I can do this if I put the money into GiveDirectly.
It is just an empirical question how well for-profit investment works. The world could have been designed the way Paul Ehrlich thought, with extremely scarce non-fungible natural resources. It turned out he was wrong about that, but whether he was wrong about it was an empirical question. There was no law of the universe that said he had to be wrong (which is not to say he wasn't in a position to know that his prediction was mistaken).
I agree that it's rarely wise to bet against the market being able to solve a problem, but sometimes it is wise (like when you're deciding whether to feed your kids) to bet that the market isn't going to just take care of somebody's problem all by itself, without altruistic action on another person's behalf.
If you could provide some concrete comparisons between specific for-profit decisions one could make, on the one hand, and specific highly optimized non-profit actions (like GiveDirectly) on the other hand, that would make your case potentially more convincing. Would be very curious to see something of that sort, which I think would advance the discussion you're trying to have.
It is hubris, but of a very particular kind. Metrics? Yes, of course. At the center is a deterministic belief that, with enough data and observation, all things can be modeled, if only approximately. Perhaps a universal law discovered? Newton’s impact continues to be felt.
What a timely article for me. I’m now semi-retired (probably “forever” because I want to work). My wife and I come from modest means but built a sizable net worth, mostly from the sweat equity side. We have no heirs, and no desire to have our names on a plaque. My California state university alma mater is wining and dining us because we have already donated some to them. But they have gone over-the-cliff woke.
So we have the serious, but enjoyable task to find worthy recipients of our lifetime achievement. It isn’t easy, and it isn’t something we will delegate.
Cold Buttons seems like common sense. Neither party is right on all issues, so why not do what one can (if anything?) to move both toward better positions?
If a public-spirited philanthropist believes (rightly or wrongly) that government provides a reliable safety net and necessary public goods (justice, national defense, infrastructure, etc), then she will give to other, perhaps "taste-based" causes. See Jackie Mason's comedy routine about saving the ballet.
If a public-spirited philanthropist wants to help the weak, the poor, and the vulnerable who "fall through the cracks" of the welfare state, then she must identify those who (a) are truly needy and (b) won't squander the gift. Yancey Ward (comment above) says keep philanthropy "simple and local." Samuel L. Popkin's classic study of village community, "The Rational Peasant: The Political Economy of Rural Society in Vietnam" (1979) finds that villagers allocated charity to needy widows, orphan support, persons unable to work, and the like. Simple and local.
By contrast, many ambitious philanthropists in modern polities want to address large national or global problems. They get entangled with politics. Robin Hanson points out that politics is about status, not policy. Bryan Caplan points out that politics is about what *sounds* good, rather than what *does* good. Effective Altruists point out that innumeracy distorts philanthropy towards what sounds good, rather than what does good. Arnold Kling points out that Effective Altruists put way too much faith in numeracy (metrics). Well-meaning philanthropists have to reckon with the twin pitfalls of social desirability bias and hard utilitarianism, as well as the temptations of status and hubris.
My intuitions:
1. There is room and need for Effective Altruists as *bullshit detectors* in the world of philanthropy.
2. Cognitive humility is key when philanthropy tries to go beyond what is simple and local. (See Arnold and Popkin.)
3. In wealthy countries, everyday beliefs about what *does* good tend to overrate not-for-profit orgs and underrate markets. A wicked problem is to persuade people to bend the stick the other way -- to make the case for markets and the price mechanism. (See Arnold.)
Another point on this, Arnold. Suppose that, as most EA people believe, de-beaking chickens in factory farms is pretty wrong. What's a solution, or even a partial remedy, for this problem that proceeds via profitable investment? Farms that de-beak their chickens are going to be more profitable than ones that don't, for the foreseeable future at least.
"smarter than the voters" is generally unconvincing.
limit oneself to clear cut cases (God help us etc) might help here.
you don't want to interfere in most things.
only in the near 100% cases.
example: yimby. add utilitarian calculations and studies in public policy. more Pareto improvements. etc.
it's difficult to contain oneself. but this is at least less dubious
‘ Profit-seeking investment is driven ultimately by what consumers want. Philanthropy is driven ultimately by what donors want. ’
And unlike profit-seekers (capitalists), philanthropists never risk their own money, always somebody else’s - preferably taxpayers’, but also that of gullible donors with more money than sense.