Sorry to break your admonition, but without weighing in on the drug itself, I say there's a good Hayekian reason to agree that "clinicians who are saving patients using Ivermectin should not have their evidence over-ruled by randomized controlled trials."
An RCT is a mechanism by which society attempts to reach consensus on capital-K Knowledge. But an experienced clinician developing a treatment protocol is applying situational practical knowledge.
If that protocol includes a possibly useless but well-known-to-be-safe drug, then society doesn't need to enforce its consensus on him. It can just get out of the way and let him do his job.
I'm reminded of the simple algorithm Scott Alexander mentions in this piece (https://astralcodexten.substack.com/p/contra-weyl-on-technocracy) for diagnosing psychosis. It outperformed doctors, but doctors insisted that surely their knowledge, layered on top of what the algorithm said, would do better than the algorithm alone. In fact, no: allowing the doctor to overrule the algorithm made things worse. Doctors contributed negative knowledge to the decision-making process. Sometimes doctors don't have special localized knowledge that supersedes scientific conclusions. The reason research exists in the first place is that a collection of anecdotal experiences isn't a substitute for - or even necessarily a complement to - scientific reasoning.
The placebo effect is real, which is why we developed randomized controlled trials. The nocebo effect is also real, which is why witch doctors appear to be able to stick pins in dolls and give you liver problems. The nocebo effect also explains how legal/science experts can demonstrate toxic effects at extremely low concentrations of a chemical from a deep-pocketed source with no plausible biological mechanism. If someone "believes" that chemical X caused his health problem, the "belief" instilled by the lawyer/expert can create a real health problem.
Without randomized controlled trials and without a demonstrable biological mechanism, it is best to assume you are dealing with professional snake oil salesmen -- or with a manifestation of the very nocebo effect they are not studying scientifically with RCTs.
So, I guess aspirin never worked until the mid-1960s, eh?
Uttar Pradesh used Ivermectin - very successfully. If this were an econ "real world" experiment, it would be called proof.
https://pierrekory.substack.com/p/the-miracle-not-heard-around-the
It might be that the RCT results were not quite as universally valid as represented; it might be that something other than Ivermectin is working in North India.
The lack of Covid deaths in Uttar Pradesh using Ivermectin is a very important truth, and many might think it more important than the "non-effect" results of the trials.
Empirical verification.
There is some cross-reactivity with common-cold coronaviruses, and other similar factors can make large groups appear more resistant. These are statistical artifacts when dealing with a complex problem with many unknown variables.
If economists take such a "real world" observation as proof of a hypothesis, their understanding of truth and knowledge in N-dimensional problems, where you don't even know all the dimensions, appears a bit sloppy. Perhaps that is why the social sciences have failed in the last century, while the real sciences have dramatically improved the human condition on this planet from its Hobbesian beginnings.
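To illustrate the artifact with a toy simulation (all numbers invented, no real data): if a hidden variable such as prior cross-reactive immunity differs between regions, a useless drug can look highly effective in a naive regional comparison.

```python
import random

random.seed(0)

def deaths(n, prior_immunity_rate, drug_effect=0.0):
    """Count deaths among n people. Baseline death risk is 1%; a hidden
    variable (prior cross-reactive immunity) cuts risk tenfold, while the
    drug multiplies risk by (1 - drug_effect). All numbers are made up."""
    total = 0
    for _ in range(n):
        risk = 0.01
        if random.random() < prior_immunity_rate:
            risk *= 0.1               # the hidden variable does the work
        risk *= (1.0 - drug_effect)   # true drug effect: zero by default
        if random.random() < risk:
            total += 1
    return total

# Region A distributes the drug AND happens to have high prior immunity;
# Region B does neither. The drug itself does nothing (drug_effect=0.0).
print("drug region:   ", deaths(100_000, prior_immunity_rate=0.8))
print("no-drug region:", deaths(100_000, prior_immunity_rate=0.2))
```

The "drug region" shows far fewer deaths even though the simulated drug effect is exactly zero, which is the sense in which such observations are artifacts rather than proof.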
With respect to Ivermectin: given that the severity of the virus was exaggerated, it is possible that many who were given Ivermectin would have recovered anyway, with the recovery attributed to the drug. RCTs showed inconclusive results. However, millions of people believe homeopathic remedies work, and they feel better. Who really cares how you do it as long as improvement occurs?
Most problems have many dimensions, so we simplify the solution to reduce those dimensions to something tractable - and that simplification produces unsuccessful results.
Also called unintended consequences.
Wherever science and politics intersect, science is bent and corrupted.
Podcast was excellent; raised my view of Weinstein again.
How is a biologist being very critical of an "over processed/over mathed" scientific result due to both empirical evidence from practitioners and serious methodology concerns different from an economist being very critical of an "over processed/over mathed" policy proposal due to common sense axioms and real life business experience?
I'm not in a position to agree/disagree with Bret's takedown of the "Together trial" that he did in an extensive podcast, but it sounds a lot like Arnold railing against the MIT school's economics outputs.
My high school chemistry teacher liked to talk about the distinction between accuracy and precision.
Math is a language of precise description. Use of math substantially increases the odds that researchers are at least talking about the same thing. This is rather valuable in a global research community.
However, a precise description is not necessarily an accurate one. And accuracy is ultimately what matters most.
The big question is, what practices would improve accuracy in the economics community? I don't think getting rid of math is the answer; I believe math is still part of the puzzle, it's just of secondary rather than primary importance.
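To make the distinction concrete, here is a minimal numerical sketch (made-up target and estimates, nothing from the podcast): one set of estimates is precise but inaccurate, the other accurate but imprecise.

```python
import statistics

TRUE_VALUE = 10.0  # the quantity both "research programs" try to estimate

# Precise but inaccurate: tightly clustered around the wrong answer.
precise_biased = [12.01, 11.99, 12.02, 11.98, 12.00]

# Accurate but imprecise: scattered, yet centered on the truth.
noisy_unbiased = [8.5, 11.7, 9.2, 10.9, 9.7]

for name, xs in [("precise/biased", precise_biased),
                 ("noisy/unbiased", noisy_unbiased)]:
    bias = statistics.mean(xs) - TRUE_VALUE   # accuracy: distance from truth
    spread = statistics.stdev(xs)             # precision: scatter of estimates
    print(f"{name:15s} bias={bias:+.2f}  spread={spread:.2f}")
```

Math buys you the small spread; only contact with reality tells you about the bias.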
More prizes and status for the economists who more correctly predict measured macro effects.
Arnold's Fantasy Intellectual Teams idea, and other methods to improve the status of thinkers who are doing good thinking, would help.
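As a hedged sketch of what "more correctly predicting" could mean operationally, here is a Brier score - a standard proper scoring rule for probabilistic forecasts - applied to two hypothetical economists; the names and numbers are invented.

```python
def brier_score(forecasts):
    """Mean squared error between predicted probability and outcome (0 or 1).
    Lower is better; always saying 50% scores exactly 0.25."""
    return sum((p, o) and (p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical probabilistic calls: (predicted probability, what happened).
economist_a = [(0.9, 1), (0.8, 1), (0.3, 0)]   # commits, and is mostly right
economist_b = [(0.5, 1), (0.5, 1), (0.5, 0)]   # never commits to anything

print(f"A: {brier_score(economist_a):.3f}")    # ~0.047
print(f"B: {brier_score(economist_b):.3f}")    # 0.250
```

A prize structure keyed to scores like these would reward economists for being both bold and right, rather than for being quotable.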
This link about Jane Street and quants has lots of insight about math and applied market economics: https://www.thediff.co/p/jane-street
Etienne Gilson made much the same point: “the conclusions of the master are the premises of the disciple.”
Bret seems correct: "Audacity, Tenacity, Veracity" - more important than mental horsepower.
Care about truth.
Willing to say stuff others don't like.
Willing to stick with the puzzle until it is solved.
Steve thinks "the problem with models is that they may be too simple for the processes they are trying to model. I [Arnold] think that is a big problem in economics."
The oversimple model, like Econ 101 (per Michael Lind), is certainly one problem.
But the biggest problem is the goal of economics. Is it to understand? Or to influence, to control, to improve? Or to profit through investments or other actions informed by the economics?
Simple models are likely OK for simplified understanding, but no amount of econ understanding is enough to control thinking humans who constantly choose to be uncontrollable.
“But don’t give up just because he said one thing that you or I think is wrong. And don’t waste your comments on this issue, because it is not central to the podcast.”
I find this approach to learning from people extremely underrated by my generation. For instance, if I try to recommend Zero to One to someone they want to scream about how very bad Thiel’s politics are before they consider even a single piece of insight he has on the topic we’re discussing. I want to scream that of course he thinks differently from conventional wisdom because that’s what the entire book is about! Should I be trying to learn how to found a radically different company from someone who thinks the same as everyone else?
That overly specific example aside, we put all these prerequisites on people having to think X and Y in order to deserve any respect at all, and it's totally antithetical to what true learning is about.
It's really important to listen to those you disagree with, to hear truths you're less likely to hear elsewhere, as well as the untruths (the points where he disagrees with you!).
Randomized Clinical Trials (RCTs) can easily be rigged to produce a desired result, and this was in fact done repeatedly during the pandemic to suppress the use of re-purposed anti-viral agents in order to protect the conditions for Emergency Use Authorization of the vaccines, among which is that there be no alternative therapy available. Under such circumstances the experience of practitioners may be of better evidentiary value than manipulated RCTs. In fact, one RCT published by the prestigious journal The Lancet had to be withdrawn when its database was found to be entirely fraudulent, something that should have been checked before publication but wasn't, due to the urgency of publishing in support of the official Public Health narrative. Besides outright fraud, it is easy to manipulate dosage, timing of treatment, etc., to get an intended negative result, and this was also done. (There were also many RCTs from other countries strongly supporting the use of re-purposed drugs.)
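Setting aside any specific trial, the dosing/timing point is a statistical one that a toy simulation can illustrate: with hypothetical numbers (20% baseline event rate, a made-up "dilution" of the true effect to stand in for late or low dosing), a diluted treatment arm rarely clears a detection threshold that a full-strength arm clears easily.

```python
import random

random.seed(1)

def simulated_power(n_per_arm, true_effect, n_sims=2000):
    """Fraction of simulated two-arm trials whose observed event-rate
    difference exceeds a fixed 5-point threshold. Baseline event rate
    is 20%; the treatment arm's rate is 20% * (1 - true_effect).
    All numbers are hypothetical."""
    hits = 0
    for _ in range(n_sims):
        control = sum(random.random() < 0.20 for _ in range(n_per_arm))
        treated = sum(random.random() < 0.20 * (1 - true_effect)
                      for _ in range(n_per_arm))
        if (control - treated) / n_per_arm > 0.05:
            hits += 1
    return hits / n_sims

# Same drug, same trial size, two designs: adequate dose/timing (full
# effect) vs. a diluted design (modeled as a much smaller true effect).
print(f"full effect:    {simulated_power(400, 0.50):.0%}")
print(f"diluted effect: {simulated_power(400, 0.10):.0%}")
```

The diluted design "honestly" reports a null result even though, in the simulation, the drug genuinely works at full strength.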
By way of contrast, practitioners Dr. George Fareed and Dr. Brian Tyson successfully treated thousands of Covid cases with re-purposed drugs with almost no adverse outcomes from the disease among a highly vulnerable population in their chain of urgent care centers in California's Imperial Valley.
Unlike "social science" journals, Lancet and Science mag, actually catch some junk science getting past reviewers. The "hard" STEM journals outside of the extremely complex life sciences are even better.
My sense, greatly reinforced by the COVID experience, isn't so much that the models are bad as that the data is generally terrible. At best it's a wild guess, but much more commonly the data elements are selected to prove the point at issue.
I tend to think something like this: Solow made his living as a scientist, constructing illustrative models of how the world worked. His successors make their livings as engineers, providing results based on application of his models. But... the results these guys get aren't really used for anything more than political talking points, so the answer is going to be "whatever we want it to be".
I think Patterson is worth listening to, but he also has something of the too-cocky autodidact about him.
He believes that flaws in the foundations of mathematics corrupt everything. Cantor's diagonal argument somehow allows scaremongers to abuse the word "exponential" during Covid. How?
When you dig into the claimed flaws, we find bald assertions. He doesn't believe in infinite sets, and his argument is basically "they are an obvious absurdity, QED". https://steve-patterson.com/infinite-things-do-not-exist/
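For context, the diagonal argument he objects to is short enough to state in full; a standard sketch:

```latex
% Cantor's diagonal argument (standard sketch).
% Suppose the reals in (0,1) could be listed r_1, r_2, r_3, ...
% with decimal expansions r_i = 0.d_{i1} d_{i2} d_{i3} ...
% Build a new number x = 0.x_1 x_2 x_3 ... digit by digit
% (using only 4s and 5s avoids the 0.999... = 1 ambiguity):
\[
  x_n \;=\;
  \begin{cases}
    5 & \text{if } d_{nn} \neq 5,\\
    4 & \text{if } d_{nn} = 5,
  \end{cases}
  \qquad\text{so } x \neq r_n \text{ for every } n.
\]
% x differs from every listed number in at least one digit, so it is
% not on the list, contradicting the assumption that the list was
% complete. Hence (0,1) is uncountable.
```

Whatever one thinks of infinite sets philosophically, the argument itself is a finite piece of reasoning; waving it away as "an obvious absurdity" isn't engaging with it.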
I listened to the Weinstein-Patterson fragment where they discuss quantum mechanics and the crisis of the foundations of mathematics. In my view, Patterson's objections illustrate a profound methodological confusion that arose towards the end of the nineteenth century, when our models of reality - especially in physics - became so good and so extensive that people started to forget that they are models, and discuss them as if they were reality itself. Take infinite sets, which Patterson, following some mathematicians, objects to on philosophical grounds. That makes about as much sense as Aristotelian arguments about form and substance. Objects are models; so are collections of objects, that is, sets. Necessarily informal correspondence rules map reality to these models. We are free to select whatever axioms (what Hellenistic scientists used to call hypotheses) we wish, deduce consequences from them, and see if the resulting structure maps back onto reality in interesting or useful ways. If it does not, it is evidence that the choice of axioms was unfortunate, but it is wrong to call the axioms false because of this. The fact that some models, such as "object" or "cause and effect", together with correspondence rules mapping reality via perception onto these models, are to some extent encoded - machined in by natural selection, as it were - into the structure of our brains, should not obscure their nature as models.
This is very much my own view. I can respect Wildberger and co for starting from a different philosophical position, as long as they have the humility to say so.
But there's an arrogant dogmatism to just declaring it "fallacious" and "antirational" to believe in things like calculus when what you really mean is "my philosophy is too small for such things".
He's on stronger ground attacking the Copenhagen interpretation.
Physics does have a good method for getting empirically correct results called "shut up and calculate". We suspend judgement on what real things those calculations refer to.
But Bohr et al. then insist that there can't be any such real things. There's an arrogant dogmatism to that.
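A minimal sketch of what "shut up and calculate" looks like in practice, with an arbitrarily chosen qubit state: the Born rule turns amplitudes into measurement probabilities without committing to what the amplitudes are.

```python
import math

# Hypothetical qubit state a|0> + b|1>; amplitudes chosen arbitrarily.
a = complex(1 / math.sqrt(2), 0)
b = complex(0, 1 / math.sqrt(2))

# Born rule: the probability of each outcome is the squared magnitude
# of its amplitude. This is the "calculate" part; it stays silent on
# what, if anything, the amplitudes correspond to in reality.
p0 = abs(a) ** 2
p1 = abs(b) ** 2
assert math.isclose(p0 + p1, 1.0)  # a valid state's probabilities sum to 1

print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # 0.50 each
```

The empirical success of that recipe is common ground; the dispute is only over whether one may, must, or must not ask what stands behind it.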
Insisting that there can only be such real things as we find it easy and intuitive to imagine on the basis of everyday experience amounts to an arrogant dogmatism, too.
The CI goes beyond even your caveat of "easy and intuitive to imagine" and asks us to just give up entirely.
But yes, I suspect Patterson and others like him are engaging in a kind of dogmatic realism too.
Give up entirely on what exactly?
For a more careful exploration of these iconoclastic math ideas, see Patterson's source, N. J. Wildberger, on YouTube.
Indeed. These finitist arguments are far from crazy; at the very least they should force mathematicians to carefully consider the underlying philosophy.
Kudos to Patterson for emphasizing philosophy, but I don't see him actually doing it. He just asserts things, or at best gives stupid arguments like his infinite-circle thing.
I may have him confused with Dr. Doron Zeilberger. Sorry. I know nothing about the worth of the ideas of Dr. Zeilberger. Wildberger is worth following.
I think Patterson is indeed a fan of Wildberger, who is a serious mathematician worth listening to.