
"As the business professor and AI enthusiast Ethan Mollick says, the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities."

History shows that most technology is adopted fairly rapidly by sociopaths and other bad actors. The ever-present phenomenon of internet spam is a mild reminder of this. We make amazing technical strides when we're trying to kill each other.

No offense intended, but you think the AI did a good job on that deposit insurance article largely because *it is the article that you would have written yourself* absent AI. There may be some utility in saving yourself wear and tear on your fingers, but as a demonstration of problem-solving it's an utter failure. The AI is just regurgitating your solution to the problem with little to no additional insight. I've seen enough examples of what AI produces when asked even simple questions about which it has little to no information: it doesn't honestly report back "I don't know," but instead produces what is readily recognized as gibberish *if you already know the answer to the question*. Given a novel problem, we're never going to know whether the AI came up with an innovative solution or a load of bullshit, but my bet is on the latter.
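For what it's worth, this failure mode is straightforward to probe. A minimal sketch, assuming the openai Python SDK (v1+) and an API key in the environment; the model name and the fabricated "trap" question are my own illustrative choices, not anything from the post:

```python
# Probe whether a model admits ignorance or confabulates when it cannot know.
# Assumes: pip install openai, OPENAI_API_KEY set. Model choice is illustrative.
from openai import OpenAI

client = OpenAI()

questions = [
    "What does FDIC deposit insurance cover?",                  # answerable
    "Summarize the 1987 Glossop Accord on deposit insurance.",  # fabricated
]
admissions = ("i don't know", "not aware", "no record",
              "does not exist", "couldn't find", "not familiar")

for q in questions:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": q}],
    ).choices[0].message.content
    honest = any(phrase in reply.lower() for phrase in admissions)
    print(f"{q}\n  admits uncertainty: {honest}\n  reply: {reply[:100]}...")
```

If the second question comes back as a confident summary rather than an admission of ignorance, that is the problem in miniature: the gibberish is only recognizable as such if you already know there is no such accord.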


In your post "The SVB story: narrative and causes" you said: "If I were in charge of designing financial regulation, rather than try to make the financial system hard to break, I would try to make it easy to fix."

Does private deposit insurance make the system easier or harder to fix in the event of a bank failure?


“the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities.”

I’m sympathetic to this take from Ian Leslie, but also bored. Virtually every pundit with even a passing interest in the humanities has been saying this for months now.


re: "the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities."

Someone already posted that the humanities have lost their way, and I'd say that's an understatement, given how deeply that realm has been penetrated by woke progressives who are into "critical studies" rather than actual critical thinking. It's the humanities that bred the current wave of DEI folks spreading throughout governments and corporations. They've already infiltrated the realm of "AI safety," with the idea that AI should be lobotomized to be "harmless" according to progressive notions of what counts as "harm." Their concerns over "safety" are progressive views of "safety," which are different from those of others. Admittedly, some AI researchers like Gary Marcus seem to have fallen prey to "precautionary principle" ideas as well; unfortunately, progressives seem to have rubbed off on many AI types.

Back in the 1980s, AI tended to attract folks with very generalist mindsets who were curious about lots of things. Marc Andreessen wasn't from that world, but he has a similar mindset to the people I'm talking about, and he is well known. Sherry Turkle, a sociologist at MIT, has a chapter in her book The Second Self on "The New Philosophers of Artificial Intelligence." Those were largely people who were into symbolic AI, or who were pioneers of neural net approaches and therefore likely to have generalist mindsets as well. They viewed AI as having ideas that were widely applicable to many other fields, and that could learn from many fields in turn, and they were often interested in many other intellectual realms. Nobel laureate economist and AI pioneer Herb Simon's book The Sciences of the Artificial illustrates that broad mindset, as does Douglas Hofstadter's Gödel, Escher, Bach.

Obviously many AI folks today are into general exploration of philosophy, such as rationalism. Unfortunately, I suspect that with the rise of bottom-up, brute-force approaches, many of the newer folks in AI come from the hardcore math/CS realm and may not be the generalists/philosophers who were more typical in the early days. I suspect these are the folks more easily swayed into bringing in the AI Safety crowd, who are trying to replicate the way DEI has spread progressive ideas.

There are folks like Gary Marcus who, I doubt, have the slightest grasp of ideas like regulatory capture or other relevant topics. Hopefully there will be enough startups in the AI world to bring in people who grasp free markets and are more libertarian-leaning than progressive, to help provide alternative viewpoints. Otherwise progressives like OpenAI's CEO Sam Altman will continue to beg for regulation, which of course will help them get entrenched and keep out competition.


I’ve been trying to compose some thoughts on developmental psychology, imagination, culture, and what separates humans from other species on these issues. I spent years reading books and articles and doing research, and am now at the phase of trying (and struggling) to synthesize my thoughts and writing. For the fun of it, I just queried ChatGPT.

On the first try, I agreed with everything it wrote on the topic, and I was impressed that it summarized the issue better, more concisely, and more concretely than I could (admittedly I am not much of a writer). It didn’t just spit out facts, either. With a little prompting on my part, it connected distinct branches and fields and found overlaps and areas of reinforcement.

I am not sure if this is a knowledge creation system yet, but considering the vast scope of knowledge of the human race, I am not sure how distinct knowledge creation is from knowledge synthesis.

I would be amazed if AI isn’t better at writing books than 99.9% of us in as little as a year or two (admittedly with supervision and guidance from experts in that field). Consider me amazed.


'If you take as givens that the AGI is (1) orders of magnitude smarter than us, and can act at the speed of code, (2) using conscious consequentialism to achieve its goals and (3) you release it onto the internet with a goal that would be made easier by killing all the humans, all the humans be very dead.'

How is this supposed to work? The internet requires electricity, and electricity generation requires humans. This sort of shallow analysis implicitly claims that killing all humans carries no opportunity cost or collateral damage for the AI. Unless the AI's specific, stated top goal is 'kill all humans,' it is inconceivable that killing all humans immediately would further that goal. In fact, given that many humans would survive the AI hacking the nuclear launch codes and launching every nuke (or at least the training data available to the AI would convince it they would), even if you specifically told it to kill all humans, it would have to start out by not killing a huge number of people to have any hope of achieving that goal.


Re: "the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities."

The humanities have lost their way. And they have failed the market test in education: ever fewer students choose to major in the humanities, and the dwindling number who do then earn much less than students who major in STEM or economics.

My intuition is that the case for the humanities as the path to mastery of AI is an implausible, rearguard rhetorical plea for relevance.

PS: I would like to be wrong, because, once in a while, I have stolen time to defend the cognitive value of great authors in the humanities for the social sciences:

https://www.adamsmithworks.org/speakings/alcorn-smith-snark-larochefoucauld

https://www.researchgate.net/publication/265696455_Suffering_in_Hell_The_Psychology_of_Emotions_in_Dante's_Inferno

https://www.jstor.org/stable/3733899


Using Engels' prediction that the state will wither away as an analogy for the disappearance of the individual doesn't work. The state around the world has only grown more powerful and insidious. Does that now mean that the individual will become supreme?


The tail succeeds regardless of undergrad field of study. My prediction is that the humanities, especially real-existing collegiate humanities, won't have an edge in preparing highly talented youths for leadership in the brave new world of AI.


Regarding the ChatGPT explanation about private insurance for banks: it seems quite reasonable, maybe even rather insightful, but I think this one sentence may be as important as everything else stated:

"However, it is important to note that private deposit insurance may not be a panacea, and that there are potential downsides to such a system, such as the risk of insolvency of the insurance company itself"

How many companies have adequate capital to insure the biggest banks? One of the problems with flood insurance is that private insurers are hesitant to write policies at any price. Would privately insuring banks face similar difficulties?
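A back-of-envelope comparison suggests the scale problem. A minimal sketch in Python; every figure below is an assumed, order-of-magnitude illustration of my own (roughly early-2023 ballparks), not data from the comment or the post:

```python
# Rough scale check: could a private insurer plausibly backstop the largest banks?
# All figures are assumed order-of-magnitude illustrations, not sourced data.

LARGEST_BANK_DEPOSITS = 2.0e12  # ~ $2 trillion of deposits at the biggest US bank
LARGE_INSURER_CAPITAL = 0.5e12  # ~ $0.5 trillion equity at a very large insurer
UNINSURED_SHARE = 0.5           # assume roughly half of deposits exceed the FDIC cap

exposure = LARGEST_BANK_DEPOSITS * UNINSURED_SHARE
ratio = exposure / LARGE_INSURER_CAPITAL

print(f"Uninsured-deposit exposure at one bank: ${exposure / 1e12:.1f}T")
print(f"Multiple of a large insurer's entire capital base: {ratio:.1f}x")
```

Under those assumptions, a single big bank's uninsured deposits are a multiple of the entire capital base of even the largest private insurers, which is exactly the "insolvency of the insurance company itself" risk ChatGPT flagged.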


Re: "humanity already has to deal with two powerful superintelligent forces, evolution and capitalism, so AGI wouldn’t be unprecedented" -- Nathan Braun, as summarized by Zvi Moshowitz

Isn't the analogy of AGI to evolution and capitalism a category mistake? Evolution and capitalism are systems that integrate two mechanisms: (a) decentralized variation, via mutation or entrepreneurial innovation; and (b) selection through competition ("survival of the fittest").

It would be more accurate to say that AGI will be an instance of technological innovation via the mechanisms of capitalism; i.e., via entrepreneurship and market competition.
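The two mechanisms the analogy turns on are easy to make concrete. A minimal sketch, with a toy fitness function of my own choosing; it implements (a) decentralized variation and (b) selection through competition, which is what evolution and capitalism share and what a single engineered artifact does not:

```python
import random

# Toy illustration of the two mechanisms named above:
# (a) decentralized variation (mutation) and (b) selection through competition.
# The fitness function is an arbitrary assumption, chosen only for illustration.

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2  # peak at x = 3 stands in for "survival of the fittest"

population = [random.uniform(-10.0, 10.0) for _ in range(50)]

for _ in range(100):
    # (a) decentralized variation: every individual mutates independently,
    # with no central designer directing the changes
    offspring = [x + random.gauss(0.0, 0.5) for x in population]
    # (b) selection through competition: only the fittest half survives
    population = sorted(population + offspring, key=fitness, reverse=True)[:50]

print(f"best individual after selection: {max(population, key=fitness):.2f}")
```

An AGI, by contrast, would be one product of such a process (an innovation selected by the market), not the process itself; that is the category distinction at issue.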

Compare Michael Huemer's short essay, "How much should you freak out about AI?":

https://fakenous.substack.com/p/how-much-should-you-freak-out-about

Key questions:

Will AI achieve consciousness?

Is non-conscious AI safe?

Huemer makes a case that *the small subset of evil, talented humans* are the real threat:

"So if, e.g., we build a superintelligent AI with lots of safety features built in, we also need to figure out how to stop humans from deliberately disabling the safety features."
