
"As the business professor and AI enthusiast Ethan Mollick says, the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities."

History shows that most technology is rapidly adopted by sociopaths and other bad actors. The ever-present phenomenon of internet spam is a mild reminder of this. We make amazing technical strides when we're trying to kill each other.

No offense intended, but you think the AI did a good job on that deposit insurance article largely because *it is the article that you would have written yourself* absent AI. There may be some utility in saving yourself wear and tear on your fingers, but as a demonstration of problem-solving it's an utter failure. The AI is just regurgitating your solution to the problem with little to no additional insight. I've seen enough examples of what AI produces when asked even simple questions about which it has little to no information; it doesn't honestly report back "I don't know." It goes off into producing what is readily recognized as gibberish *if you already know the answer to the question*. Given a novel problem, we're never going to know whether the AI came up with an innovative solution or a load of bullshit, but my bet is on the latter.


Given what you write, and I agree with it, a real test of AI would be to quiz it thoroughly, so that it not only provided the "best" answer or policy but defended that policy against serious intellectual challenges.

Does AI have conviction? Does it know what it knows? Or does it know nothing and simply follow an algorithm?

What I gather is that AI is an impersonator. If you ask it to play Thomas Sowell, it will happily oblige. But it will just as happily impersonate Paul Krugman. I suppose with Krugman and with Fauci the AI would need to know the year, and perhaps who was POTUS, to do a successful impersonation.

But AI does not have conviction. It does not know what it knows. It is left to the humans to decide what is true and correct, as it always will be.


Come to think of it, it might be amusing to ask ChatGPT to impersonate a person, then answer questions entirely outside that person's wheelhouse to see what it comes up with.

"You are Doctor Hammer. Please explain the development of early Minoan weaving using insights from Freud." Or "You are Copernicus. Please describe in 500 words or less Taoism."

I wonder if it will pull the person's writing style and overlay it on a Wiki-style article, or search the person's corpus and try to link their words with whatever seems to connect to the topic.
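For anyone who wants to run the experiment, here is a minimal sketch using the OpenAI Python client; the model name, persona, and question are placeholders, not recommendations.

```python
# Minimal sketch of the persona-mismatch experiment described above.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()

def out_of_wheelhouse(persona: str, question: str) -> str:
    """Ask the model to stay in character while answering a question
    far outside that person's expertise."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": f"You are {persona}. Stay in character."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(out_of_wheelhouse(
    "Copernicus",
    "Please describe Taoism in 500 words or less.",
))
```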


"You are Rudy Ray More. In one sentence, summarize the fundamental question of existentialism."

"BITCH, ARE YOU FOR REAL?!"


Skeptics who remain perplexed by the hype surrounding ChatGPT might want to check out Scott Locklin's latest post (rant?) on technology. Trigger warning: it is not charitable to those who disagree.


Appreciate the recommendation. Locklin is a bit salty, but his opinions seem to be supported by experience. Related to what he says, Marginal Revolution just had a post about American civilian aviation being throttled at subsonic speeds. Driving speeds, except in Montgomery County, Maryland (speed-trap mecca), have generally increased over the last 40 years. Passenger jet speeds remain the same. That is some technological stagnation!

I do think Locklin fails to properly acknowledge innovations in materials and medical science. While nanotechnology hasn't realized the futurist vision of miniature robots, discoveries in the science of manipulating atoms and molecules have enabled new and superior products.

But I agree with Locklin's criticism that much of today's "science" is focused on selling "technology," while the engineering that makes the technology practical is left to others who are never named and don't exist.

Now, to be fair, today's electronic devices are incredible innovations that are only possible because of recent (past 20 years) technological developments, so broad criticisms of science are unfair. However, there are scientific areas that invite great exaggeration and unfulfilled promises, and quantum computing and AI are two such areas. I do think it is telling that at the same time we have euphoria over ChatGPT, progress on proving out self-driving vehicles has again stalled.

Direct links to Locklin and Marginal Revolution:

https://scottlocklin.wordpress.com/2023/03/23/how-to-be-a-technology-charlatan/

https://marginalrevolution.com/marginalrevolution/2023/03/end-speed-limits-on-aircraft.html


In your post "The SVB story: narrative and causes" you said: "If I were in charge of designing financial regulation, rather than try to make the financial system hard to break, I would try to make it easy to fix."

Does private insurance make it easier or harder to fix in case of bank failure?


“the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities.”

I’m sympathetic to this take from Ian Leslie, but also bored. Virtually every pundit with even a passing interest in the humanities has been saying this for months now.


Re: "the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities."

Someone already posted that the humanities have lost their way, and I'd say that's an understatement, given how thoroughly that realm has been penetrated by woke progressives who are into "critical studies" rather than actual critical thinking. It's the humanities that bred the current wave of DEI folks spreading throughout governments and corporations. They've already infiltrated the realm of "AI safety," with the idea that AI should be lobotomized to be "harmless" according to progressive notions of what "harm" is. Their concerns over "safety" are progressive views of "safety," which differ from those of others. Admittedly, some AI researchers like Gary Marcus seem to have fallen prey to "precautionary principle" ideas as well; unfortunately, progressives seem to have rubbed off on many AI types.

Back in the 1980s, AI tended to attract folks with very generalist mindsets who were curious about lots of things. Marc Andreessen wasn't from that world, but he is well known and has a similar mindset to the people I'm talking about. Sherry Turkle, a sociologist at MIT, has a chapter in her book The Second Self on "The New Philosophers of Artificial Intelligence." Those were largely people who were into symbolic AI, or who were pioneers of neural net approaches, and therefore likely to have generalist mindsets. They viewed AI as having ideas widely applicable to many other fields, thought it could learn from many fields as well, and were often interested in many other intellectual realms. Nobel laureate economist and AI pioneer Herb Simon's book The Sciences of the Artificial illustrates that broad mindset, as does Douglas Hofstadter's Gödel, Escher, Bach.

Obviously, many AI folks today are into general exploration of philosophy, such as rationalism. Unfortunately, I suspect that with the rise of bottom-up, brute-force approaches, many of the newer folks in AI come from the hardcore math/CS realm and may not be the generalists/philosophers who were more typical in the early days. I suspect these are the folks who were more easily swayed into bringing in the AI safety crowd, which is trying to replicate the way DEI has spread progressive ideas.

There are folks like Gary Marcus who I doubt have the slightest grasp of ideas like regulatory capture or other relevant topics. Hopefully there will be enough startups in the AI world to bring in people who grasp free markets and lean libertarian rather than progressive, to help provide alternative viewpoints. Otherwise, progressives like OpenAI's CEO Sam Altman will continue to beg for regulation, which of course will help them get entrenched and keep out competition.


I’ve been trying to compose some thoughts on developmental psychology, imagination, culture, and what separates humans from other species on these issues. I spent years reading books and articles and doing research, and am now at the phase of trying (struggling) to synthesize my thoughts in writing. For the fun of it, I just queried ChatGPT.

On the first try, I agreed with everything it wrote on the topic, and I was impressed that it summarized the issue better, more concisely, and more concretely than I could (admittedly, I am not much of a writer). It didn’t just spit out facts, either. With a little prompting on my part, it connected distinct branches and fields and found overlaps and areas of reinforcement.

I am not sure if this is a knowledge creation system yet, but considering the vast scope of knowledge of the human race, I am not sure how distinct knowledge creation is from knowledge synthesis.

I would be amazed if AI isn’t better at writing books than 99.9% of us within as little as a year or two (admittedly with supervision and guidance from experts in the relevant field). Consider me amazed.


'If you take as givens that the AGI is (1) orders of magnitude smarter than us, and can act at the speed of code, (2) using conscious consequentialism to achieve its goals and (3) you release it onto the internet with a goal that would be made easier by killing all the humans, all the humans be very dead.'

How is this supposed to work? The internet requires electricity, and electricity generation requires humans. This sort of shallow analysis implicitly claims there is no opportunity cost or collateral damage to the AI from killing all humans. Unless the AI's specific, stated top goal is 'kill all humans,' it is inconceivable that killing all humans immediately would further that goal. In fact, given that many humans would not die from the AI hacking nuclear launch codes and launching all the nukes (or at least the training data available to the AI would convince it of that), even if you specifically told it to kill all humans, it would have to start out by not killing a huge number of people to have any hope of achieving that goal.


Re: "the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities."

The humanities have lost their way. And they have failed the market test in education. Ever fewer students choose to major in the humanities, and the dwindling number who do earn much less than students who major in STEM or economics.

My intuition is that the case for the humanities as the path to mastery of AI is an implausible, rearguard rhetorical plea for relevance.

PS: I would like to be wrong, because, once in a while, I have stolen time to defend the cognitive value of great authors in the humanities for the social sciences:

https://www.adamsmithworks.org/speakings/alcorn-smith-snark-larochefoucauld

https://www.researchgate.net/publication/265696455_Suffering_in_Hell_The_Psychology_of_Emotions_in_Dante's_Inferno

https://www.jstor.org/stable/3733899


I think you are comparing apples and oranges. Or something like that. Let me explain. The fact that STEM grads earn more than humanities grads is about the masses, the average. The statement about who will best use AI is about the tail, a much smaller number of the "best."

If that isn't compelling, maybe this will be, though it is a bit dated: a while back the WSJ reported a study of mid-career earnings of Ivy League grads showing no difference between STEM and humanities majors.


I take your point about the average vs. the tail.

Re: the WSJ finding (if true). The signaling theory of higher education (Michael Spence, Bryan Caplan) holds that the university degree is evidence of a job applicant's intelligence, diligence, and conformity. Given the mismatch between curriculum and career, it would not matter much whether the Ivy League grad majored in STEM or in the humanities. What would matter is the Ivy League selection effect, for intelligence and personality. Similarly, Lauren Rivera, *Pedigree,* makes a case that selective colleges acculturate talented youths to recognize one another and to form a cohesive elite (again, over and above any differences in undergrad field of study).


Yes, and Ivy League grads are also in the tail of the distribution. In that tail, humanities grads have a different outcome than the average humanities grad.


Using Engels' prediction that the state will wither away as an analogy for the disappearance of the individual doesn't work. The state around the world has only grown more powerful and insidious. Does that now mean that the individual will become supreme?


The tail succeeds regardless of undergrad field of study. My prediction is that the humanities, especially real-existing collegiate humanities, won't have an edge in preparing highly talented youths for leadership in the brave new world of AI.


Regarding the ChatGPT explanation of private insurance for banks: it seems quite reasonable, maybe even rather insightful, but I think this one sentence might be as important as everything else stated:

"However, it is important to note that private deposit insurance may not be a panacea, and that there are potential downsides to such a system, such as the risk of insolvency of the insurance company itself"

How many companies have adequate capital to insure the biggest banks? One of the problems with flood insurance is that private insurers are hesitant to insure at any price. Would privately insuring banks face similar difficulties?
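To put rough magnitudes on the capital question, here is a back-of-envelope sketch; the deposit and capital figures below are illustrative round numbers assumed for scale, not actual balance-sheet data.

```python
# Back-of-envelope scale check: could a single private insurer plausibly
# back the deposits of one giant bank? All figures are illustrative
# round numbers, not actual balance-sheet data.
bank_deposits = 2.0e12    # hypothetical megabank deposit base, ~$2 trillion
insured_share = 0.6       # assumed fraction of deposits under the insured cap
insurer_capital = 1.0e11  # hypothetical large insurer's capital, ~$100 billion

exposure = bank_deposits * insured_share
coverage_ratio = insurer_capital / exposure

print(f"Insured exposure: ${exposure / 1e12:.1f} trillion")
print(f"Insurer capital covers {coverage_ratio:.0%} of one bank's insured deposits")
# Even on generous assumptions, one insurer's capital covers only a small
# fraction of a single megabank's insured deposits, which is the worry above.
```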


Re: "humanity already has to deal with two powerful superintelligent forces, evolution and capitalism, so AGI wouldn’t be unprecedented" -- Nathan Braun, as summarized by Zvi Moshowitz

Isn't the analogy of AGI to evolution and capitalism a category mistake? Evolution and capitalism are systems that integrate two mechanisms: (a) decentralized variation, via mutation or entrepreneurial innovation; and (b) selection through competition ("survival of the fittest").

It would be more accurate to say that AGI will be an instance of technological innovation via the mechanisms of capitalism; i.e., via entrepreneurship and market competition.
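To make the two mechanisms concrete, here is a toy sketch of variation plus selection; it illustrates the abstract point only, and is not a model of AGI or of markets.

```python
# Toy illustration of the two mechanisms named above:
# (a) decentralized variation (random mutation) and
# (b) selection through competition (keep only the fittest).
import random

def fitness(bits):
    # Arbitrary stand-in objective: count of 1-bits.
    return sum(bits)

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    # (a) Variation: each individual spawns a randomly mutated copy.
    offspring = []
    for individual in population:
        child = list(individual)
        i = random.randrange(len(child))
        child[i] ^= 1  # flip one random bit
        offspring.append(child)
    # (b) Selection: competition keeps only the fittest half.
    population = sorted(population + offspring, key=fitness, reverse=True)[:30]

print("Best fitness after 50 generations:", fitness(population[0]))
```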

Compare Michael Huemer's short essay, "How much should you freak out about AI?":

https://fakenous.substack.com/p/how-much-should-you-freak-out-about

Key questions:

Will AI achieve consciousness?

Is non-conscious AI safe?

Huemer makes a case that *the small subset of evil, talented humans* is the real threat:

"So if, e.g., we build a superintelligent AI with lots of safety features built in, we also need to figure out how to stop humans from deliberately disabling the safety features."
