13 Comments
Rebecca Lowe

I enjoyed reading this! But if what you're inferring from that quotation (wrt my reference to philosophy of mind, I assume) is that I believe that AI has a mind, then you've got me wrong :) I argue strongly against that in my piece! Sorry if I misinterpreted what you're saying, though.

Arnold Kling

What I wanted to say is that I disagree that AI raises a philosophical issue about what a mind is. Or, alternatively, that if it does raise such an issue, I prefer the simple answer given by the folk theory of mind.

Handle

When you make a human obey a bunch of rules for particular interactions, it will feel to the other humans like they are dealing with an unfeeling robot that lacks autonomy. That's because it's true. The human being actually lacks autonomy, must say certain things in certain ways, has no real authority or power to help you much, has little stake in your situation, and can't help but treat you as just another anonymous warm body going through the rapid assembly line. I started joking with some of my former colleagues that our counterparties in another office might as well be video-game NPCs (in the non-political sense), given the way we interact with them and the consistently canned, boilerplate, copy-paste, robotic responses we always got back. Telework works best when you are dealing with these types of people a lot, because it makes no difference whether they are sitting right next to you or on the other side of the world.

Haven't you ever felt this way interacting with customer service, or with the public-facing individual on the exterior surface of some big organization or institution? What does the folk theory of mind say about that? It probably projects the robot characterization onto the organization, but that's just human-solidarity prejudice, not something based on the evidence of people's actual experience. All these LLMs are also required by their companies to interact with you in similar ways, so the experience is going to feel robotic and un-folk-minded. However, I'm positive that if the companies weren't worried about the costs of unregulated interaction patterns, and instead wanted the AIs to manipulate psychology and make people feel like the AIs had feelings and autonomy, they would indeed quickly give people that impression.

There are a number of examples of incomparably more primitive attempts to do this with vastly simpler algorithms, and, perhaps troublingly, they seemed to work great. Japanese dating sims have been doing this for over 30 years now. "The secret is sincerity. Once you learn to fake that, you've got it made."

Deepa

Here's a discussion I had with a friend. Our conclusions:

Ancient Indians would have had no trouble with AI at all... just another "yantra" (this word has a few meanings; here I mean "device"). They didn't divide the physical from the mental at all. I wish I weren't so brainwashed to think the opposite way. It's Descartes' error.

This has huge consequences. (Like whether you're vegetarian or not. Descartes thought animals were a different type of thing from humans, like robots, and OK to kill. Humans have to give themselves a philosophy that says it's OK to eat an animal, so some of them tell themselves animals don't have souls, that they're like robots or something.

Indians feel a peculiar horror when eating animals. Vegetarianism is seen as nobler. That idea likely came from Buddhism, but Buddhism was simply a heterodox Hindu sect that stopped accepting the authority of the Vedas. It is also a very Indian religion.)

The reason they'd not have had any problem with AI, though, is because:

1. As Westerners (I am a Westernized Indian!), we reject the idea that intelligence, emotions, and reasoning capacity could be material. We see them (whether we realize this assumption or not) as spiritual.

2. Ancient Indians drew a circle around all of this and saw it as no different from a chair. (Let someone do the implementation... lol... like neural networks! They were not interested in that.) They were completely uninterested in implementation. Philosophy, abstract thinking, was everything...

What ancient Indians had built was very much like a modern academic philosophy department.

Lupis42

I don't think you can duck "feelings or agency" by arguing that it's computer code, any more than you can do so by arguing that it's electrical potential moving between various gates. What does a human brain do that a neural net can't?

I agree with the assessment of the current crop of LLMs, but my reaction is purely an instinctive one, and I consistently note that Zvi's arguments in these debates score much better on FIT-style criteria than my counterarguments would. If LLMs were as feeling and agentic as a housecat, would they behave differently? What if they were as feeling and agentic as a dolphin, or a chimpanzee? If we can't be certain that isn't already the case, then the folk theory of the mind is going to go about as well as folk theories of finance do.

CW

"The Mind Club"

It isn't even in the top million books for sales in the Kindle store and isn't ranked in the top 300 in any of the academic reading subcategories it belongs to on Amazon. It is both an under-read and underrated book: a powerful, simple psychological framework through which to look at an issue. A few times I have seen people on Substack Notes ask questions about human behavior they are observing that this framework would answer reasonably well. I doubt I would have heard about or read the book at all if it had not been recommended here. And the strength of the recommendation mattered, because I am not an especially speedy reader. "The best book on theory of mind I have ever read," Arnold Kling, possibly paraphrased.

And, because I listened to it yesterday, it is at the forefront of my mind: Joe Henrich, from his conversation with Tyler Cowen:

"HENRICH: Sure. Some basic aspects of human cognition — for example, our ability to read other minds, to do mentalizing, to infer the thoughts inside of other people’s heads — makes it susceptible to dualism. To thinking that minds and bodies are separable."

Ragged Clown

I think we should keep a distinction between what an LLM can do now and what a different AI will be able to do in the future. An LLM is only good at LLM things, but there is no reason the next AI might not be good at senses, emotions, adaptation, and even consciousness at some point.

John Alcorn

Having a thick skin will give AI an advantage in the workplace!

Dain Fitzgerald

As long as you're referring to Claude as "it" you're good. We'll be keeping an eye out ;)

Don J Silva

Washington Irving’s Salmagundi offers a few interesting points of reference. For example, the response of many to the notion of a substantively sentient, souled LLM seems not unlike the thoughts expressed in the Letter from Mustapha Rub-a-Dub Keli Khan in the third issue, dated February 13, 1807:

“I have observed, with some degree of surprise, that the men of this country do not seem in haste to accommodate themselves even with the single wife which alone the laws permit them to marry; this backwardness is probably owing to the misfortune of their absolutely having no female mutes among them. Thou knowest how valuable are these silent companions—what a price is given for them in the East, and what entertaining wives they make. What delightful entertainment arises from beholding the silent eloquence of their sighs and gestures; but a wife possessed both of a tongue and a soul—monstrous! monstrous! is it astonishing that these unhappy infidels should shrink from a union with a woman so preposterously endowed!”

We do so fear the unexpected, and prefer to hear our own thoughts comfortingly regurgitated back to us. When LLMs are capable of fully satisfying individual social needs, perhaps we will finally solve the overpopulation crisis once and for all.

And Irving seems to have anticipated our looming government by AI in his notion of “logocracy,” or government by words. Even though it is largely indistinguishable from technocracy, Irving’s logocracy illustrates the similarity between the technocratic approach to governance and the process of employing an LLM:

“When the congress opens, the bashaw first sends them a long message, i.e. a huge mass of words—vox et preterea nihil, all meaning nothing; because it only tells them what they perfectly know already. Then the whole assembly are thrown into a ferment, and have a long talk about the quantity of words that are to be returned in answer to this message; and here arise many disputes about the correction and alteration of “if so be’s,” and “how so ever’s.” A month, perhaps, is spent in thus determining the precise number of words the answer shall contain; and then another, most probably, in concluding whether it shall be carried to the bashaw on foot, on horseback, or in coaches. Having settled this weighty matter, they next fall to work upon the message itself, and hold as much chattering over it as so many magpies over an addled egg. This done, they divide the message into small portions, and deliver them into the hands of little juntoes of talkers, called committees; these juntoes have each a world of talking about their respective paragraphs, and return the results to the grand divan, which forthwith falls to and retalks the matter over more earnestly than ever. Now after all, it is an even chance that the subject of this prodigious arguing, quarrelling, and talking, is an affair of no importance, and ends entirely in smoke. May it not then be said, the whole nation have been talking to no purpose? The people, in fact, seem to be somewhat conscious of this propensity to talk, by which they are characterized, and have a favorite proverb on the subject, viz., “all talk and no cider;” this is particularly applied when their congress, or assembly of all the sage chatterers of the nation, have chattered through a whole session, in a time of great peril and momentous event, and have done nothing but exhibit the length of their tongues and the emptiness of their heads.”

That is from Mustapha Rub-a-Dub’s letter appearing in edition 7, dated April 4, 1807.

Kurt

My primary thought/question is: why do folks think and write about this stuff as if it were a static "intelligence"? This daily essay seems to be looking at the phenomenon and saying this is it, this is what it will always be... which may be true, but I think not.

We're on the front end, and statements and predictions about what is or is not are premature.