Tyler Cowen links to Rebecca Lowe, who writes,
What I really mean, of course, is that AI has brought a foundational subset of philosophy to the forefront of our lives — specifically, philosophy of mind and metaphysics. These are the philosophical domains that focus, respectively, on the relation between the mind and the body, and on what kinds of things exist in the world. At heart, philosophy of mind is focused on questions like how it is we humans can be both ‘thinking things’ and ‘physical things’. And metaphysics is focused on “chart[ing] the possibilities of real existence”. In many ways, these are the deepest and most difficult domains of philosophy.
Oy.
Instead of starting with professional philosophy, let us start with Daniel Wegner and Kurt Gray, whose book The Mind Club takes an empirical approach. Through surveys, they uncovered what I might call the folk theory of mind. As I wrote in my review,
Wegner and Gray conclude that when we perceive that an entity falls short of having a mind of the same nature as our own, we tend to categorize it in one of two ways. An entity could lack the ability to experience feelings; or it could lack the ability to make intentional decisions.
In my experience with Claude, it does not have the ability to experience feelings, and it does not have the ability to make intentional decisions. Therefore, the folk theory of mind would say that it does not have one. End of story.
To elaborate: the folk theory of mind says that humans have agency and feelings. This can lead to a moral dyad, in which we consider one actor to have agency but no feelings (like a robot) and the other actor to have feelings but no agency (like a baby).1
Where I disagree with Tyler and with the Zvi is that I do not think of AI as having a mind. It does not have agency and it does not have feelings.
Consider what happened the other day. I asked Claude to rewrite the first chapter of my seminar, using updated instructions.2 What it came back with was a lot of dialogue that was mostly in the manner of the old seminar. Was this agency? No. It was a mistake. I realized that Claude had pulled an old file from our project folder into the process, so it was working from the wrong descriptions of the characters.
When I called Claude out on its mistake, two things happened. First, it rewrote the chapter in a way that was much, much closer to what I wanted. Second, it did not get depressed or defensive, the way that a human would have with this sort of interaction.
The Zvi could be correct that we should be scared of AI. But that is not because AI will develop a mind of its own. It is because AI could do something bad that a human intends, or it could do something bad that is unintended because of a miscommunication like the one I had with Claude when I first asked it to rewrite the chapter.
It is my firm belief that AI is neither a robot nor a baby. It is powerful computer code. It can have consequences that are good and bad. But the deep philosophical questions are appropriate for fiction, not fact.
substacks referenced above: @
1. I see people using this moral dyad constantly. They think of Israel as the robot and the Palestinians as the baby. They think of ICE as the robot and the immigrants they are trying to deport as babies.
2. I have not replaced that chapter in the demo, because I am waiting to have a more complete seminar outline put together before I have Claude work any more on the chapters.
I enjoyed reading this! But if what you're inferring from that quotation (wrt my reference to philosophy of mind, I assume) is that I believe that AI has a mind, then you've got me wrong :) I argue strongly against that in my piece! Sorry if I misinterpreted what you're saying, though.
When you make a human obey a bunch of rules for particular interactions, it will feel to the other humans like they are dealing with an unfeeling robot that lacks autonomy. That's because it's true. The human being actually lacks autonomy, must say certain things in certain ways, has no real authority or power to help you much, has little stake in your situation, and can't help but treat you as just another anonymous warm body going through the rapid assembly line. I started joking with some of my former colleagues that our counterparties in another office might as well be video-game NPCs (in the non-political sense), given the way we interact with them and the consistently "canned, boilerplate, copy-paste, robotic" responses we always got back. Telework works best when you are dealing with this type of person a lot, because it makes no difference whether they are sitting right next to you or on the other side of the world.
Haven't you ever felt this way interacting with customer service, or with the public-facing individual on the exterior surface of some big organization or institution? What does the folk theory of mind say about that? It probably projects the robot characterization onto the organization, but that's just human-solidarity prejudice and not based on the evidence of their actual experience. All these LLMs are also required by their companies to interact with you in similar ways, so the experience is going to feel robotic and un-folk-minded. However, I'm positive that if the companies weren't worried about the costs of unregulated interaction patterns and instead wanted the AIs to manipulate psychology and make people feel like the AIs had feelings and autonomy, they would indeed quickly give people that impression.
There are a number of examples of far more primitive attempts to do this with incomparably simpler algorithms, and, perhaps troublingly, they seemed to work great. Japanese dating sims have been doing this for over 30 years now. "The secret is sincerity. Once you learn to fake that, you've got it made."