Sociology, Links, and ChatGPT
Below I recapitulate the discussion with Alex Tabarrok about ChatGPT. Going forward, expect more writing on those three topics, less on other topics.
Lately, I have been posting “Links to Consider” more frequently. In case you never click on those posts, you should know that I often add my own thoughts as comments to those links, so you are missing out on my thoughts as well as links that I find interesting. I hope that these posts are popular.
My main project is what I call “concepts regarding human interdependence.” This series of posts includes some evolutionary psychology, some cultural anthropology, some social psychology, some economics, some political theory, and some of what in business school is called “organizational behavior.”
I would prefer to just call this “sociology,” because it is what sociology might look like if it had not been captured by Marxists. But academic sociology is what it is. I hope that these posts are popular, also.
For now, I think I can add some value to the discussion of ChatGPT. I think it’s a topic that deserves attention, as I explained in People Who Should Learn to Use ChatGPT. I do not aim to become a ChatGPT expert or guru. I leave that task to others. And others can attempt to be a clearinghouse for news about ChatGPT. I will write about it once a week or so for as long as I think I have useful things to say. I will include links to others’ writings about ChatGPT as part of my “Links to Consider.”
Speaking of ChatGPT, Jaron Lanier writes,
My attitude is that there is no AI. What is called AI is a mystification, behind which there is the reality of a new kind of social collaboration facilitated by computers. A new way to mash up our writing and art.
I agree with that characterization.
Back in the 1990s, Jaron was one of the people who looked at the Internet through 1960s-hippie-colored glasses. I was fond of that crowd, which included Howard Rheingold, Chris Locke, Stewart Brand, and John Perry Barlow, among others. I would put Tim Berners-Lee in that camp.
But since then, I’ve moved on. I am surprised and not pleased by the emergence of dominant corporations and the success of walled gardens on the Internet. But with my economics background I am more willing than Jaron to let go of the hippie sensibility.
There is a lot more in Jaron’s essay to like, including this concern:
Soon, you won’t know if anything you read, or any image or video clip you see, came from a real person, a real camera, or anything real at all. It will become cheaper to show fakes than to show reality. A fake will only require that you enter a sentence asking for it, while reality will demand showing up with a camera.
The bottom line is that I think ChatGPT (and by that I mean “ChatGPT and its relatives”) is worth discussing. Those discussions won’t take over this whole substack. But if you would rather see me go back to writing a lot about Wokeism, it may be time to unsubscribe.
Discussing ChatGPT with Alex Tabarrok
I thought that the discussion of ChatGPT we had on Monday with Alex Tabarrok was very interesting. Alex pointed out that the bot has a capability that no one expected, which is to take a prompt about a person and a situation and write an opera about it. He pointed out that the types of mistakes it makes are often human-type mistakes, which differ from the usual computer errors.
Alex and I agree that there is great potential use for it in education. It takes us quite a bit closer to Neal Stephenson’s vision in The Diamond Age of personalized instruction by computer. I predict that educators and institutions that choose to fight the technology rather than embrace it will go by the wayside.
We speculate that one path for improving the quality of ChatGPT will be for bots to argue with one another, just as computers improved in chess by playing games against one another. Of course, the evaluation function for a disagreement is not as simple as that for chess. Right now, humans are involved in “reinforcement learning.” But if some of that work can be automated, the bots will learn much more quickly. I predict that this will happen, and that within a year ChatGPT will no longer be hallucinating.
During the Q&A, John Nye raised the issue of political leaders wanting to control what the bots can say. One of my catch-phrases is that we decide what to believe by deciding who to believe. The bots will have to decide who to believe, and politicians and other interested parties will look for ways to control that. A participant asked us about censorship and ChatGPT, and Alex and I admired the problem but did not have a solution.
Alex pointed out the potential for bots to be used to spam government. Regulatory agencies issue “requests for comment,” and the bots could be used to overwhelm them. Letters to Congress could similarly proliferate. He suggested that government officials could, in turn, use a bot to summarize the letters or comments that come in. For me, Tyler’s phrase “Solve for the equilibrium” came to mind.
Alex talked about the way that ChatGPT seemed to reduce human thought to predictable sequences. It sees the phrase “the red barn” more often than the phrase “the red epistemology,” and it uses such frequency patterns to fill in sentences that start with “the red ___.” He then pointed out that his own responses are often predictable, for example how he would respond to someone proposing price controls for new drugs. So maybe the success of ChatGPT’s approach is not surprising, after all.
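The frequency intuition Alex describes can be sketched in a few lines of Python. This is a toy illustration, not how ChatGPT actually works: the tiny corpus and phrases below are made up, and real language models use learned representations over vast data, not raw counts. But counting which word most often follows “the red” captures the fill-in-the-blank idea.

```python
from collections import Counter

# A made-up miniature corpus, for illustration only.
corpus = (
    "the red barn stood by the road . "
    "we painted the red barn last year . "
    "she wrote about the red door . "
    "he mentioned the red epistemology once ."
).split()

def next_word_counts(words, prefix):
    """Count the words that follow each occurrence of `prefix`."""
    n = len(prefix)
    counts = Counter()
    for i in range(len(words) - n):
        if words[i:i + n] == prefix:
            counts[words[i + n]] += 1
    return counts

counts = next_word_counts(corpus, ["the", "red"])
# "barn" occurs more often after "the red" than "epistemology" does,
# so it is the most likely fill-in for "the red ___".
print(counts.most_common(1)[0][0])  # → barn
```

Scaled up to billions of sentences, and with statistical generalization replacing literal counting, this is the sense in which the bot fills in predictable sequences.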
What will be some of the early applications? Alex suggested ChatGPT could draft legal documents much more efficiently than entry-level employees at law firms. Senior partners could get rid of entry-level positions and enjoy a higher markup in their billing, until the markup is competed away. Perhaps investment bankers will no longer have to use expensive entry-level labor for similarly routine tasks.
A participant in the discussion pointed out that ChatGPT already can handle a lot of routine software coding and database design tasks very well. I wondered if a million contractors in India might be out of a job soon.
Alex speculated that ChatGPT might make something like Google Glass (the ill-fated attempt to create a wearable computer with an eyeglasses format) work now. The high quality of ChatGPT’s translation function could enable you to hear someone speak in a foreign language and have the English text version appear on your glasses. You could be deaf and comprehend someone’s speech in real time.
Again, I think that ChatGPT is like the web in 1993 following the release of Mosaic, the first graphical web browser. It is fun to play with and there are many possible applications. This wave is going to be a big one. Jump on it.