ChatGPT/LLM links, 3/23
ChatGPT-4 on deposit insurance; The Zvi on the alignment problem; Ian Leslie on the two cultures; Chatbots in teaching: Ethan Mollick; Bob Ewing; Tyler Cowen and Alex Tabarrok; Sal Khan;
I gave ChatGPT-4 the following prompt:
You are an economist. You propose that deposit insurance for banks should be provided by a private insurance company. You claim that this will reduce the likelihood of a bank failure. Explain
The reply:
Deposit insurance is a form of financial protection that provides depositors with a guarantee that their funds will be returned to them in the event of a bank failure. The purpose of deposit insurance is to prevent bank runs, which can be detrimental to the stability of the financial system.
Traditionally, deposit insurance has been provided by governments through a central agency, such as the Federal Deposit Insurance Corporation (FDIC) in the United States. However, some economists argue that private insurance companies could also provide deposit insurance, and that this could lead to greater stability in the financial system.
The reasoning behind this argument is that private insurance companies have a greater incentive to accurately assess the riskiness of the banks they insure, and to charge appropriate premiums based on those assessments. In contrast, government-run deposit insurance schemes may be subject to political pressure to keep premiums artificially low, which can create a moral hazard problem. If banks believe that the government will bail them out in the event of a failure, they may take on more risk than they otherwise would.
By contrast, private insurance companies have an incentive to accurately assess the riskiness of banks and to charge appropriate premiums. If they fail to do so, they risk losing money when insured banks fail. This creates a strong incentive for private insurance companies to carefully monitor the banks they insure and to take steps to reduce the risk of failure.
In summary, proponents of private deposit insurance argue that it would lead to greater stability in the financial system by reducing the moral hazard problem associated with government-run deposit insurance schemes. However, it is important to note that private deposit insurance may not be a panacea, and that there are potential downsides to such a system, such as the risk of insolvency of the insurance company itself.
I followed up with
Try to improve your answer by using Nassim Taleb's concept of "Skin in the game"
And got this reply:
Nassim Taleb's concept of "skin in the game" is relevant to the debate over whether deposit insurance for banks should be provided by a private insurance company. "Skin in the game" refers to the idea that people are more likely to act in a responsible manner when they have something to lose if they make a mistake. In the case of deposit insurance, this means that private insurance companies may be more effective at preventing bank failures because they have a financial stake in ensuring that the banks they insure do not fail.
When a private insurance company provides deposit insurance, it has a direct financial stake in the success of the banks it insures. If a bank fails, the insurance company must pay out the insured deposits, which can be a significant cost. This creates a strong incentive for the insurance company to carefully assess the riskiness of the banks it insures and to take steps to reduce the risk of failure.
In contrast, when deposit insurance is provided by a government agency, the individuals responsible for overseeing the program may not have the same level of financial stake in the success of the banks they insure. As a result, they may not be as motivated to ensure that the banks they insure are taking appropriate risks and maintaining sound financial practices.
By using private insurance companies to provide deposit insurance, the concept of "skin in the game" can be leveraged to improve the stability of the financial system. Private insurance companies are more likely to carefully assess the risks associated with the banks they insure and to take steps to mitigate those risks because they have a direct financial stake in the outcome. This can reduce the likelihood of bank failures and help to maintain stability in the financial system.
Zvi Mowshowitz has a long post covering recent discussions of many aspects of AI. I will zero in on one excerpt:
Nathan Braun makes the argument that humanity already has to deal with two powerful superintelligent forces, evolution and capitalism, so AGI wouldn’t be unprecedented. . . He does not think that ‘conscious intelligence’ counts for very much, and that an entity 1,000x as smart as us in IQ terms would not be all that powerful, and wouldn’t be able to do things we couldn’t imagine faster than we can process them.
I flat out don’t consider this a possible hypothesis. With probability of (100% minus epsilon, and I’m being generous with the epsilon): If you take as givens that the AGI is (1) orders of magnitude smarter than us, and can act at the speed of code, (2) using conscious consequentialism to achieve its goals and (3) you release it onto the internet with a goal that would be made easier by killing all the humans, all the humans be very dead.
Let me try to take Nathan’s side. As individuals, humans are not very smart. None of us could make a pencil, much less an iPhone. It is our culture that is orders of magnitude smarter than we are. So we are living with an entity (our culture) that is much smarter than any individual human. Human culture might be a match, or even more than a match, for any AI.
The paragraph I just wrote is very hand-wavy. I am anthropomorphizing both “human culture” and the AI. I am treating them both as individual conscious entities. I guess a lot rides on whether one or the other or neither behaves like a conscious entity.
Ian Leslie writes,

AI is about to take vast swathes of analytical and technical activity off our hands. Many of the big technological and engineering problems of the future will be solved by machines, albeit in collaboration with human scientists (at least for now). We can’t allow the machines, or the scientists, to dictate what those problems are or how to think about them. They’re not qualified. As the business professor and AI enthusiast Ethan Mollick says, the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities.
The rise of AI should push us to think more deeply about the fundamental questions of human life: how to be good, how to treat others, what to want (frankly, we’ll have more time on our hands to do so). This is where the humanities come into their own.
He refers to a famous lecture by C.P. Snow, The Two Cultures, published in 1959. My father, Merle Kling, had written on this topic in 1957:
With his visions blocked by a massive wall of technology, science, mathematics and expertness which he is unable to penetrate… it is fair to speculate that the role of the intellectual will come to resemble more closely that of the archaeologist in our society. The archaeologist is not persecuted. He is subsidized, permitted to release his aggressions by spading ancient dirt, accorded token honors and courtesies, and--disregarded. Perhaps what Engels said of the state may be said of the intellectual: he will not be abolished; he will wither away. The intellectual is no longer a man without a country. But he may be a man without a future. And if he is not in league with the future, can he be right?
There are tech stars who are comfortable with at least some of the humanities. I can think of Marc Andreessen. And there are writers with a feel for technology. I can think of Neal Stephenson.
I happen to believe that it is hard to understand information technology without having some formal exposure to computer science and some coding experience. I think that CEOs who lack technical skills in the computer realm are at a disadvantage. In that regard, I share my father’s pessimism about the role of the pure literary intellectual, at least in the near future. But perhaps Leslie is correct that this may reverse in the age of AI.
Ethan Mollick writes,

For ChatGPT, try: You generate clear, accurate examples for students of concepts. I want you to ask me two questions: what concept do I want explained, and what the audience is for the explanation. Provide a clear, multiple-paragraph explanation of the concept using specific examples and give me five analogies I can use to understand the concept in different ways.
After the chatbot responds, he has it provide an explanation of photosynthesis for elementary school children. Note how careful he is in his instructions to the chatbot.
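If you wanted to reuse instructions like Mollick's rather than paste them into the chat window each time, you could send them as a system message through an API. Here is a minimal sketch, assuming the OpenAI Python client and an API key in your environment; the model name and the helper function are my own illustration, not part of Mollick's recipe:

```python
# Minimal sketch: Mollick's explainer prompt as a reusable system message.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You generate clear, accurate examples for students of concepts. "
    "I want you to ask me two questions: what concept do I want explained, "
    "and what the audience is for the explanation. Provide a clear, "
    "multiple-paragraph explanation of the concept using specific examples "
    "and give me five analogies I can use to understand the concept in "
    "different ways."
)

def explain(concept: str, audience: str) -> str:
    """Answer the chatbot's two questions up front and return its explanation."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat-capable model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"The concept is {concept}; the audience is {audience}."},
        ],
    )
    return response.choices[0].message.content

# Mirrors the example in the text: photosynthesis for young students.
print(explain("photosynthesis", "elementary school children"))
```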
But I wonder whether the teacher will be disintermediated. As Marc Andreessen and Tyler Cowen have pointed out, the chatbot could reside on a tablet that a child carries around with him all the time. It becomes a friend to the child. Knowing the child, it does not have to be told how to provide explanations. And learning becomes child-directed rather than teacher-directed.
Bob Ewing writes,

Asking Chat a generic prompt may give you no better answer than Google or Wikipedia. If you ask, “What is behavioral economics?” Chat may even be rather boring and banal.
You’ll get a better answer with something like this: What is behavioral economics? Give an answer in 300 words that a leading expert in behavioral economics would provide if she were speaking to her doctoral students.
Again, the case for writing more elaborate instructions in prompts.
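One way to see the pattern is as a template: start from the bare question, then bolt on a persona, an audience, and a length constraint. The helper below is my own hypothetical illustration of that structure, not anything from Ewing's post:

```python
# Hypothetical helper illustrating the "elaborate instructions" pattern:
# wrap a bare question with an expert persona, an audience, and a word budget.
def elaborate_prompt(question: str, expert: str, audience: str,
                     words: int = 300) -> str:
    return (
        f"{question} Give an answer in {words} words that {expert} "
        f"would provide if she were speaking to {audience}."
    )

# Generic prompt: likely a boring, Wikipedia-style answer.
generic = "What is behavioral economics?"

# Elaborate prompt: reproduces the example above.
detailed = elaborate_prompt(
    "What is behavioral economics?",
    expert="a leading expert in behavioral economics",
    audience="her doctoral students",
)
print(detailed)
```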
Tyler Cowen and Alex Tabarrok have a paper that describes how an economics professor can use chatbots. They show an Ethan Mollick level of understanding (I mean that as the highest praise). One excerpt:
GPTs can also take on personas. You can tell a GPT, for example, “You are now Thomas Jefferson the American revolutionary and President,” and it will then answer questions based on that persona. Useful perhaps for history students. Or: “You are now visionary entrepreneur Steven Jobs. Analyze the following business proposal for flaws, problems and improvements.”
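Programmatically, a persona is just a system message, so it can be set once and reused across questions. Another minimal sketch, again assuming the OpenAI Python client; the Jefferson wording comes from the excerpt, while the function name, model, and sample question are my own illustration:

```python
# A minimal persona wrapper: the system message fixes who the model "is",
# and every subsequent user message is answered in that voice.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

def ask_persona(persona: str, question: str) -> str:
    """Pose a question to the model while it plays the given persona."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": f"You are now {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Illustrative question; useful perhaps for history students.
print(ask_persona(
    "Thomas Jefferson the American revolutionary and President",
    "What dangers do you see in a national bank?",
))
```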
Sal Khan demonstrates how a chatbot can tutor students. Strongly recommended.
From the comments:
"As the business professor and AI enthusiast Ethan Mollick says, the people who manipulate AI most productively will be those with the deepest and most esoteric knowledge of the humanities."
Previous history shows that most technology is pretty rapidly adopted by sociopaths and other bad actors. The ever-present phenomenon of internet spam should be a mild reminder of this. We make amazing technical strides when we're trying to kill each other.
No offense intended, but you think the AI did a good job on that deposit insurance article largely because *it is the article that you would have written yourself* absent AI. There may be some utility in saving yourself wear and tear on your fingers, but as far as a demonstration of problem-solving goes, it's an utter failure. The AI is just regurgitating your solution to the problem with little to no additional insight. I've seen enough examples of what AI produces when asked even simple questions about which it has little to no information, and it doesn't honestly report back "I don't know". It goes off into producing what is readily recognized as gibberish *if you already know the answer to the question*. Given a novel problem, we're never going to know if the AI came up with an innovative solution or a load of bullshit, but my bet is on the latter.
In your post "The SVB story: narrative and causes" you said: "If I were in charge of designing financial regulation, rather than try to make the financial system hard to break, I would try to make it easy to fix."
Does private insurance make it easier or harder to fix in case of bank failure?