20 Comments
Mar 2, 2023 · Liked by Arnold Kling

An important part of life’s journey is developing a sense of independence from your parents — learning to make decisions without relying on them directly. Will future generations learn to become independent of their “Illustrated Primers”?


The Illustrated Primer had a human directing it. I'm just not comfortable with the notion that Microsoft/Google will know every detail of my grandchildren's inner and outer lives. They will use that knowledge to shape the child toward their interests and especially toward their profits. It will never be their friend; it will never love them. It will use them.


Others have probably said this before, but here goes. It seems to me that if the current generation of AI machines succeeds in merely producing prose that is indistinguishable from that of, say, news outlets, it is really a sad indictment of the human writers.


Re: Marc Andreessen — It seems like a certain type of person (nerds, but not meant derogatorily) really forgets how much stuff happens in the physical world. Sure, an advanced AI can find you all the information to understand current black hole theory, and eventually even synthesize explanations tailored to how you understand things. But it can't throw a baseball with you, or show you how to throw a ball (which is almost impossible to explain in a way a person can implement, while just demonstrating it in front of them a bunch of times allows them to copy it), and it can't go outside and be your fourth player so you and your two actual friends can play 2-on-2 basketball.

Marc's mental model of a 'friend' sounds a heck of a lot more like a servant than an actual friend. This is probably a boon for lonely people who have few to zero real friends anyway, but still not actually a friend.

Also, physical touch is ingrained in people; for infants, it's a requirement just to live.


> Think of the trainers as seeding the models with artificial data.

An alternative interpretation is that they are simply the models’ teachers. Models are a lot more responsive to reward and punishment than human students, which gives the trainers more power (with humans we might call such an effect “grooming”).


No sooner do we start to hear about all these AI Instagram models, and here comes AI simulation Wilt Chamberlain.


"Or we could watch a simulated debate between Keynes and Hayek about today’s economy."

Does the value of a simulated debate between Keynes and Hayek rest on Keynes's and Hayek's ability to defend their arguments and refute the arguments of their adversary? Are you sure that the simulated Keynes is making its best effort (*) in that debate? If no one is able to certify that the debate is "authentic", why would a skeptical observer derive any conclusions from that debate? Should a skeptical observer assume that ChatGPT is using the debate to trick her into accepting an argument from authority?

(*) My definition: As good as the real Keynes would do.


" As users come to see the clear influence of prior beliefs, they will tend to view these models as puppets of their trainers rather than objective sources of information."

But if the users think the trainers are "experts" and good people (~people on my team, which means they share the right priors with me), they may believe that anything other than a puppet is wrong. See, for example, many subscribers to the New York Times.


The way to use regressions to combine the data with your priors on the MPC would be to just run the regression and do the t-test against a = 0.8, instead of against a = 0. Right?
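A minimal sketch of that idea, using simulated income/consumption data (the sample size, income range, noise level, and true MPC of 0.8 are all assumptions for illustration): fit the regression slope by ordinary least squares, then compute the t-statistic against the prior null a = 0.8 as well as against the conventional null a = 0.

```python
import math
import random

random.seed(0)

# Hypothetical data: consumption as a function of income, true MPC = 0.8
n = 100
income = [random.uniform(50, 150) for _ in range(n)]
consumption = [0.8 * x + random.gauss(0, 5) for x in income]

# OLS slope and intercept
mean_x = sum(income) / n
mean_y = sum(consumption) / n
sxx = sum((x - mean_x) ** 2 for x in income)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(income, consumption))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Residual variance and standard error of the slope
resid = [y - (intercept + slope * x) for x, y in zip(income, consumption)]
s2 = sum(e ** 2 for e in resid) / (n - 2)
se_slope = math.sqrt(s2 / sxx)

# t-statistics against the two null hypotheses
t_vs_prior = (slope - 0.8) / se_slope  # H0: a = 0.8 (the prior)
t_vs_zero = (slope - 0.0) / se_slope   # H0: a = 0 (the conventional null)

print(f"slope = {slope:.3f}, se = {se_slope:.4f}")
print(f"t vs a=0.8: {t_vs_prior:.2f}")
print(f"t vs a=0:   {t_vs_zero:.2f}")
```

With data generated under the prior, the t-statistic against a = 0.8 stays small (fail to reject the prior), while the t-statistic against a = 0 is enormous — which is the point of testing against the null you actually believe rather than a straw-man zero.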
