6 Comments

I don't think describing ChatGPT as 'software' creates useful associations. It is software only in the most superficial sense: it is not hardware but runs on hardware. Excel is software: basically a big bunch of step-by-step instructions written by humans, executed by ultra-moronic bean-counting homunculi. This means it is possible for users to form accurate mental models of at least parts of Excel. This is not true for ChatGPT and similar entities.

I feel it is much more illuminating to think of them as evolved artificial life forms, or perhaps communities (synusia) of such life forms. Training sets and procedures are their environment, and space within the neural network's parameter set is the resource they compete for. They behave, as animals do, rather than function, as machines and algorithms do. People who train these networks are closer to farmers practising artificial selection to create new breeds than to programmers planning step-by-step instructions in their minds and exporting them in forms executable by ultra-moronic bean-counting homunculi.

Very old, very complicated, and badly managed software _can_ reach a state where it behaves rather than functions, but software developers recognize this as an evil and, given the opportunity, work hard to remedy it. Users who haven't formed accurate mental models of the relevant parts of the software they are interacting with can also feel that it behaves rather than functions, but that is a consequence of their holding inaccurate mental models in a case where accurate ones exist (the software itself being such a model). ChatGPT and its peers aren't such models, so forming mental models of their behavior is like forming mental models of the behavior of your coworkers, or of a dog.

A number of academic papers have already been published investigating how this or that large neural network does what it does. I looked over a couple, and they read like psychology or physiology papers picking apart rats rather than like computer science papers picking apart algorithms. In fact, the very idea of investigating how an algorithm does what it does is almost a contradiction in terms, because the algorithm _is_ the 'how', boiled down to its essence.

We have now arrived at the point where we can appreciate Searle's Chinese Room gedankenexperiment. It is now obvious that Moldbug was right when he argued 15 years ago that it is the Room, rather than the man in it, that speaks Chinese, and that this does not lead to the paradoxes Searle imagined, because the rules the man follows in making the Room work are not globally intelligible to him.

ETA: attentive readers will notice that there is a continuum between globally intelligible algorithms and GPT-class digital life forms, with Excel somewhere in the middle. As a very rough estimate of complexity: the source code of OpenOffice (main trunk, exclusive of tests, extras, and assets such as document templates) is ~350 MB of C++, but of course it is highly redundant; 7z compresses it to 40 MB. GPT-3 has 175 billion parameters; at one byte per parameter (int8), that is 175 GB, and it's a good bet that it does not compress at all. At the other end, Euclid's algorithm, the oldest algorithm still in everyday use, is just dozens of bytes.
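
For concreteness, here is Euclid's algorithm in its entirety, as a minimal Python sketch. The whole 'how' fits in a few dozen bytes of source and is globally intelligible at a glance, which is exactly what 175 GB of parameters is not:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last nonzero value is the GCD.
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # -> 21
```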


Another limitation of ChatGPT at present: I just tried to use it and it came back with the following error message at 9:30 AM on a weekday (prime time?):

"ChatGPT is at capacity right now

Get notified when we're back online"


My essay caused that 😉


There was a Marginal Revolution post a very long time ago about when UPS first instituted newer computer algorithms for its package delivery routes. The algorithms beat most or all of the best drivers, but a great driver who knew how to override the algorithm in the right places could blow the algorithm alone away in terms of productivity gains. I don't know whether that model grafts cleanly onto this, but it is part of where I think ChatGPT is going in the medium term for language-dominated white-collar professions.


> I’m not sure if we can agree on what the equivalent is of a reckless driver where we would want ChatGPT to take over for humans.

ChatGPT has a bias toward expressing things in a very polite and civil way. Perhaps it could “seize the wheel” from some reckless tweets…


I asked ChatGPT three questions about my industry.

1) The first should have been easy, as you can find the information on Medicare Plan Finder, but ChatGPT either said it didn't have the data or gave me the wrong answer.

2) The second question was something I could probably look up myself, but it would take me time. ChatGPT either told me it couldn't find the answer or gave me an answer that was definitely wrong (I can see how it got it wrong, but that doesn't help me).

3) ChatGPT did a lot better on my third question, which asked for no hard numbers; the results probably came from online sources debating the Medicare Risk Adjustment process.

I'm not sure this was better than a Google search, though. ChatGPT works very hard not to take a side, say anything controversial or unofficial, or dive too deep, even if you keep asking questions. The top results from a Google search would have provided many reports and articles with detailed math and deep analysis of the issue, with authors taking a stand and making recommendations. ChatGPT took only the shallowest and most lukewarm parts of all that.

I feel like if it's a topic with a lot of articles online, ChatGPT could write a mostly data-free, milquetoast Voxsplainer article about it, but it would be useless for anyone actually working in my industry.
