13 Comments

Both Bhide's article and this response are great examples of how much AI discussion is a (mostly accidental) motte-and-bailey fallacy. Because there's no strong definition of "AI" as a moniker, it can apply to almost anything, and by using the same term for two very different concepts, we can all get very confused! On the one hand, machine learning is really cool, allowing computers to (e.g.) generate semi-plausible text (motte), and on the other, we've all seen _The Matrix_ (bailey).

When Bhide "shudder[s] to think about the buggy LLM-generated software being unleashed on the world" because of under-trained developers "writing" code with ChatGPT copypasta, he's referring to the motte, and when he (along with many commenters here and elsewhere) refers to "existential risks to humanity," he's referring to the bailey. ChatGPT is in no way Agent Smith from _The Matrix_. It just has a similar name.

I definitely think that Silicon Valley types overestimate the rate at which mainstream businesses and people will adopt AI. I suspect that the timeframe will be something akin to that of cloud computing.

I am inclined to agree. Business processes tend to value accuracy and traceability: being right, checking the work, and, if it isn't right, figuring out why. As it stands, LLMs are pretty bad at those. Once the hallucination problem is reduced they might do better, likewise once the black-box aspects are reduced, but it will take a while. Many (most?) businesses are still struggling with complex MRP optimization systems.

You can reduce how often hallucinations occur, but at what point is it reliable enough to actually reduce your workload?
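
One rough, back-of-the-envelope way to frame that break-even point -- every number below is a made-up assumption, not anything from the thread:

```python
# Rough break-even sketch: does an LLM draft actually save time once you
# account for checking it? All the numbers here are invented assumptions.

def expected_hours_with_llm(review_hours, error_rate, rework_hours):
    """Expected hours per task when the LLM drafts and a human verifies."""
    return review_hours + error_rate * rework_hours

manual_hours = 2.0   # doing the task yourself, no LLM
review_hours = 0.5   # reading and checking the LLM's draft
rework_hours = 4.0   # fixing things when the draft turns out to be wrong

for error_rate in (0.05, 0.20, 0.50):
    hours = expected_hours_with_llm(review_hours, error_rate, rework_hours)
    verdict = "saves time" if hours < manual_hours else "costs time"
    print(f"error rate {error_rate:.0%}: {hours:.2f}h vs {manual_hours:.2f}h manual -> {verdict}")
```

With these particular numbers the LLM stops paying for itself somewhere between a 20% and 50% error rate -- the point being that the threshold depends entirely on how expensive checking and rework are for your task.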

Agreed, although "close enough that I don't get in trouble for it" will probably be the standard adopted in practice, and that bar will be pretty low for some people :D

Yes, this is all true, although I think there is a more fundamental issue at play: it just takes a while for large, bureaucratic organizations to fully work out how to use a new technology most effectively. And that happens on a longer timescale than the optimists expect.

Indeed.

I wonder if there won't also be a Hansonian issue at play, where AI makes a lot of suggestions that managers don't like and gets sidelined as a result. I've certainly seen that with other planning systems.

"checking the accuracy of LLM responses made them productivity killers"

Exactly. LLMs have a serious factuality problem. Interacting with them can seem like having a conversation with Dory, the memory-challenged fish from _Finding Nemo_. LLMs will happily concede an error if you point it out, but then "correct" their initial response with yet another factually incorrect one. Until this fundamental deficiency is fixed, their actual economic impact will be significantly constrained.

What LLMs are pretty good at is generating relatively simple source code to solve common problems. Part of why this beats other uses is that you can immediately run the code to verify it. You're not left hunting around search engines the way you must to validate other kinds of responses. (If you have to go to the search engines anyway, you might as well have started there.) Source code gives you a more rapid means of validation than purely informational responses provide.
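
As a sketch of that run-it-to-verify loop -- `slugify` here is a hypothetical stand-in for an LLM-generated helper, not anything from the article:

```python
import re

# slugify() plays the role of code an LLM might draft; the asserts are the
# on-the-spot checks a human reviewer can run immediately.

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Unlike a factual claim, code can be verified by executing it:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  LLMs & Source Code  ") == "llms-source-code"
print("all checks passed")
```

Two asserts and you know more about this answer's correctness than an hour of searching would tell you about a factual one.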

Bhide is mostly correct that AI's progress into business will be incremental -- but it will follow an S curve: a slow start at 5 or 10% adoption, then a sharp, rapid climb to 70-90%. It's really annoying that voice-to-text dictation still isn't great -- nowhere near 100% accurate on 99% of trials.

It's clearly far from actually "understanding" speech.

His fear of bad software is way overblown, tho -- the humans checking and testing the modules will know whether the code works according to the test cases. He doesn't mention how AI has improved a sorting routine at the machine-code level, something humans had failed to do in over a decade.

https://arstechnica.com/science/2023/06/googles-deepmind-develops-a-system-that-writes-efficient-algorithms/
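
To give a sense of what that looked like, here is a toy Python sketch of the kind of fixed-size sorting network AlphaDev optimized -- its actual improvements were at the assembly level, and this is not DeepMind's code:

```python
from itertools import permutations

# Toy illustration only -- not DeepMind's code. AlphaDev's gains came from
# shaving instructions off tiny fixed-size routines like this three-element
# sorting network, but in assembly rather than Python.

def sort3(a, b, c):
    """Sort three values with a fixed sequence of compare-and-swaps."""
    if a > b: a, b = b, a   # compare-and-swap positions 0, 1
    if b > c: b, c = c, b   # compare-and-swap positions 1, 2
    if a > b: a, b = b, a   # compare-and-swap positions 0, 1 again
    return a, b, c

# The input size is fixed, so every ordering can be checked exhaustively:
assert all(sort3(*p) == (1, 2, 3) for p in permutations((1, 2, 3)))
```

Because every input ordering can be verified exhaustively, "better" has an unambiguous definition -- exactly the kind of clear win/lose rule the next paragraph describes.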

Wherever it's clear what the rules are and what counts as "good" vs. "bad", or win vs. lose, the computer will learn to win over humans -- including at designing better code, and likely at building more efficient DeepMind-style game players as well as LLMs that interact with humans.

He also fails to mention neural links.

https://www.pcmag.com/news/first-human-to-receive-neuralink-implant-says-it-lets-him-play-civilization

A Centaur is half human, half machine, clearly separated. A Cyborg is more integrated, like a human with an enhanced heart, arms, or legs. Computer-aided neural-link control of one's own skeleton is coming, and soon after, control of an external exo-robot.

Coming too are neural links allowing computer-aided telepathy with an AI, letting the human direct the AI actor according to the human's wishes. We're more likely to get huge criminal or tyrannical attempts by cyborg-enhanced humans directly accessing a guiding AI, which then accesses multiple other AI agents to do the nefarious stuff that the (high Dark Triad?) psychopathic human wants.

Defending against such AI-bot-aided human schemes will involve multiple, and often deliberately NOT interconnected, AI systems.

Still, autonomous AI fighter robots -- Berserkers -- remain a quite realistic fear, though not so much in the next 10 years.

Killing the economy because of Climate Alarmism remains a bigger threat.

I am not sure how useful any historical innovation stories are for predicting how a technology that itself has intelligence will impact society. The existential risk, as I understand it, comes from AI that can act independently of human instruction. In that case, thinking about how humans use the technology is largely irrelevant, and everything gets weird.

AI seems to have landed firmly in optimization and "curve fitting" style prediction. The governance end of medical and general use will be quite profound. But if the uncanny-valley-laden video, graphic art, and music that's easy to find is any indication, I don't think it'll make much of a dent there at all.
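
For concreteness, a minimal sketch of what "curve fitting" style prediction means -- the data below is invented, purely illustrative:

```python
import numpy as np

# "Curve fitting" prediction in miniature: fit a line to noisy points,
# then extrapolate. The data is made up for illustration.

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])     # roughly y = 2x + 1 plus noise

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares line fit
print(f"predicted y at x=5: {slope * 5 + intercept:.1f}")
```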

Quoth Adam Savage - "AI has no point of view."

Very insightful post (as usual). Thanks 🙏.

I agree and have been thinking about this country's inevitable movement toward 'clientship' for years. I think the origin was Abraham Lincoln's rewarding the industrialists of his time with tariff skimming. He set the stage for the Gilded Age. He introduced us to the idea of an income tax. He showed that if you could skillfully deploy political rhetoric you could in effect become a tyrant (start a war without congressional approval, suspend habeas corpus, incarcerate thousands for voicing disapproval of the war, close newspapers whose opinions he didn't like).

Over the last 160 years we've seen a gradual progression of the patron-client relationship, with accelerated bursts during the Wilson, Roosevelt, Johnson, Bush, and Biden administrations. Citizens are pretty much docile sheep now, upset only if we don't get the amount of government manna we feel entitled to. We will gladly follow The Patron's rules because we have long forgotten, if we ever knew, what it means to live in a free society.

We have been brainwashed by decades of sophisticated political ponerization. Now we are content to stay in our comfortable media bubbles, fooling ourselves that our lives have meaning by occasionally yelling at our screens. Our Patron has us exactly where it wants us. AI with universal surveillance is being deployed now. Get ready to enjoy your cricket burgers and Soylent Green smoothies.
