15 Comments

Both Bhide's article and this response are great examples of how much AI discussion is a (mostly accidental) motte-and-bailey fallacy. Because there's no strong definition of "AI" as a moniker, it can apply to almost anything, and by using the same term for two very different concepts, we can all get very confused! On the one hand, machine learning is really cool, allowing computers to (e.g.) generate semi-plausible text (motte), and on the other, we've all seen _The Matrix_ (bailey).

When Bhide "shudder[s] to think about the buggy LLM-generated software being unleashed on the world" because of under-trained developers "writing" code with ChatGPT copypasta, he's referring to the motte, and when he (along with many commenters here and elsewhere) refers to "existential risks to humanity," he's referring to the bailey. ChatGPT is in no way Agent Smith from _The Matrix_. It just has a similar name.


I definitely think that Silicon Valley types overestimate the rate at which mainstream businesses and people will adopt AI. I suspect that the timeframe will be something akin to that of cloud computing.


I am inclined to agree. Business processes tend to value accuracy and traceability: being right, checking the work, and, if it isn't right, figuring out why. As it stands, LLMs are pretty bad at those. Once the hallucination problem gets reduced it might be better, likewise with reducing the black-box aspects, but it will take a while. Many (most?) businesses are still struggling with complex MRP optimization systems.


You can reduce the hallucination occurrences, but at what point is it reliable enough to actually reduce your workload?
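
One back-of-envelope way to frame that question -- every name and number below is a made-up assumption, purely to show the shape of the tradeoff, not a measurement of any real workflow:

```python
# Back-of-envelope sketch only: all parameters are invented assumptions,
# just to show when checked-but-fallible LLM output actually saves time.

def net_time_saved(t_manual, t_prompt, t_check, t_fix, error_rate):
    """Expected minutes saved per task versus just doing the task by hand."""
    t_with_llm = t_prompt + t_check + error_rate * t_fix
    return t_manual - t_with_llm

# Example: an 18-minute task, 2 min to prompt, 10 min to check the answer,
# and 25 min of rework whenever the answer turns out to be wrong.
for error_rate in (0.30, 0.10, 0.02):
    saved = net_time_saved(t_manual=18, t_prompt=2, t_check=10,
                           t_fix=25, error_rate=error_rate)
    print(f"error rate {error_rate:.0%}: net saving {saved:+.1f} min per task")
```

Under these invented numbers, a 30% error rate means the checking and rework eat the whole gain, and somewhere below roughly 10% it starts paying for itself -- that crossover is the "reliable enough" threshold the question is pointing at.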


Agreed, although "close enough that I don't get in trouble for it" will probably be the standard adopted in practice, and that will be pretty low for some people :D


Yes, this is all true, although I think there is a more fundamental issue at play here, which is that it just takes a while for large, bureaucratic organizations to fully work out how to use a new technology most effectively. And that just happens on a longer timescale than the optimists expect.


Indeed.

I wonder, too, whether there won't also be a Hansonian issue at play, where AI will make a lot of suggestions that managers don't like and will be sidelined as a result. I've seen that with other planning systems, certainly.


"checking the accuracy of LLM responses made them productivity killers"

Exactly. LLMs have a serious factuality problem. Interacting with them can seem like you're having a conversation with Dory, the memory-challenged fish from _Finding Nemo_. LLMs will happily concede their error if you point it out, but then "correct" their initial response by giving you yet another factually incorrect response. Until this fundamental deficiency is fixed, their actual economic impact will be significantly constrained.

What LLMs are pretty good at is generating relatively simple source code to solve common problems. Part of why this is better than other uses is that you can immediately run the code to verify its accuracy. You're not left hunting around search engines the way you are when validating other kinds of responses. (If you have to go to the search engines anyway, you might as well have done that to begin with.) Source code gives you a more rapid means of validation than purely informational responses provide.
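
To make that "just run it" loop concrete, here is a purely illustrative sketch: `parse_duration` stands in for the kind of simple helper an LLM might generate (it is not from the article or any real model output), and the asserts are the immediate check you run on it.

```python
# Illustrative only: a quick harness for the "just run it" verification loop.
# `parse_duration` is a hypothetical stand-in for LLM-generated code; the
# test cases below are ones you write yourself.

def parse_duration(text: str) -> int:
    """Convert '1h30m'-style duration strings into seconds."""
    total, number = 0, ""
    for ch in text:
        if ch.isdigit():
            number += ch
        elif ch in "hms" and number:
            total += int(number) * {"h": 3600, "m": 60, "s": 1}[ch]
            number = ""
        else:
            raise ValueError(f"unexpected character {ch!r} in {text!r}")
    return total

# Cheap, immediate validation -- far faster than fact-checking a prose answer.
assert parse_duration("90s") == 90
assert parse_duration("1h30m") == 5400
assert parse_duration("2h") == 7200
print("all checks passed")
```

The point is only that the feedback arrives in seconds; a wrong prose answer offers nothing this cheap to test against.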


Bhide is mostly correct that AI's progress into business will be incremental -- but it will be an S curve: a slow climb through the low 5 or 10 percent, then a rapid jump to 70-90 percent. It's really annoying that voice-to-text dictation still isn't great -- not even 100% accurate on 99% of trials.

It's clearly far from actually "understanding" speech.
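
To put a shape on that S-curve claim -- this is only a sketch, and the midpoint year and steepness below are invented parameters, not forecasts:

```python
# Illustrative logistic adoption curve; the midpoint and steepness values
# are made-up assumptions, not predictions about any real technology.
import math

def adoption(year, midpoint=2030.0, steepness=0.9):
    """Logistic adoption share: slow start, rapid middle, then saturation."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in range(2024, 2038, 2):
    print(f"{year}: {adoption(year):5.1%}")
```

The point is just the shape: years of single-digit adoption, then most of the move happening in a short window.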

His fear of bad software is way overblown, tho -- the humans checking and testing the modules will know whether the code works according to the test cases. He doesn't mention how AI has, at the machine-code level, improved a sorting routine -- something humans had failed to do in over a decade.

https://arstechnica.com/science/2023/06/googles-deepmind-develops-a-system-that-writes-efficient-algorithms/

Wherever it's clear what the rules are and what counts as "good" vs. "bad," or Win vs. Lose, the computer will learn to Win over humans -- including at designing better code, and likely at building more efficient DeepMind game players as well as LLMs that interact with humans.

He also fails to mention neural links.

https://www.pcmag.com/news/first-human-to-receive-neuralink-implant-says-it-lets-him-play-civilization

A Centaur is half human / half machine, clearly separated. A Cyborg is more integrated, like a human with an enhanced heart, arms, or legs. Computer-aided neural-link control of one's own skeleton is coming, and soon after that, control of an external exo-robot.

Coming too are neural links allowing computer-aided telepathy with an AI, letting the human direct the AI actor according to the human's wishes. We're more likely to get huge criminal or tyrannical attempts by Cyborg-enhanced humans directly accessing a guiding AI, which then accesses multiple other AI agents to do the nefarious stuff that the (high Dark Triad?) psychopathic human wants.

Defending against such AI-bot-aided human schemes will involve multiple, and often deliberately NOT interconnected, AI systems.

Still, autonomous AI fighter robots -- Berserkers -- remain a quite realistic fear, though not so much in the next 10 years.

Killing the economy because of Climate Alarmism remains a bigger threat.


I am not sure how useful any historical innovation stories are for predicting how a technology that itself has intelligence will impact society. The existential risk, as I understand it, comes from AI that can take action independently of human instruction. In that case, thinking about how humans use the technology is largely irrelevant and everything gets weird.


AI seems to have landed firmly in optimization and "curve fitting"-style prediction work. The governance end of medical and general use will be quite profound. But if the uncanny-valley video, graphic art, and music that's easy to find is any indication, I don't think it'll make much of a dent there at all.

Quoth Adam Savage - "AI has no point of view."


Bhide’s comment that “transformative technologies… evolve through a protracted, massively multiplayer game that defies top-down command and control” strikes one as insightful and accurate enough, even if it also seems true of most other social and cultural phenomena. At the moment we may be seeing this process work itself out with the apparent impending failure of the CHIPS Act (https://www.bloomberg.com/opinion/articles/2023-03-28/chips-act-funding-isn-t-what-us-semiconductor-manufacturers-need?utm_source=website&utm_medium=share&utm_campaign=copy) as manufacturers decline the quid pro quo entailed therein, with X in Brazil (apparently random and arbitrary censorship by what seem to be judges politically aligned with the party in power), and with Google in California (a state in financial crisis demanding taxable payments for links to its equally desperate pet media outlets from the deepest pockets around). It seems possible that, given the energy and space demands of computing centers, regulatory incentives, available monetization channels, and the elasticity of demand, whatever new applications arise out of whatever it is that goes under the moniker “AI” will face similar trials in reaching and maintaining mass adoption.

As Bhide and Dr. Kling note, this “protracted, massively multiplayer game” has ample precedent. Perhaps this pattern might be profitably considered through the lens of the political science concept of “clientelism.” Encyclopedia Britannica has a brief but excellent entry on clientelism: https://www.britannica.com/topic/clientelism But the incentives faced by the multitude of players are perhaps a bit more complicated than the simple model of clientelism offered there. In terms of economic development, the incentives in play may include:

“Clientelistic practices of exchange involve at least five different aspects to a varying degree in the four countries. Two of them concern benefits targeted directly at electoral mass constituencies, namely social policy benefits (public housing, and to a lesser extent differential access to social insurance benefits for unemployment, old age, and sickness) and public sector employment in the civil service (patronage). The other two modes of clientelistic exchange work through business arrangements. On the one hand, the politicized governance of public or publicly controlled enterprises allows politicians to benefit supporters through public procurement contracts, soft loans, and influence on the hiring policy of such companies. Also in this instance, jobs are at stake. On the other hand, even where governments do not exercise control over enterprises through ownership or contractual relation, politicians may politicize the regulatory process that affects the operation of private businesses (e.g., with regard to subjects such as land zoning, building codes, environmental and health protection, anti-trust and fair-trade regulation). Here mass supporters may indirectly benefit from politicians’ benevolence through higher wages, greater job security, and better employment opportunities. Firms may even help ‘deliver the votes’ to their favored politicians and indirectly monitor the clientelistic exchange.

One final aspect of clientelism concerns the extent to which it is formally legally codified or tacitly practiced through informal arrangements. The general presumption in the literature is that clientelism operates in informal ways, but this is not always borne out. Some clientelistic practices may be perfectly legal and therefore harder to discredit politically. An example might be the parties’ appointment powers to corporate managing boards of state-owned companies in Austria in the past. Not all clientelistic practices therefore are also instances of corruption in the technical-legal sense.”

Kitschelt, Herbert. “The Demise of Clientelism in Affluent Capitalist Democracies.” Chapter 13 in Patrons, Clients, and Policies (2007), pp. 299-300.

It does not seem so difficult to superimpose such patterns upon developments in the AI arena (and also subvert Kitschelt’s notion of demise) nor to use them to anticipate what will be coming down the pike.


“Clientship,” in the ancient Greek and Roman sense, or even perhaps in its feudal incarnation as “serfdom,” does indeed seem a useful analogy for understanding the condition of the lower social orders in the United States today. In Ancient Rome the relationship between patron and client was rooted in an all-pervasive religious culture reflected in the regulation of nearly all living conditions and surprisingly minuscule details of everyday life. The client could not own land, and any work done on the land was in the name of the patron, who was entitled to take the client’s personal property and money at will. The client was not free to leave a patron but instead was bound, from father to son, to the same family. Any question of change was completely beyond conception.

This seems not so unlike what we have today in the United States. The patron can be seen as the patrician class of governance, in that it is unconstrained in the demands it can make of the client class, the notion of constitutional constraints having been flushed with the adoption of judicial review. Want to leave the country? Pay a fine. Want to drive a car? It must be electric. Pay whatever taxes are demanded, or else. Protest? To prison.

The future of AI will largely play out as the means of achieving a convergence between modern life and the ancient Roman harmony of the orders. It will be only a matter of months before EPA regulations require your Bluetooth-enabled toilet seat to control the dispensing and rationing of toilet paper. The endless shelves of legal codes, legal interpretations, regulations, etc. will be the real fuel spurring AI adoption and will enable the patricians to rule with an all-encompassing iron fist that the Roman patricians could only have dreamed of. Rest assured, all the savings from Social Security reform and Medicare coverage reductions will be plowed into achieving a degree of authoritarian control over every individual’s mundane decisions so total that even Red China will be put to shame.


I agree and have been thinking about this country’s inevitable movement toward ‘clientship’ for years. I think the origin was Abraham Lincoln’s rewarding the industrialists of his time with tariff skimming. He set the stage for the Gilded Age. He introduced us to the idea of an income tax. He showed that if you were able to skillfully deploy political rhetoric you could in effect become a tyrant (start a war without congressional approval, suspend habeas corpus, incarcerate thousands for voicing disapproval of the war, close newspapers whose opinions he didn’t like).

Over the last 160 years we’ve seen a gradual progression of the patron-client relationship, with accelerated bursts during the Wilson, Roosevelt, Johnson, Bush and Biden administrations. Citizens are pretty much docile sheep now, upset only if we don’t get the amount of government manna we feel entitled to. We will gladly follow The Patron’s rules because we have long forgotten, if we ever knew, what it means to live in a free society. We have been brainwashed by decades of sophisticated political ponerization.

Now we are content to stay in our comfortable media bubbles, fooling ourselves that our lives have meaning by occasionally yelling at our screens. Our Patron has us exactly where it wants us. AI with universal surveillance is being deployed now. Get ready to enjoy your cricket burgers and Soylent Green smoothies.


Very insightful post (as usual). Thanks 🙏.
