15 Comments

Both Bhide's article and this response are great examples of how much AI discussion is a (mostly accidental) motte-and-bailey fallacy. Because there's no strong definition of "AI" as a moniker, it can apply to almost anything, and by using the same term for two very different concepts, we can all get very confused! On the one hand, machine learning is really cool, allowing computers to (e.g.) generate semi-plausible text (motte), and on the other, we've all seen _The Matrix_ (bailey).

When Bhide "shudder[s] to think about the buggy LLM-generated software being unleashed on the world" because of under-trained developers "writing" code with ChatGPT copypasta, he's referring to the motte, and when he (along with many commenters here and elsewhere) refers to "existential risks to humanity," he's referring to the bailey. ChatGPT is in no way Agent Smith from _The Matrix_. It just has a similar name.


I definitely think that Silicon Valley types overestimate the rate at which mainstream businesses and people will adopt AI. I suspect that the timeframe will be something akin to that of cloud computing.


"checking the accuracy of LLM responses made them productivity killers"

Exactly. LLMs have a serious factuality problem. Interacting with them can seem like you're having a conversation with Dory, the memory-challenged fish from Finding Nemo. LLMs will happily concede their error if you point it out, but then "correct" their initial response by giving you yet another factually incorrect response. Until this fundamental deficiency is fixed, their actual economic impact will be significantly constrained.

What LLMs are pretty good at is generating relatively simple source code to solve common problems. Part of why this use is better than others is that you can immediately run the code to verify its accuracy. You're not left hunting around search engines the way you must to validate other kinds of responses. (If you have to go to the search engines anyway, you might as well have done that to begin with.) Source code gives you a more rapid means of validation than purely informational responses do.
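As a minimal sketch of that feedback loop: suppose an LLM produced the hypothetical helper below (my example, not from the comment). A couple of assertions validate it immediately, with no search engine required.

```python
# Hypothetical LLM-generated helper; the point is how cheaply it can be checked.

def dedupe_preserve_order(items):
    """Remove duplicates while keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Immediate, local validation -- the rapid feedback the comment describes.
assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserve_order([]) == []
```

If an assertion fails, you know right away and can push back on the model, which is exactly the verification shortcut that purely informational answers lack.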


Bhide is mostly correct: AI's progress into business will be incremental -- but it will follow an S-curve, with a slow climb to a low 5 or 10% before racing up to 70-90%. It's really annoying that voice-to-text dictation is still not great -- not yet 100% accurate on 99% of trials.
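The S-curve shape the comment describes can be sketched with a logistic function (an illustration with made-up parameters, not anything from Bhide):

```python
import math

def adoption(t, k=1.0, t_mid=0.0):
    """Logistic S-curve: fraction of adopters at time t.
    k sets the steepness; t_mid is the inflection point (50% adoption)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# Slow early growth, a rapid middle, then saturation:
# at t = t_mid - 3, adoption is under 5%; by t = t_mid + 1 it exceeds 70%.
```

The qualitative point is that the same curve looks flat for a long time and then suddenly steep, which is why incremental-looking progress can still end in near-universal adoption.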

It's clearly far from actually "understanding" speech.

His fear of bad software is way overblown, though -- the humans checking and testing the modules will know whether the code works according to the test cases. He doesn't mention how AI has, at the machine-code level, improved a sorting routine -- something humans had failed to do in over a decade.

https://arstechnica.com/science/2023/06/googles-deepmind-develops-a-system-that-writes-efficient-algorithms/

Wherever the rules are clear, and what counts as "good" vs. "bad", or Win vs. Lose, is well defined, the computer will learn to Win over humans. That includes designing better code, and likely more efficient AI game players (like DeepMind's) as well as LLMs that interact with humans.

He also fails to mention neural links.

https://www.pcmag.com/news/first-human-to-receive-neuralink-implant-says-it-lets-him-play-civilization

A Centaur is half human, half machine, clearly separated. A Cyborg is more integrated, like a human with an enhanced heart, arms, or legs. Computer-aided neural-link control of one's own skeleton is coming, and soon after, control of an external exo-robot.

Coming too are neural links allowing computer-aided telepathy with an AI, letting the human direct the AI actor according to his wishes. We're more likely to see huge criminal or tyrannical attempts by cyborg-enhanced humans directly accessing a guiding AI, which then accesses multiple other AI agents to do the nefarious stuff that the (high Dark Triad?) psychopathic human wants.

Defending against all such aiBot-aided human schemes will involve multiple, and often NOT interconnected, AI systems.

Still, autonomous AI fighter robots -- Berserkers -- remain a quite realistic fear, though not so much in the next 10 years.

Killing the economy because of Climate Alarmism remains a bigger threat.


I am not sure how useful any historical innovation stories are to predictions of how a technology that itself has intelligence will impact society. The existential risk, as I understand, comes from AI that can take action independently of human instruction. In that case, thinking about how humans use the technology is largely irrelevant and everything gets weird.


AI seems to have landed firmly in optimization and "curve fitting"-style prediction. The governance end of medical and general use will be quite profound. And if the uncanny-valley-filled video, graphic art, and music stuff that's easy to find is any indication, I don't think it'll make much of a dent at all.

Quoth Adam Savage - "AI has no point of view."


Bhide’s comment that “transformative technologies… evolve through a protracted, massively multiplayer game that defies top-down command and control” strikes one as insightful and seems accurate enough, even if it also seems true of most other social and cultural phenomena as well. At the moment we may be seeing this process working itself out with the apparent impending failure of the CHIPS Act (https://www.bloomberg.com/opinion/articles/2023-03-28/chips-act-funding-isn-t-what-us-semiconductor-manufacturers-need?utm_source=website&utm_medium=share&utm_campaign=copy ) as manufacturers decline the quid pro quo entailed therein, with X in Brazil (apparently random and arbitrary censorship by what seem to be judges politically aligned with the party in power) and with Google in California (a state in financial crisis demanding taxable payments for links to its equally desperate pet media outlets from the deepest pockets around). It may seem possible that given the energy and space demands of computing centers, regulatory incentives, available monetization channels, and the elasticity of demand, whatever new applications might arise out of whatever it is that goes under the moniker “AI” will face similar trials in reaching and maintaining mass adoption.

As Bhide and Dr. Kling note, this “protracted, massively multiplayer game” has ample precedent. Perhaps this pattern might be profitably considered through the lens of the political science concept of “clientelism.” Encyclopedia Britannica has a brief but excellent entry on clientelism: https://www.britannica.com/topic/clientelism But the incentives faced by the multitude of players are perhaps a bit more complicated than the simple model of clientelism offered there. In terms of economic development, the incentives in play may include:

“Clientelistic practices of exchange involve at least five different aspects to a varying degree in the four countries. Two of them concern benefits targeted directly at electoral mass constituencies, namely social policy benefits (public housing, and to a lesser extent differential access to social insurance benefits for unemployment, old age, and sickness) and public sector employment in the civil service (patronage). The other two modes of clientelistic exchange work through business arrangements. On the one hand, the politicized governance of public or publicly controlled enterprises allows politicians to benefit supporters through public procurement contracts, soft loans, and influence on the hiring policy of such companies. Also in this instance, jobs are at stake. On the other hand, even where governments do not exercise control over enterprises through ownership or contractual relation, politicians may politicize the regulatory process that affects the operation of private businesses (e.g., with regard to subjects such as land zoning, building codes, environmental and health protection, anti-trust and fair-trade regulation). Here mass supporters may indirectly benefit from politicians’ benevolence through higher wages, greater job security, and better employment opportunities. Firms may even help ‘deliver the votes’ to their favored politicians and indirectly monitor the clientelistic exchange.

One final aspect of clientelism concerns the extent to which it is formally legally codified or tacitly practiced through informal arrangements. The general presumption in the literature is that clientelism operates in informal ways, but this is not always borne out. Some clientelistic practices may be perfectly legal and therefore harder to discredit politically. An example might be the parties’ appointment powers to corporate managing boards of state-owned companies in Austria in the past. Not all clientelistic practices therefore are also instances of corruption in the technical-legal sense.”

Kitschelt, Herbert. "The demise of clientelism in affluent capitalist democracies." Patrons, Clients, and Policies (2007), pp. 299-300.

It does not seem so difficult to superimpose such patterns upon developments in the AI arena (and thereby subvert Kitschelt’s notion of demise), nor to use them to anticipate what will be coming down the pike.


Very insightful post (as usual). Thanks 🙏.
