Please read Handle's comment on the previous post: https://arnoldkling.substack.com/p/three-components-of-social-order/comment/51272808. There is more than enough widely available tech already to make American streets almost as safe as Japanese ones - there has been for decades. The hurdles on the road toward that objective are political and cultural, not technological.

"As you know, some famous tech bros are excited about humanoid robots. As you also may know, I am inclined to make a different bet, on robots designed with specific goals in mind."

The physical capability "multipurpose killer app" for humans is probably our hands, which really are amazing in the wide variety of things they can do with a high degree of precision (though they are often fragile in proportion to that precision). Instead of machines resembling the human body, I anticipate a large number of specialized platforms and approaches used to protect and transport those hands to the things they are intended to handle, and then deploy them for that work. I hope you can forgive me, but I simply cannot resist naming these "HandleNoids".

I suspect we are going to get a lot of things that look like the wide variety of arthropods, but with deployable hands if they need to manipulate anything, and with wheels for ground movement (immensely superior to legs on flat terrain) or propellers for flying or for movement on or in water. If swimming, probably more like fish or cetaceans than arthropods. On uneven ground, a "robot spider-centaur" seems intuitively ideal as a basic design, reminiscent of the tachikoma "think-tanks" from Ghost in the Shell.

Nature and engineering are both full of examples of physical specialization yielding large advantages in efficiency and performance at any particular physical task over any kind of general-purpose approach.

Consider what we do with the "sub-robots" we already have in our tools. It's common for many ordinary people to have a number of slightly different hand-held motorized rotary tools rather than one general purpose one, for example, a drill / driver, an impact wrench, a "dremel", a compact router, a motorized ratchet driver, a polisher, and arguably circular saws and angle grinders qualify. That's not counting larger rotary tools like a drill press, lathe, or bench grinder. It often doesn't make sense to try to perform many of those functions by means of swapping out attachments or bits or making modifications to a single motor tool, though Lord knows I learned that the hard way.

I think what will actually happen, if we can automate the work of all the paper shufflers in the back offices, is that we will create more paper shuffling jobs.

We've already seen the way some of that untapped potential will remain untapped.

Already it seems that some algorithm outputs will either not be accepted, or will pose a greater risk of liability, if there is not a guild-credentialed human professional who has reviewed and blessed off on that output - someone who can be made to testify in ways humans can understand, and who can be disciplined by the law and his guild in ways that incentivize due diligence. If a human has to review everything that is getting automated, then it's not really getting automated. Productivity might rise a lot, but the immense opportunity to scale has barely been touched.

It's not just the law doing this to companies; they are incentivized to do it to themselves. They need the human professionals as a kind of insulating buffer. If someone is suing, investigating, or prosecuting a company based on some claim about an algorithmic system's outputs, then one cannot "put an algorithm on trial" without putting the whole company and entire chain of command on trial. This forces the company to reveal a lot of things it thinks must be kept secret: valuable proprietary trade secrets about how the algorithm works, and internal deliberative processes with details about who directed what to happen when (e.g., "On May 16th, Diana Washington told the engineering lead to raise the frequency of transgender representation 100-fold from 0.18% to 18%"). There is the additional problem that "code is evidence" that is written and recorded under duties to avoid deletion, in ways that the human mental states responsible for decisions - derived from common acculturation, unrecorded face-to-face conversations, and tacit understandings - are not.

Many organizations have been able to hide the ball on what they are really doing, and why, by resort to the Socially Acceptable Excuse that "people aren't robots, man" and that we are kind of loosey-goosey creatures who make holistic judgments in a fuzzy way based on a lot of factors that simply can't be reduced to a bunch of numbers, patterns, statistical relationships, and formulas! Because people accept this excuse, it lets organizations get away with all kinds of mischief that they couldn't get away with if, in fact, it were all reducible to numbers, patterns, statistical relationships, and formulas - which, duh, it totally is.

Consider the example of racial quotas in admissions and hiring at elite universities. In reality, Harvard has something like a 15% quota for East Asians. Can you find a smoking-gun document that explicitly articulates this policy? You can't; it doesn't exist. It doesn't need to, because the insiders involved know what's supposed to happen and are generally on board with making it happen, while outsiders accept the excuse of "holistic admissions, the numbers are a total coincidence, ha ha," blah blah blah. Now, imagine you replaced the entire Harvard admissions and hiring process with algorithmic systems that did exactly the same things. Uh oh: the algorithm is now written down, enemies of Harvard now have a smoking gun that is explicitly and illegally racialist in character, and the individuals who wrote it down, or ordered them to do it that way, can be identified, subpoenaed, and held accountable. Can't have that!

At least half of USG is actually run this way via unwritten understandings between members of inside circles. If there is something you want to do, but writing it down would get you in trouble, then that won't stop you from doing it, because you'll just find a way to get it done which doesn't involve writing it down. But you can't use this trick if an algorithm is doing it. Silly coders, tricks are for humans!

Except if it's an LLM, then the algorithm is not written down in a way any human can understand - no more than human mental states are written down in brains in a way any human can understand. The salient difference seems to be that humans have ways of communicating without records (at least until corporate law requires all officers to wear GoPros with full sound and video recording at all times), whereas everything that goes into a computer - whether number-crunched by traditional coded-up algorithms or digested by a machine-learning digital creature - can be easily recorded, and there are zero legal hurdles to requiring it to be recorded.
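
On that last point, here is a minimal sketch of what "easily recorded" looks like in practice. Everything below is hypothetical - call_model is a stand-in for whatever inference API a company actually uses - but the pattern is just appending every prompt and completion to a file that preservation duties can attach to:

    import json
    import time

    AUDIT_LOG = "model_audit.jsonl"  # hypothetical file name

    def call_model(prompt: str) -> str:
        # Stand-in for the real inference call; not a real API.
        return "..."

    def audited_call(prompt: str) -> str:
        completion = call_model(prompt)
        record = {"ts": time.time(), "prompt": prompt, "completion": completion}
        # Append-only record: trivially cheap to keep, trivially discoverable.
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
        return completion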

I doubt it will create more paper shuffling jobs but it will certainly create more paper shuffling.

If an auto-tech-debt-fixing LLM actually works, I will be very impressed. It is not implausible, but in my experience the tedium of fixing tech debt is only part of the issue. Products rushed to market are often genuinely difficult to fix properly without at least temporarily regressing their functionality. This is doubly true when users have come to depend on undocumented or even buggy behavior.
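
To make the "users depend on buggy behavior" trap concrete, a toy sketch (the names are hypothetical, not from any real codebase). The naive auto-fix regresses a caller that was written against the observed, buggy behavior:

    # A shipped helper that was meant to truncate to n characters
    # but has always truncated to n-1:
    def truncate(s: str, n: int) -> str:
        return s[:n - 1]  # the bug: drops one extra character

    # Downstream code written against observed behavior now depends on it:
    assert truncate("abcdef", 4) == "abc"  # caller relies on the buggy n-1

    # The "correct" version breaks that caller, so fixing the debt means
    # auditing every call site, not just patching the function:
    def truncate_fixed(s: str, n: int) -> str:
        return s[:n]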

I have been learning modern data science techniques lately and wondering if LLMs might do well improving long term productivity there. A great deal of scientific and technical R&D now requires boring data cleaning and manipulation which seems ripe for greater automation.
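
Much of that work is rote transformations like the sketch below (the file and column names are hypothetical). An LLM that reliably wrote and sanity-checked this kind of glue code would be a genuine productivity gain:

    import pandas as pd

    df = pd.read_csv("survey.csv")  # hypothetical raw data
    # Normalize column names, drop duplicates, coerce junk values to NaN:
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    df["age"] = pd.to_numeric(df["age"], errors="coerce")
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    df = df.dropna(subset=["age", "date"])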

"Those phone menu trees with foreign workers hanging on the end of them drive consumers crazy."

For some companies, this is a bug. For other companies, this is a feature, -the- feature, so, the worse the better.

Consider this story about Anthem: https://aphyr.com/posts/368-how-to-replace-your-cpap-in-only-666-days

Then consider whether it's reasonable to infer from that story that Anthem actually wants good automated customer service to fix problems like the one Kyle Kingsbury had. The first comment there makes the point: "This puts Bulgakov's and Gogol's satires of the Soviet system to shame."

Developer cost is around 5% of software cost. It's not low-hanging fruit. The CVEs related to actual programming are an absurdly small subset. We can write code now that's vanishingly close to correct; we just choose not to. Formal use cases (message sequence charts) have been declining since a peak around 2000. I don't see right now how an AI can help with the current challenges. The distribution of trust is where the money goes - contracts, governance, coordination.
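
One cheap instance of getting "vanishingly close to correct" that most teams skip is property-based testing - a toy sketch using the hypothesis library, with a deliberately simple function under test:

    from hypothesis import given, strategies as st

    def merge_sorted(a: list, b: list) -> list:
        # Merge two sorted lists into one sorted list.
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i])
                i += 1
            else:
                out.append(b[j])
                j += 1
        return out + a[i:] + b[j:]

    @given(st.lists(st.integers()), st.lists(st.integers()))
    def test_merge(a, b):
        # The defining property, checked across generated inputs:
        assert merge_sorted(sorted(a), sorted(b)) == sorted(a + b)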

Look into "organic" software by the DoD. Seems to be going quite well.

Oh man, that brings back memories. It goes to show that even USG can do impressive things if it (1) spends trillions of dollars, (2) over decades, (3) tries a thousand different things, (4) is even remotely able to discern what works better in each iterative refinement. I think the ASSIP started pushing 'organic' in the early W Bush administration.

I actually met (Air Force Major General) Claude Bolton, who studied electrical engineering, then became a top fighter pilot in Vietnam, and worked a lot on that stuff when he was running acquisition during two hot wars. To say he was a stellar individual in many ways is to put it mildly. I haven't met (Air Force Major General) Ed Bolton, who joined after Vietnam and by all accounts is just as impressive and also has a very compelling life story. I'm told they are not related, which feels wrong even if true, so it makes me want some cultural institution of "honorary relatedness".

I don't know how big the market for performance royalties for music is - I'm thinking of public spaces in particular - but assuming it's big enough, why would you want to pay those royalties to the humans who wrote and published that music when AI can generate "ambient music" for free?

I'm guessing almost no one pays much attention to ambient music anyway; it's just something playing in the background while you're shopping at the supermarket or sipping a coffee at the coffee shop. So why would people mind if you replaced Lady Gaga's Poker Face with an instrumental AI-generated song that sounds vaguely similar?

"You can start a business to help firms use LLMs to improve customer support and lower their labor costs at the same time."

Arnold, for this to happen you need an LLM with the social maturity of a 30-year-old, not of a 15-year-old. My impression, from the headlines I see about LLM snafus, is that the technology has not yet arrived at the point where it can deal with persons who may give unexpected prompts and inputs. Consider: how does a company ensure an LLM agent will not go off script and demean the intelligence of the customer or make embarrassing equivocations? How do you program an LLM to appear "intelligent" without creating unpredictable risk?

The phone trees I deal with are intentionally dumb and aggravating. An LLM phone tree would be an improvement, but then it might prove to be aggravating and costly to the company that employs it.

Consider, for example, the embarrassment to a company if a customer records their conversation with an LLM agent and solicits the LLM agent to speak badly about the company! Or to get the LLM agent to say politically incorrect things. Or to say abjectly stupid things. Is this a risk a company wants?
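
The standard mitigation, for what it's worth, is to never let raw model output reach the customer. A toy sketch of the pattern (all names hypothetical, with a keyword check standing in for a real moderation layer):

    BANNED_TOPICS = ("competitor", "lawsuit", "politic")

    def draft_reply(customer_msg: str) -> str:
        # Stand-in for the actual model call; not a real API.
        return "..."

    def safe_reply(customer_msg: str) -> str:
        draft = draft_reply(customer_msg)
        # Fail closed: anything off-script gets escalated to a human.
        if any(topic in draft.lower() for topic in BANNED_TOPICS):
            return "Let me connect you with a human agent."
        return draft

Whether a keyword list, or even a second model, can actually catch every solicitation of the kind described above is exactly the open question.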

I've heard there are already companies using AI customer support with better customer satisfaction than human support.

If the AI doesn't have to put you on hold while it asks its manager, it has an unfair advantage, lol.

"I demand to speak to your friend pretending to be your supervisor!"

How robot assisted surgery currently works: https://www.youtube.com/watch?v=nxoRSEIqs2I

Robotic surgery numbers: https://www.strategicmarketresearch.com/blogs/robotic-surgery-statistics

There are already myriad tools that will scan code and help to improve it. While that's a form of tech debt, it's not the tech debt that most companies worry about these days. (Given the number of lines of code running all over the world, the fact that large-scale failures don't happen continuously is proof that most running code is - while not good - likely mostly sufficient.)

True tech debt is generally *architectural or structural* choices, not bugs introduced by poor coding practices. Think choosing a data model that can’t scale to a million customers, or a monolithic architecture that needs to be broken apart into microservices, or not having built for CI/CD so you have to bring the service down to perform an upgrade. These aren’t problems that an LLM can naively solve; these are things that require the codebase to get pulled apart and refactored.
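
A toy illustration of the difference (a hypothetical design, not real code): nothing below is a bug an LLM can spot-fix, yet the design is fatal at scale, because every read replays the full transaction history:

    class Account:
        def __init__(self):
            self.transactions = []  # the chosen data model: an ever-growing log

        def deposit(self, amount: float) -> None:
            self.transactions.append(amount)

        def balance(self) -> float:
            # O(n) on every read: fine at 1,000 rows, fatal at 100 million.
            return sum(self.transactions)

An LLM can polish every line of this and the debt remains; fixing it means a new storage model and migrating every caller.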

Regarding software, it can go either way. Right now the emphasis is on copilots that help you push crapware out faster.

I would like to see AI used to aid the improvement of systems. Pattern matching is a start, and will help with shallow problems amenable to spot fixes.

The real challenge will be to make tools that render the surgeries currently in the too-hard bucket feasible and safe.

"I would make the opposite bet."

I applaud you for taking the optimistic position.

I often listen briefly to podcasts on my Google mini. I like that I can pause or stop and restart later where I left off. Google's software seems to break once or twice a year. Not updating the start point has been the problem at least twice. Currently it tells me what episode it's going to start where I left off, and then it goes silent. The episode never starts. Whatever the glitch, in the past it has taken Google surprisingly long to fix it - usually more than a week.

I look forward to the first ED-209.

Comment deleted

Qualified immunity is a judicially-invented doctrine that was created to mitigate the unfairness involved in trying to hold people accountable in circumstances where no one could reasonably predict the rapid issuance or application of countless new judicially-invented doctrines.

You can't automate what you can't predict, and you can't predict abuses of discretion masquerading as jurisprudence.

Comment deleted

Hi Guest User, you asked me to respond to this, but it's not clear to me whether we actually clash on some key issue or whether you are asking me to answer a non-rhetorical question. Could you clarify where you think our disagreement lies?

I'm quite familiar with the history of this subject and the legal arguments surrounding qualified immunity, and I have been convinced by scholars like the brilliant Will Baude that the Warren Court made it up just like they made up so much else.

I would be happy to see it abolished - though not in isolation, but as part of a package deal rolling back all the Warren Court (and Burger Court!) innovations, with a credible commitment to never do stuff like that again. All the ones for constitutional criminal procedure at least, but I'd say all the rest too, which seems like the only principled thing to do if jurisprudential fabrication is indeed such an egregious sin. I've observed, however, that many of the people who argue against qualified immunity are cool with many of the other Warren-Burger Court 'discoveries', so I often take the sincerity of their invention-based protests with a grain of salt.

There are certainly a lot of terrible cases and unjust holdings that were facilitated by the doctrine's existence, but then again there are a lot of terrible crimes and injustice perpetrated on innocent citizens by criminals when the police can't or won't do anything about crime. It's sad that it seems we have to pick our poison until the current moronic regime that unwisely put us in this unenviable position is replaced.

In the same case, they upheld the doctrine of judicial *absolute* immunity 8-1, even though, as Douglas pointed out, it's kind of perverse to make police officers more vulnerable than judges to liability for not anticipating that laws would be found unconstitutional on appeal (the local judge and police in that case won at federal trial).

One question to ask: if there are so many good right-wing and good left-wing reasons to be against the doctrine, why did Warren support it, and why has every SCOTUS continued to uphold or even strengthen it, going on 60 years? I think the "legal realism" answer is pretty obvious. They wanted to be able to continue abusing their authority to interpret the constitution in order to act as a super-legislature, and to continue rapidly and radically changing the structure and social equilibrium of American law enforcement in a way that politics and elections couldn't undo.

And they understood that if every single new decision like that could open the door to thousands or millions of surprising, retroactively-applied civil suits about past actions, then there would be incredible amounts of resistance against any such new decision, and the judges would hesitate to make such changes. If, on the other hand, they could deem some law unconstitutional but with liability applied only prospectively, then it would be easier to swallow, thus easier to decide and implement.

So qualified immunity only incidentally protects policemen - fairly for many innocent officers, but unjustly for some bad apples too. What it really protects is judicial power, by substantially lowering the cost of using it. And just like judicial absolute immunity, it exists because it's judges who get to decide, and, naturally, they always decide in favor of judges.

Comment deleted

"We cannot demand policing that understands the basics of the 1st, 4th and 5th amendments if we don’t want more crime? Competent policing is not an option?"

I've got to say it's a bit amusing to keep being put back on "team pro-QI" even though I've explicitly stated to you I'm not. But I can certainly understand where it came from, why it survives, and that these reasons are more compelling on a practical, if not legal, level than you make them out to be.

But as for "We cannot demand policing that understands the basics of the 1st, 4th and 5th amendments if we don't want more crime?" No, we cannot. I'm sorry you don't like this state of affairs - I don't either - but the system has evolved to make this trade-off an unfortunate fact.

The reason is because something qualifying as "law" that maps to "the basics of the 1st, 4th and 5th amendments" barely exists today, and *certainly* did not exist during the Warren and Burger Court eras.

If in the 60s and 70s you had asked 100 lawyers with experience in constitutional criminal procedure how any case would turn out, you would never have gotten more than two thirds of them to guess right, usually a lot less. If you gave any of these people a 2:1 payoff to bet 10% of their wealth on the outcome of any case, they would refuse the bet. You talk about police being liable, but who is holding lawyers liable to clients for not telling them from the beginning that they are going to lose a case? Shouldn't they have known? Ha - lawyers are almost always totally immune from such claims.

When the top judges in the country were going back and forth to polar-opposite results on appeal, and then on appeal from the appeal, and SCOTUS votes were 5-4 or 6-3 over and over with the dissents almost always making good arguments that the result was wrongly decided, then you literally had the people in the best possible position in the world to understand "the basics" of the law in complete, chaotic, unpredictable disagreement.

You think non-lawyer cops back then were supposed to do better than that, while all the lawyers up and down the line were justly off the hook? Cops couldn't "understand the basics" because there were no basics to understand. The cops actually begged for "the basics" - "Tell us exactly what the constitution means we can and cannot do, in bright-line rules, and we will train that and do that" - but the best anyone could do was constantly update the guidance with the latest case law.

The bottom line is that while one can understand actual law, there is no way to "understand" abuse of power masquerading as "law", because, by its nature, it is a discretion to deviate from established understandings and expectations and to dictate novel shifts in policy. If we are going to hold cops liable for that, then every judge and lawyer involved in any case who didn't correctly identify the final outcome on a question of civil-rights-violation liability from the very start - as the cop is expected to do - ought to lose a month's income too.
