LLM Links, 2/15
Bill Gates backs humanoid robots; Mark McNeilly backs AI romance; Let's replace all the lawyers; Venkatesh Rao on a path for AI improvement
What’s more useful: multiple robots that can each do one task over and over, or one robot that can do multiple tasks and learn to do even more? To Apptronik, an Austin-based start-up that spun out of the human-centered robotics lab at the University of Texas, the answer is obvious. So they’re building “general-purpose” humanoid bi-pedal robots like Apollo, which can be programmed to do a wide array of tasks—from carrying boxes in a factory to helping out with household chores.
…If we want robots to operate in our environments as seamlessly as possible, perhaps those robots should be modeled after people.
Pointer from Alexander Kruel.
I am going to disagree. From a software standpoint, a humanoid robot acts like a swarm of robots. As a human, I can say to myself, “I am going to pick up a green pepper and slice it,” and proceed to do it. I don’t have to consciously tell my nerves, muscles, and fingers what to do. But a robot is going to have to be much more conscious about what each part does.
What does the process of slicing a green pepper look like in the kitchen of the future?
a) today’s kitchen, navigated by a humanoid robot
b) a collection of robots: a refrigerator robot that dispenses the pepper; a drone robot that carries the pepper to the counter; a self-propelled knife robot that slices the pepper.
I predict (b). The software will be less complicated to write. Moreover, the kitchen with robotic implements will be able to perform all of the tasks for making a salad simultaneously. It (they) can wash the lettuce and slice the tomatoes while slicing the pepper.
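The software-complexity claim can be caricatured in code: each single-purpose appliance exposes one narrow, easily tested command, and making a salad is just running those commands on several ingredients at once. The sketch below is purely illustrative; the robot classes and method names are invented, not any real robotics API.

```python
import concurrent.futures

# Hypothetical single-task kitchen robots. Each one does exactly one thing,
# so its software is a narrow, independently testable command.
class FridgeBot:
    def dispense(self, item: str) -> str:
        return f"dispensed {item}"

class DroneBot:
    def carry(self, item: str, dest: str) -> str:
        return f"carried {item} to {dest}"

class KnifeBot:
    def slice(self, item: str) -> str:
        return f"sliced {item}"

def prep(item: str) -> str:
    """Pipeline for one ingredient: dispense -> carry -> slice."""
    FridgeBot().dispense(item)
    DroneBot().carry(item, "counter")
    return KnifeBot().slice(item)

# Unlike a single humanoid, the specialized kitchen can work on
# every ingredient of the salad at the same time.
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(prep, ["pepper", "tomato", "lettuce"]))
print(results)
```

A humanoid robot, by contrast, would need one monolithic controller that plans grasping, locomotion, and cutting for each ingredient in sequence, which is the "swarm of robots in one body" problem described above.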
while there is a worthwhile debate on AI’s likelihood to end humankind, there is another possibility that is perhaps more probable: AI will not destroy us. Instead, it will seduce us. By that I mean, humans will become so enamored with interacting with AI devices and, ultimately, AI robots, that we will lose interest in other humans and perhaps gradually die out.
Tyler Cowen links to a paper that reports that large language models can be effective at reviewing contracts. There is a lot of mundane work in the legal/finance sector. I remember in the 1990s that every time Freddie Mac issued a mortgage security, staffers would have to review the offering circular. That sounds like something to outsource to a large language model.
When I was at the Fed in the 1980s, each time a new data release came out, such as a report on retail sales or on new orders for durable goods, an economist would have to do a write-up to pass on to the members of the Board. These reports were carefully reviewed by senior staff, who agonized over whether the memo should say “edged up” or “rose slightly” when the number increased by 0.1 percent from the previous month. You could replace a lot of staff work with an LLM.
But my guess is that the Fed is not eager to cut down on staff. Nor are firms in the securities business. Nor is the legal profession.
I expect soon to see rules that contract reviews, offering circulars, and Fed staff memoranda must be completed and signed by human beings.
Loosely speaking, agents (I’m not going to qualify with the usual long list of qualifiers usually applied — social, autonomous, distributed and so on — take them as implied) are AIs that can make up independent intentions and pursue them in the real world, in real time, in a society of similarly capable agents (ie in a condition of mutualism), without being prompted. They don’t sit around outside of time, reacting to “prompts” with oracular authority.
I think of it as social learning applied to computers. Rao points out that such an idea has been around for a while. His take on it is to emphasize trial-and-error learning.
Here’s an easy way to remember this point: You can’t train your way out of experimentation needs no matter how smart you are.
As with much of Rao’s writing, I find this essay to be intriguing but hard for me to decipher.
"humans will become so enamored with interacting with AI devices and ultimately, AI robots, we will lose interest in other humans and perhaps gradually die out."
Something like that seems to be happening with smartphones. But it's not losing interest in people. Quite the opposite. It's becoming so interested in unusual or entertaining people that boring, ordinary people can't compete. And those entertaining people don't stick around and bother you if you don't want them to. They show you their best, that is, their best in the way of entertaining you, because that's why you've chosen to watch them.
This may eventually lead to loneliness, and a desire for actual people, but great difficulty in actually doing anything with other people because you never developed the knack, and, God, they are so borrrrring sometimes.
A vs B
I think you couldn't be more mistaken. Clearly you aren't an engineer even if your programmer expertise should give you some insight.
The tech for "b" requires far more mechanical components, which tend to be expensive, never mind that more parts mean more maintenance.
The promise of "a" is that AI software substitutes for mechanical complexity, and the mechanical parts are what's far more expensive on a per-unit basis. Option "a" has much greater potential of being effective at a reasonable cost.