3 Comments
Comment deleted
Jul 13
Jul 13 · Edited

My son has AI write code in languages he doesn't know, and he is able to tweak what it produces into something adequate. If he started from scratch, there would be so many commands to look up and so many syntax errors that it would take far longer than letting AI make the first cut.

I don't know how efficient the AI-written code is, but honestly I don't know that about my son's code either. What I can say with little doubt is that most coders write rather inefficient code, or expend far too much labor on rather unimportant improvements, or both.


In general, the compelling use case is "replacing the best expensive humans at scale."

I think the fact that a lot of this started with "words-in, words-out" systems is misleading many people: they have been focusing on that framework, mostly learning how to use those particular systems and discussing them.

Instead, people should be focusing on "a system which (1) learns with incredible effectiveness by detailed and insightful observation of huge numbers of complicated examples across all kinds of sensory dimensions, with minimal human effort spent structuring or guiding the learning process, and (2) can interact with humans, receiving instructions and refining corrections quickly through small amounts of ordinary human language."

So, what happened first is that we "trained" the chat systems on huge collections of strings of ideas expressed in human language. Lots of people are focusing too much on that, without seeing the unlimited potential of the bigger picture. For example, when we train these systems on things that are not just words, like the set of all pictures or videos tagged with descriptions and styles, they become extremely good at generating that material too, as good as the best humans, but much faster and cheaper.

And when we train them by closely observing what the best, most expensive humans in most fields do, they are going to figure out "how the humans do it, and how it can be done at least as well with machine capabilities." While the fixed costs will be high*, once all the lessons there are to learn are captured in an always-improving software system, the marginal cost of making copies and scaling up to supply as much of that activity as the global economy demands will be very low, and that marginal scaling cost is likely to keep declining rapidly.

As for the cars, my impression from the latest reports is that we now have self-driving cars that are objectively no more dangerous on average than human drivers, and perhaps even safer (and continuing to improve); it's just that most humans are far more sensitive to robot danger than to human danger.

Eventually the sensitivity differential will relax, the safety differential will expand, and the "driving-relevant terrain" will keep adjusting in more robot-car-friendly directions, with robot cars networking and communicating with each other to manage traffic and prevent accidents in ways that humans can't. It's even plausible that in the 2030s some places will start banning human drivers altogether, or exclude privately-owned vehicles from whole urban areas, so that one only sees work, delivery, and robo-taxi vehicles.

*Because the costs of compute for training are currently so high, innovations that shave even a few percentage points off could be worth billions, which creates an opportunity for big money for anyone who can do it. Just to show how rapidly things are moving in that direction, there are already start-ups designing chips specialized to perform these particular compute-intensive training tasks as efficiently as possible.
