15 Comments

I drive a Tesla, and it drives me around 90 percent of the time. It gets better every few months. I don’t think people really comprehend how good it is and how close they are. Tesla has millions of cars on the road doing it right now, sending back real-life training data: the car runs in shadow mode, pretending to drive while you do, and notes when you do something it would have done differently. Waymo only has thousands of cars generating this data.

I want lots of companies to succeed in this space! But I don’t think Waymo is the clear winner.

It’s incredibly useful and stress relieving to have a car that does all the hard work in heavy traffic and I won’t go back.

Tom Grey

The expected private Tesla part-time taxi will be the winner in 2026, or maybe not until 2027. Then it will start exploding in growth in cities and places with high-paying jobs and already a lot of Teslas.

Owning your own self-driving taxi, which, after it takes you to work and later back home, is then available to serve as a taxi for others, is a huge increase in the intensive use of the capital invested in such cars.
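The capital-intensity point above can be made concrete with a toy utilization calculation. All the numbers here (hours of use per day) are assumptions for illustration, not figures from the comment:

```python
# Toy sketch: how much harder the capital works when a privately owned
# car also serves as a part-time taxi. Hours-per-day figures are assumed.

def utilization(hours_in_use_per_day: float) -> float:
    """Fraction of each day the car (the invested capital) is productive."""
    return hours_in_use_per_day / 24

# Commuter-only use: roughly 1 hour of driving per day (assumed).
commuter_only = utilization(1)

# Commute plus part-time taxi duty: roughly 8 hours per day (assumed).
commute_plus_taxi = utilization(8)

print(f"commuter-only utilization:   {commuter_only:.1%}")
print(f"commute + taxi utilization:  {commute_plus_taxi:.1%}")
print(f"capital works {commute_plus_taxi / commuter_only:.0f}x harder")
```

Under these assumed numbers, the same purchase price yields several times as many productive hours, which is the "intensive use" argument in miniature.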

Uber, Lyft, and Bolt (big in Bratislava) will all move to hybrid business models offering auto-autos or driver-taxis, with the auto-taxis getting cheaper each quarter.

Now I wonder whether there are Waymo trucks; auto-trucks will be reducing truck-driver demand visibly in the next 24 months.

(In Europe, manual transmissions are still the most common rather than automatic, and cars are often called autos, so there will be auto-auto-autos.)

gas station sushi

Everything is great until the Tesla decides to “phantom brake” at 70 mph on the freeway with tailgaters behind. The phenomenon has actually gotten worse, not better, with each software update over the past few years.

Scott Gibb

Arnold is my DKU (designated keeper-upper). Thanks.

policarpo

Re: Domain Expertise

Assuming:

(1) new data is being produced more or less continuously

(2) to maintain relevance, AIs will need to be trained on at least some of this new data

(3) some firm is able to limit access to valuable segments of this new data

(4) it would be costly to replicate these valuable segments of new data independently

it would seem there is something of a Coasean situation: the firm able to limit access to the valuable data must determine, in order to maximize its net present value, whether it is more profitable to sell the data via an internally developed product or to sell access to the data to other firms.
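The build-versus-license decision above can be sketched as a toy net-present-value comparison. Every cash-flow figure and the discount rate below are assumptions invented for illustration; the point is only the structure of the choice, not which option wins:

```python
# Minimal NPV comparison sketch for the Coasean data-access decision.
# All cash flows ($M/year) and the discount rate are assumed, not sourced.

def npv(cash_flows, rate):
    """Net present value of a stream of end-of-year cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

DISCOUNT_RATE = 0.10  # assumed cost of capital

# Option A: build an internal product on the proprietary data.
# Higher upside, but a heavy up-front development outflow in year 1.
internal_product = [-5.0, 2.0, 4.0, 6.0]

# Option B: license access to the data to other firms.
# Lower ceiling, but cheap to start and steady.
license_access = [1.5, 1.5, 1.5, 1.5]

a = npv(internal_product, DISCOUNT_RATE)
b = npv(license_access, DISCOUNT_RATE)
print(f"internal product NPV: ${a:.2f}M")
print(f"license access NPV:  ${b:.2f}M")
print("more profitable under these assumptions:",
      "build product" if a > b else "sell access")
```

With different assumed cash flows or discount rates the ranking flips, which is exactly the determination the data-holding firm has to make.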

One would imagine that the economics of different use cases vary and that some segments of the market will be tightly regulated. (https://itrexgroup.com/blog/calculating-the-cost-of-generative-ai/ )

The cost of developing AIs is apparently falling. (A bit old, but: https://carey.jhu.edu/articles/end-ais-artificial-scarcity ).

And foundation models are facing stiff competition already from a swarm of “good-enough” cheaper models. The chatbot tells me:

“As of 2025, the United States leads globally in AI model development, with 40 notable AI models produced in 2024 alone—surpassing China’s 15 and reflecting the U.S.’s dominant role in frontier AI innovation. While exact numbers vary by definition, the U.S. is home to over 6,956 AI startups, many of which are developing proprietary or public AI models. The number of public AI models is growing rapidly, with nearly 2 million models reported in 2025 globally, a significant increase from 81 notable models in 2024.”

Might we see something similar to the trajectory of web search engines? There are apparently over 1,500 web search engines on the market today. And apparently a special-purpose search engine can be built for as little as $50,000. (https://stratoflow.com/how-to-build-a-search-engine/ ). If AI model development follows a similar path, it seems as if access to data is going to be the bottleneck in producing model value. Data owners and producers may well invent new ways to monetize their commodity and limit access to it, just as has happened with internet search.

Not ready to count the niche market out yet.

Christopher B

Another data bottleneck is likely to be the small percentage of books that have been reproduced in digital form. Even though Google alone has scanned over 40 million books, the best estimate is that only about 15% of all books ever published have been scanned. And if they were scanned as images then there is a question of how many and how well they have been OCR'ed into comprehensible and searchable text. Gemini also reminded me that many books produced in the 20th Century are still under copyright which is another hurdle to making them available for AI training.

policarpo

Concur. And that does seem to suggest an entrepreneurial opportunity. But even in an area like law, where the United States Code and the Federal Register are in the public domain, the urgency of addressing the problem of the corrupt federal judiciary would seem to argue in favor of a narrow, auditable AI-judge use case. If we really wanted the rule of law, we would have the judges replaced by AIs as early as later this year.

Yancey Ward

It is also my sense that OpenAI is fading fast: I read tons of material discussing AI developments, and OpenAI is quickly becoming a much smaller part of those discussions.

Andy in TX

I think the key insight in this post is this one: "To take advantage of AI, you need to be willing to completely re-think your mission and your role. My guess is that the professionals at large incumbent organizations who are most willing to do that are also the ones most likely to leave and strike out on their own. What organizations will be left with are the folks who are inclined toward denial and resistance." That last sentence really captures how a lot of people in higher ed, law, etc. are reacting - and is why there will be major disruption in such industries.

Roger Sweeny

Don't count out those "folks ... inclined toward denial and resistance" just yet. There are several bills in the New York state legislature right now that would expand and extend laws against unauthorized practice of law and medicine. The accreditation agencies in education--which, after all, are run by incumbents--will certainly attempt to limit change in schools.

stu

Companies already in a market have an inherent advantage, but it has always been and always will be the case that companies that adapt as needed prosper, while companies that don't adapt, or make the wrong changes, lag. AI adoption is no different in this respect.

I wouldn't want to predict how education will change or not change.

Law firms don't necessarily need to adapt to AI but independent lawyers, partners, and associates most certainly do. The ones who will prosper are the ones who figure out how to use AI to do their jobs faster and better.

Aside: there was mention earlier that associates might go away. I don't think so. They will still do the same job, only faster and better. They may even get more and higher-level tasks assigned to them, so it's not even clear their numbers will drop.

The Voluntarian

Do you think that investors are overhyping AI and the productivity gains it will generate? I'm skeptical that AI will be as profitable as investors hope, and I worry that it could be another DotCom-bubble situation.

I say this as someone who has been required to use AI for college assignments (yes, required by the professor). AI is quite inefficient at gathering useful sources at times, and often makes pointless corrections to grammar that can strip away your voice in the writing. At best, AI is only good for helping you gather your thoughts in a coherent manner. Even then, these models are programmed to keep you engaged, so they may simply praise whatever you say in spite of its incoherence and then ask you pointless, easy questions. This may change in the future, in which case the investors will absolutely get their money back and then some. But that's only if the AI generates returns soon enough.

Jeff Abrams

Left out one key question … regulation….

Marginal Revolution today noted that New York State has a proposal to limit what chatbots can advise on.

Wonder if this may be one case where the public can win out over specialized groups given how obvious the game is now.

My guess is there will indeed be efforts by specialty groups. Perhaps they can delay certain things, but I think the benefits and uses of AI will be too obvious to significantly curtail for too long.

gas station sushi

Target: Microsoft Copilot. It’s "Clippy on Steroids." Windows users rave about how integrated it is into every possible facet of the OS.

I have fond memories of Clippy from the 90s and now it’s finally back.

stu

"The other reason that I am bearish on AI is that I do not trust Sam Altman as a fiduciary. Having read Keach Hagey’s biography, I worry that Altman inherited too much of his father’s floofy profit/non-profit financial notions."

The best reason to bet against OpenAI is that most of the current players are likely to drop out of that competition. The fact that some competitors have other revenue streams probably doesn't help either.

As for "floofy profit/non-profit financial notions," I suspect that has less bearing on your prediction than your general dislike of non-profits. In other words, your prediction says more about you than about OpenAI.