The "problem" is the ambiguity of other people's thinking. One's own thinking can be clear: Just work harder (as Tyler Cowen says) and use the new technology more than anyone else.
Tyler has never started a business that wasn't an intellectual, non-profit, or quasi-institutional project. I'm not convinced he knows what he's talking about, since his primary business is talk and takes, i.e., he's an intellectual ecosystem builder.
He hasn't taken on significant financial risk, built a revenue-generating enterprise, or managed capital at scale. If I'm wrong on any of that, someone correct me.
How 'bout Dalio, Taleb, Athey, Graham... why, even our own Kling... people whose experience is more in line with a start-up operation? Arnold seems to be a bit more circumspect on the topic.
AK has been very upfront about his semi-successful web business in the 90s, selling out for modest riches before the dot-com crash.
AK is one of the very few folks on the interweb who can make me STFU and listen when he says something.
Well I'd say Emergent Ventures is the most vital new funding source in the country, so as a philanthropic entrepreneur I'd say he's doing incredible work there. EV flies under the radar for many people and it shouldn't.
I didn't say he was a chimp; he's obviously brilliant. We'll see about EV; nice idea, generous, helpful, etc. The venture capital game isn't for wimps; if he's still around with a string of successes after 5 years, I'll grant him genius status.
Advice like "work harder and use the technology more than anyone else" ...is not business advice. It's something else.
I think working smarter is the correct advice, and Arnold's. Use AI generally, and soon AI agents, in specific practical ways that help with your own work and hobbies. Use more tech than your competitors, so you can cooperate more valuably with more customers, friends, and allies. Do more.
Like vibe-reading books, which I am (or think I should be, but am not quite) eager to try for my blog reading and commenting, which I enjoy so much in my retirement.
An octogenarian entrepreneur - a millionaire many times over - was once asked why he still worked. He replied that he kept seeing opportunities to start new businesses and couldn't pass them up; he had trained himself to see the unmet needs and desires that others overlooked. Perhaps the best thing we can do for our students is to teach them to recognize and take advantage of the opportunities that surround them in a free society.
What advice would you give your 18-year-old self? Not clear. AI, I believe, will have enormous impacts, and not in the distant future. Many friends and acquaintances believe AI cannot be “creative,” whatever that word means. But extremely few individuals have a unique ability to combine ideas or scientific concepts in new and useful ways. AI is not competing against Leonardo da Vinci. It’s competing against the other 99.9999% of mankind.
It’s not competing against Leonardo, against Leonardo’s somewhat less brilliant younger brother, nor - crucially - against Leonardo’s chambermaid and her man-of-all-work husband. Is there going to be a need for a middle class? Will the fascination with and favored status of the prison population pivot, on the left, to a passionate defense of one’s niece’s BS job?
I think it was Holmes who said that the essential skill of a lawyer is being able to predict what a court would do if an issue ended up in front of it. That skill is highly amenable to machine learning, since it is essentially knowing how courts have decided similar matters. Deciding what counts as similar takes judgment, but a mechanical sort of judgment. Someone asked me for legal advice on an unfamiliar subject in a different jurisdiction. As a matter of general legal common sense I had some ideas. I ran the questions through an LLM, and it produced a two-page memo that accorded with my opinions but also spotted issues I had missed. A firm would probably have billed around $1-2K for what it produced, and the written analysis was probably better than what about 90% of the lawyers practicing in the area would have produced. It is hard to see how the legal profession in its current form will survive AI.
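The "predicting what a court would do by knowing how courts have decided similar matters" idea can be sketched in a few lines of code. This is a toy illustration, not a real legal tool: the cases, outcomes, and bag-of-words similarity measure are all invented for the example. It retrieves the past decisions most similar to the new facts and takes a majority vote of their outcomes.

```python
from collections import Counter
import math

# Hypothetical past decisions (facts, outcome) -- invented for illustration.
past_cases = [
    ("tenant withheld rent over unrepaired heating", "tenant"),
    ("landlord ignored requests to repair broken heating", "tenant"),
    ("tenant caused damage and stopped paying rent", "landlord"),
    ("tenant abandoned unit without notice", "landlord"),
]

def vectorize(text):
    # Bag-of-words vector: word -> count.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(new_facts, k=3):
    # Rank past cases by similarity to the new facts,
    # then let the k most similar cases vote on the outcome.
    q = vectorize(new_facts)
    ranked = sorted(past_cases,
                    key=lambda c: cosine(q, vectorize(c[0])),
                    reverse=True)
    votes = Counter(outcome for _, outcome in ranked[:k])
    return votes.most_common(1)[0][0]

print(predict("heating broken tenant withheld rent"))  # -> tenant
```

Real systems would use learned text embeddings rather than raw word counts, but the structure, similarity search over precedent plus aggregation, is the same mechanical-judgment step the comment describes.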
The “failure” of economics as a science is primarily due to the reality of uncertainty, combined with the fact that if there is some economic “law” or regularity, like “house prices always go up,” that people can make money by following, then the actions of the people making money change the regularity and invalidate it.
Most of life is full of one-time-only big events, so the frequency idea of probability, based on the Law of Large Numbers, can’t quite be used. But if one treats probability as a measure of the decision maker’s information, the known-unknown probabilities can be estimated, and even positive and negative unknown unknowns can be allowed for.
My own learning about decision analysis, and about the very low likelihood of being a successful entrepreneur, made my decisions not to try to start a business more rational, but also certain not to succeed; though not a failure either, since untried. Most entrepreneurs think the odds don’t apply to them, the successful and the unsuccessful alike.
I’m pretty sure the biggest losers and winners from AI use will be folks who work heavily with computers and digital information. A few will be wildly more successful, very many will lose their current jobs, and few alternative positions will open up for those without good AI skills.
Your point about culture and institutions slowing AI's impact is one I keep returning to. The ambiguity you describe isn't just uncertainty about whether AI will matter; it's uncertainty about where it will bite first and hardest.
I've been trying to map that unevenness — identifying where new work is already appearing as automation stalls at the edges. The physical world remains stubbornly undefeated. So do trust, judgment, and exception-handling. Some of this is transitional; some may be durable new infrastructure.
Your "keeper-upper" framing from an earlier post stuck with me — the person who helps everyone else in the organization use AI effectively. That's one example of a role that doesn't exist yet and may scale as adoption spreads unevenly.
If you're curious: https://rajeshachanta.substack.com/p/the-last-meter-economy
The ambiguity you describe may be the landscape for a while. Mapping the terrain as it emerges seems more useful than waiting for clarity.
If AI is truly unique in its impact, I'm not sure how much of the predicting we do now will be worth anything in the future. If AI is just another GPT (general-purpose technology) that has massive impacts but doesn't bring about the singularity, then I suppose the advice to focus on learning to think and analyze, spot opportunities, etc. is still pretty much the same. True, we don't know what the practice of law will be like in 5 years, but a lot of the skills taught in legal education are useful analytical ones (certainly not all the useful analytical ones). Whether going to law school is worthwhile depends on what your alternative is.
How about teaching them deeply and well in the domain they are hired to teach in. To maintain human knowledge outside the machine… and reintroduce serious literature & humanities studies which are not in the vile grip of some po-mo orthodoxy or other. For starters.
So if you are right, and in a few (or more) years senior lawyers will need an AI to check their work, does it bode well for your students to pursue a law degree, since junior lawyers become senior lawyers?
Re: "As of now, AI’s are competitive with junior workers in the legal profession, but to be safe you need a “real” lawyer to check an AI’s work. But I would bet that in five years we will be in the opposite situation: an AI will reliably do the work of a senior lawyer"
Tyler Cowen (EconTalk w/ Russ Roberts last week) states that lawyers will be employed as regulators of AI:
"I also think, somewhat counterintuitively, AI will lead to more lawyers. I'm not sure that is a good thing, but we will need to write a lot of new laws for the AIs.
Now, a big part of me believes the AIs would write those laws better than humans could, but I do not think we will let them do it, rightly or wrongly. So, humans will use AI assistance in drafting those laws. I think lawyers who work in government will be a growth sector for the foreseeable future." — EconTalk (30 March 2026): https://www.econtalk.org/ai-employment-and-education-with-tyler-cowen/
A very depressing thought…