Reactions to AI
Peter Diamandis on corporate enthusiasm; Susan Pickard on academic hostility; Hollis Robbins on same; V.O. de Mello on moral objections; Claude on hostility to automobile in the 1910s.
Claude Code (Anthropic’s agentic coding tool) now has run-rate revenue above $2.5 billion, having more than doubled since the beginning of 2026. Business subscriptions have quadrupled since the start of the year, and enterprise use has grown to represent over half of all Claude Code revenue.
Software engineers have always been the rate-limiting factor for every startup I’ve invested in. You can never hire enough. The Fortune 500 barely gets any – they all flow to Silicon Valley. Now you can buy intelligence on a metered basis. Pay per token. No recruiting, no vetting, no retention, no equity. Just intelligence as a utility. Consumers pay $20/month. Enterprise power users pay $200/month. And companies are spending millions per year because the ROI is there.
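The "intelligence as a utility" framing invites a quick back-of-envelope comparison. A minimal sketch, using the $20 and $200 monthly plan prices quoted above and an assumed, purely illustrative fully loaded engineer cost of $200,000 per year:

```python
# Back-of-envelope: metered intelligence vs. a salaried engineer.
# The $20 and $200/month plan prices come from the text above;
# the $200k fully loaded engineer cost is an illustrative assumption.
consumer_plan = 20 * 12        # $/year, consumer subscription
power_user_plan = 200 * 12     # $/year, enterprise power user

engineer_cost = 200_000        # assumed fully loaded $/year per engineer

# How many enterprise power-user seats one engineer's annual cost could fund:
seats_per_engineer = engineer_cost // power_user_plan
print(seats_per_engineer)  # 83
```

On those (assumed) numbers, one engineer's cost buys dozens of power-user seats, which is the economic intuition behind "pay per token, no recruiting, no retention."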
In academia, to use AI marks you in the same way as voting Brexit, or insisting on the reality of biological sex: you are someone who lacks discernment, who isn’t a member of the tribe.
…The academy’s refusal to even consider how to positively incorporate AI is a form of gatekeeping. My colleagues wish to insist that academics alone are the legitimate purveyors of knowledge. This is apparent from their tone, a mixture of moral panic and wounded authority, which seems clerical in nature. “I can’t trust students.” “My judgement is under siege.” “Why teach at all?”
The Conference on College Composition and Communication, the primary professional organization for writing educators, just passed a resolution affirming the right of students and faculty to categorically refuse the use of generative AI in the classroom. The productivity benefits of the technology are dismissed as unsubstantiated claims; adoption is characterized as corporate intrusion by Big Tech. Members are instructed to reject the workforce preparation justification, insisting instead that the writing classroom must remain a space for processing feelings and engaging in civic participation.
Victoria Oldemburgo de Mello and others write,
Our research reveals that for a meaningful segment of the public, resistance to AI is moral in nature: a conviction that AI use is fundamentally wrong, not merely impractical. This moral opposition generalizes across applications and predicts real behavioral reluctance to use AI even when it would be personally advantageous. These findings reframe AI adoption as a challenge that cannot be solved through technological improvement alone, pointing instead to the need for strategies that engage the moral dimensions of public attitudes.
Pointer from Tyler Cowen. I see a lot of intense negativity toward AI. There are legitimate criticisms to be made, but most of the complaints come across to me as not well informed. People see AI as “not creative,” whereas I see creativity as searching the “adjacent possible,” and AI can do that at least as well as humans. People see AI as undermining literacy, when I think that other uses of the Internet are more harmful.
I asked Claude if the automobile caused a similar reaction in its early days. The conclusion:
So in short: yes, the auto backlash was intense and widespread, particularly in the 1900s–1920s. But it was arguably more reactive (responding to visible carnage on streets) whereas AI anxiety today is more anticipatory — trying to reason about harms before they fully materialize.
You should review Sarbanes-Oxley before deciding that corporate America is going to use AI to develop one-off apps, let alone apps integrated into highly regulated, audited systems.
https://www.metricstream.com/insights/sox-it-controls.htm
The CEO is technically risking jail time for violations.
Anthropic does appear to be the best horse to bet on in the AI race, based on the limited financial information available. Their projection of positive cash flow by 2028 is certainly encouraging, as is their current position atop the heap. Yet the downside risks aren't easy to ignore:
- Its current $390 billion valuation is apparently based upon expectations of future growth, not current earnings, which suggests a high potential for volatility.
- If it hits hard times, forced asset sales won't generate much revenue. And one wonders how true that is of other giant firms too large to buy out that have declared their intention to never pay a dividend. Take Amazon. When revenues eventually start drying up, all those shareholders who thought they were protected by the value of the firm's salable assets might be in for a disappointment.
- The technology is changing rapidly so a competitive advantage today may well be worthless tomorrow.
- There is increasing talk about the "data wall." Firms counting on public or synthetic data to keep improving models may well be disappointed by the competitive performance those approaches deliver.
More generally, the tea leaves can be read a variety of other ways as well. I was struck by this ZeroHedge piece the other day, discussing the signal that all the AI Super Bowl ad spending and influencer payments are sending:
“When was the last time truly revolutionary tech needed a billion-dollar ad campaign?
Did the iPhone need influencer deals? Did Google Search need Super Bowl ads in 1998? Did email need this? No. People just used them.
You know what does need massive paid promotions? Pharma drugs. Crypto exchanges. Online gambling. MLM schemes. Products where adoption is hype, not utility. And now, apparently, AI.
‘This will eliminate your job. Also please use it. Here’s $600K to tell your followers it’s cool.’
They need humans to sell a product designed to replace humans. They need creators to promote tech that makes creators obsolete. They need influencers to build trust in a system that eliminates influencer marketing.
Here’s a question: if $700 billion per year can’t produce a product that sells itself, when exactly does this make money?
$700 billion in spending, cash flow collapsing, stocks tanking, SEC filings about raising capital — and the best growth strategy is paying TikTokers to demo features.
Either AI is about to deliver the greatest economic transformation in human history (and they need influencers to convince you this)… or we’re watching the most expensive corporate Hail Mary ever thrown.”
( https://www.zerohedge.com/political/super-bowl-top-signal )
Anthropic has an influencer marketing program on LinkedIn to flog Claude. It uses the hashtag #ClaudePartner and is marked as "brand partnership" on sponsored posts. One wonders how many less transparent efforts are out there.
So color me a bit unsure about what is or is not in the Kool-Aid at the moment.