20 Comments
Don Silva's avatar

Anthropic does appear to be the best horse to bet on in the AI race, going on the limited financial information available. Their projection of positive cash flow by 2028 is certainly encouraging, as is their current position atop the heap. Yet the downside risks aren't easy to ignore:

- Its current $390 billion valuation is apparently based upon expectations of future growth, not current earnings, which suggests a high potential for volatility.

- If it hits hard times, forced asset sales won't really be able to generate much cash. And one wonders how true that is of other giant firms too large to buy out that have declared their intention to never pay a dividend. Take Amazon. When revenues eventually start drying up, all those shareholders who thought they were protected by the value of the firm's salable assets might be in for a disappointment.

- The technology is changing rapidly so a competitive advantage today may well be worthless tomorrow.

- One sees increased talk about the "data wall." Builders counting on public or synthetic data may well be disappointed by the competitive performance those assumptions deliver.

More generally, the tea leaves can be read a variety of other ways as well. I was struck by this ZeroHedge piece the other day, discussing the signal that all the AI Super Bowl ad spending and influencer payments is sending:

“When was the last time truly revolutionary tech needed a billion-dollar ad campaign?

Did the iPhone need influencer deals? Did Google Search need Super Bowl ads in 1998? Did email need this? No. People just used them.

You know what does need massive paid promotions? Pharma drugs. Crypto exchanges. Online gambling. MLM schemes. Products where adoption is hype, not utility. And now, apparently, AI.

‘This will eliminate your job. Also please use it. Here’s $600K to tell your followers it’s cool.’

They need humans to sell a product designed to replace humans. They need creators to promote tech that makes creators obsolete. They need influencers to build trust in a system that eliminates influencer marketing.

Here’s a question: if $700 billion per year can’t produce a product that sells itself, when exactly does this make money?

$700 billion in spending, cash flow collapsing, stocks tanking, SEC filings about raising capital — and the best growth strategy is paying TikTokers to demo features.

Either AI is about to deliver the greatest economic transformation in human history (and they need influencers to convince you this)… or we’re watching the most expensive corporate Hail Mary ever thrown."

( https://www.zerohedge.com/political/super-bowl-top-signal )

Anthropic has an influencer marketing program on LinkedIn to flog Claude. It uses the hashtag #ClaudePartner and is marked as "brand partnership" on sponsored posts. One wonders how many less transparent efforts are out there.

So color me a bit unsure about what is or is not in the Kool-Aid at the moment.

stu's avatar

Agreed, Anthropic's lead can disappear in a second. Revenue and profit projections can change even faster, and for more reasons.

Since long before AI, most tech companies have had very small assets in comparison to revenue and profit. Very few can weather downturns by selling assets.

Mechanization of agriculture massively reduced the workforce to produce food. Virtually everyone is the better for it. AI may replace jobs faster and be more disruptive but there is no reason to think jobs lost will not be replaced with different jobs.

Peter's avatar

Typo in "disappeared," might want to fix. Took me a couple of reads to figure out the word there; I can't be the only one. FYI.

Treeamigo's avatar

You should review SarbOx before deciding that corporate America is going to use AI to develop one-off apps, let alone one integrated into highly regulated/audited systems.

https://www.metricstream.com/insights/sox-it-controls.htm

The CEO is technically risking jail time for violations.

Lex Spoon's avatar

You might be right, technically.

In general, though, I would not expect it to be a large issue once things settle out. From my experience at Sox- and PCI-impacted companies, the system is set up so that you can have an army of absolute monkeys writing the code and still be compliant. That's how the industry works, and, what do you know, that's what the regulations are designed to accommodate. I didn't design the implementation myself, but that is my impression from working at such places.

By the way, there is a similar give and take with copyright law. Each time the technology changes, the copyright law gets modified to account for it. It's not like there is any real philosophical basis of copyright that transcends the technology; it is the other way around, where pioneers work out a way to do things, and then whatever that is gets locked down by the law.

Anyway, for Sox and PCI, my experience is that they are implemented in a way that most of the engineers at a company never directly encounter them. What happens is that engineers are required to use the company repository (GitHub) and the company review system (GitHub).

Given all of this, it seems like you can replace the engineers by AIs and just have them still do pull-request submissions, reviews, and commits, all audited.

Treeamigo's avatar

Doesn't matter who is writing the code, true. Indian body shops produce a lot of the basic stuff now. What matters is that it is documented, tested/QA'd, not in the public domain, and that there is clear segregation of duties throughout the process.

The coding of a new app itself is usually not the time-consuming part. The systems architecture and upstream/downstream integration, particularly if it touches a system of record, and the whole process itself is the bear; it can take a year or two to implement major systems upgrades. The coding is just man hours and really isn't the scarce or high-value resource. Coders are the stone masons working on a complex building project. Their hours will be reduced as they use AI, you are correct (like using prefab materials), but the regulatory and control environment limits potential efficiency gains. Prefab makes construction cheaper, but it still takes years to get buildings approved, permitted, documented and completed.

I don’t see some VP of operations vibe coding corporate apps.

What they can do is create their own personal or team tools that won’t touch or be part of the “corporate” systems or official processes. This happens all the time now but will be better with AI.

stu's avatar

Bad arguments against AI do not make it good. Pointing out the bad arguments against AI is a rather weak endorsement of AI.

It is worth pointing out that the value of AI for production is very different from its value in education. Yes, education is a product. Yes, applied learning (such as for software development) has value. But general learning is a tricky thing. I'm aware of no studies proving that technology, beyond writing and maybe calculators, advances learning. The best evidence, which I have doubts about, suggests it retards learning more than it helps. Should we expect different from AI? Caution seems advisable; it is the conservative approach.

Mark Anderson's avatar

I recall similar feelings among my son's elementary school teachers about computers during the '80s.

Cinna the Poet's avatar

Anecdotally, my experience as a faculty member has been very different from what Pickard describes.

Most good universities provide free AI similar to the $20 plans to all faculty and students.

stu's avatar

Maybe that's a start but the bigger question is how it is incorporated, or not.

Cinna the Poet's avatar

For sure, just saying I don't think there's much stigma in most academic fields.

stu's avatar

Maybe that's true but I don't think making AI available is a good indicator of that.

Cinna the Poet's avatar

The claim from the article was that "to use AI marks you in the same way as voting Brexit, or insisting on the reality of biological sex."

Universities aren't spending money specifically to fund anti-immigrant organizing or trans-critical research, and if they did there'd be a huge outcry in university communities. There isn't even a modest outcry about them providing free AI. Ergo, the article is wrong about a stigma existing at least at anything like that level of intensity.

Cinna the Poet's avatar

Profs are not shy about complaining very loudly when the administration spends money on stuff the profs think is bad.

Max Marty's avatar

Some people will leverage AI to become the new renaissance men of our time. Others will use it to redefine the lows of “couch potato”. Both are true.

Kels Bells's avatar

Architects couldn’t resist technology changes because they didn’t have tenure. If administrators that hire new professors are part of the generative AI resistance (or AI in general), it will take a long time to weed them out.

Les Cargill's avatar

"Software engineers have always been the rate-limiting factor for every startup I’ve invested in."

Factor isolation in how long software takes is a fraught activity. There are all sorts of decay curves. My most recent trips through the software engineering literature primarily identify subject matter expertise as the limiting factor: how familiar is the team with this problem? We seem forced to do discovery and engineering simultaneously. Maybe code generators will help with that. It is not clear how.

I'm a strong advocate of "metaprogramming", using scripting languages to generate code from lists, but this ain't that. You still have to generate the lists, show some manner of coverage of the problem, and whatnot. It's just less fiddly and really only goes so far. It also takes a lot of getting used to. You also need good test vectors.
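For readers unfamiliar with the technique, list-driven metaprogramming can be sketched in a few lines of Python. Everything below (the `FIELDS` list, the `make_record_class` helper, the field names) is a hypothetical illustration of the general idea, not anything from this thread: the list is the single source of truth, and the repetitive code is generated from it.

```python
# Minimal sketch of list-driven code generation: a field list drives
# the generation of a record class with a validating __init__.
FIELDS = [("voltage", float), ("channel", int), ("label", str)]

def make_record_class(name, fields):
    """Build a record class whose type-checking __init__ is generated
    from the field list rather than written out by hand."""
    params = ", ".join(fname for fname, _ in fields)
    lines = [f"class {name}:", f"    def __init__(self, {params}):"]
    for fname, ftype in fields:
        lines.append(f"        if not isinstance({fname}, {ftype.__name__}):")
        lines.append(f"            raise TypeError('{fname} must be {ftype.__name__}')")
        lines.append(f"        self.{fname} = {fname}")
    source = "\n".join(lines)
    namespace = {}
    exec(source, namespace)  # compile the generated class definition
    return namespace[name]

Sample = make_record_class("Sample", FIELDS)
s = Sample(3.3, 2, "probe-A")
```

Note the caveats from the comment still apply: someone has to write `FIELDS`, argue it covers the problem, and supply good test vectors; the generator only removes the fiddly repetition.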

"The Fortune 500 barely gets any – they all flow to Silicon Valley."

There's a reason Mike Judge chose Pied Piper as the name for a firm in the "Silicon Valley" series. An unfair characterization of SiVa is as Pleasure Island from "Pinocchio," while the F500 has other serious constraints.

Maybe Claude et al will all help. I don't know.

Don Silva's avatar

Apparently there is a business case (at least as a non-profit, and not a particularly large one: https://sacra.com/c/gptzero/ ) for AI detection, as evidenced by firms like ZeroGPT (https://www.zerogpt.com/ ), which, possibly ironically, use AI to detect AI-generated text.

Interestingly, one of their studies found that 7% of the 100 most popular Substacks significantly relied on AI in more than 1 in 10 of their posts. Someone claimed that a top-100 Substack was entirely AI generated, but there apparently isn't any confirmation of this. However, it would not be surprising to see it happen.

Substack appears to generally be AI neutral with no policies on the matter. Substack content is scraped and indexed by web crawlers and used in training datasets for LLMs. Substack’s privacy settings allow AI training by default. Paywalls do not block AI scraping.

So, one might be sympathetic to the academics and their publishers. If the day has not already arrived when any research question can simply be entered into an LLM and a publication-quality paper returned, that day is not far off:

“Gatsbi is an AI tool specifically designed to generate complete research paper manuscripts or patent disclosure drafts of publishable quality. It automatically produces publication-ready scientific manuscripts with equations, tables, charts, diagrams, in-text citations, and references, tailored to the user's chosen topic. Its key features include idea generation, implementation into detailed research plans, and robust referencing using Google Scholar for up-to-date, accurate citations.

SciSpace (formerly Typeset.io) is another top contender, offering end-to-end support for academic publishing. It provides AI-powered tools for drafting manuscripts, peer review assistance, and journal recommendations, helping researchers streamline the entire publication process.

Paperguide excels in organizing and structuring research findings, making it ideal for early-stage projects and synthesizing literature for thesis or paper writing. It supports the full workflow from literature collection to high-quality academic writing."

Another occupation going the way of the elevator operator. Unfortunately, it is doubtful that the loot flowing from the federal government to subsidize the academic way of life will ever be staunched.