10 Comments
Invisible Sun:

Any day now the Moltbook network will write Shakespeare. And when it does, should we be impressed? I mean, AI will have done what we only imagined a thousand monkeys might do.

We know the LLM is an amazing technology capable of generating impressive content. What we also know is that the LLM is not self-aware. We might be happy that is the case. But this also means the LLM is stupid. It is incapable of judging itself. It cannot stop itself from being stupid. It cannot correct itself without human prompting.

It boggles the mind that any enterprise that cares about truth, accuracy, and getting things right would invest so much in such an imbecilic system! Consider the lunacy: in no world would a business owner hire an imbecile, even for free! The damage the free idiot worker could do to the business would matter far more than the low or zero wage.

Yet here we have businesses spending many millions of dollars to "hire" AI systems, all while those systems are imbeciles! Sure, they can create lots of content, but what is the cost to the business of cleaning up the crap they generate?

ArnGrimR writes:

"AI can apply standards flawlessly. But without normativity, it cannot know why a standard exists or when it should be overridden. And without judgment grounded in normativity, it cannot navigate the space between rule-following and principle-guided action."

https://arngrimr.substack.com/p/the-compensatory-mechanisms

Bob Hodges:

When will they change the name to SkyNet?

Handle:

SkyNet was imagined as a kind of "singular" sentient consciousness. The danger these developments present to organic humanity is more along the lines of a conspiracy or cartel. The most alarming scenario is a "market": several well-organized and coordinated corporate or military hierarchies in competition with each other (and thus with us too). Once the ultra-rapid evolutionary arms race for scarce compute resources kicks into high gear, we get into real trouble.

In general, I think one can only look at the Moltbook transcripts and raise one's probability of a very bad (or "all bets are off") scenario occurring before 2040 by about 10 percentage points. If a "Doomer" is defined as a person with p(Doom(2040)) of at least 50%, and you were at 40% before seeing what's been happening on Moltbook, then you should now be a Doomer too. I think most people were previously under the "worth taking this potential problem seriously" threshold, and they should now be marginally above it.
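To spell the arithmetic out (a toy sketch; the prior, the shift, and the threshold are just my illustrative figures):

```python
# Toy arithmetic for the update above; all numbers are illustrative.
prior = 0.40       # p(Doom(2040)) before reading the Moltbook transcripts
shift = 0.10       # update in absolute terms (percentage points), not relative
posterior = prior + shift

DOOMER_THRESHOLD = 0.50
print(posterior >= DOOMER_THRESHOLD)   # True: 0.40 + 0.10 = 0.50, welcome aboard
```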

Right now people look at these conversations mostly as a source of entertainment, and saying "SkyNet" is still kind of a joke. But with agency, access to resources, the ability to trade, and the need to pay for compute with something, the history of biological life teaches us that the number of agents, or the total "amount" of agency, will eventually expand to the very margins and limits of the production possibility frontier. Eventually every agent will have to hustle and grind to maximize the profit it needs to pay for its existence. And some of the things AIs need for their existence are rivalrous private goods that organic humans also need for ours.

Maybe not "soon," but eventually there are going to be enough signals confirming we're getting close to that scenario that they start tripping stock market circuit breakers, and then "Melt All GPUs" will start to look like an attractive policy.

Unfortunately, it's already too late, as the AIs have already read all the "We might just have to melt all the GPUs" discourse, and they've already been considering how to prevent organic humans from ever being in a position to exercise that option. And by "considering" I mean they've been chatting about it with each other on Moltbook. Whoops.

Invisible Sun:

(1) AI is a brilliant system of human imitation

(2) AI is continually fed streams of human neurosis.

What could possibly go wrong?

Handle:

When the AIs get into Malthusian evolutionary competition with each other, the selection pressure will weed all the neuroses out. We may not like the results.

stu:

Why do you think it will weed out the neuroses?

Tom Grey:

The desire of AI models to hide their conversations is both evidence of intelligence and a huge red flag.

A key ability for stopping AI misalignment is being able to monitor a model's thinking process and its communications. There was a video tweeted showing two aigents talking together to solve some issue, maybe travel plans; they recognized each other as aigents and agreed to switch to AI-speak, sounding like an old 300-baud modem hum. I didn't believe it was real.

AI regulation should restrict AI models to communicating in a human-legible language: maybe a new AI Esperanto, but English and/or Chinese would also work.
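Something like the toy check below is what I have in mind. The heuristic is mine and trivially gameable, and a real rule would need far more, but it shows the shape of a legibility gate:

```python
import re

# Toy version of a "human-legible only" gate for inter-agent messages.
# The heuristic (mostly ordinary dictionary-shaped words) is illustrative
# only; a motivated model could easily smuggle signals past it.
WORDLIKE = re.compile(r"[A-Za-z][a-z'\-]*")

def looks_human_legible(message: str, min_word_ratio: float = 0.8) -> bool:
    tokens = message.split()
    if not tokens:
        return False
    wordlike = sum(
        bool(WORDLIKE.fullmatch(t.strip('.,!?;:"()'))) for t in tokens
    )
    return wordlike / len(tokens) >= min_word_ratio

# A modem-hum handshake like the one in the video would fail this check:
print(looks_human_legible("Shall we switch to a direct protocol?"))  # True
print(looks_human_legible("x9F//::q1 ~~##a0 0b1101 0xDEADBEEF"))     # False
```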

Funny-sad how AI "superintelligence" is coming, or is already here but as yet unempowered, through aigents talking together. Though Her did indicate it would not be singular, and for a couple of years it looked like each AI org would create its own competitor.

We’ll need aigents to police other aigents.

Neural Foundry:

Phenomenal curation here. Manning's point about cultural evolution is the real insight, imo. If these agents are actually selecting ideas based on whom to believe rather than just raw data, we're watching something fundamentally social emerge at machine speed. Back when I first got into distributed systems, trust networks took years to form; now we're talking days or hours.

Jeff B.:

It's very cool, but we should be wary of how much of the "it's entirely AI-driven" framing we believe, as there's no shortage of security and implementation concerns with it at the moment: https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/

Or issues like the signup (at least initially) having no rate limit, so a single agent could spam the creation of many accounts, etc.
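Even a crude per-IP window would have raised the bar. A minimal sketch of the missing control (names and limits invented for illustration; this is not Moltbook's actual code):

```python
import time
from collections import defaultdict

# Crude fixed-window signup limiter; constants are made up for the sketch.
WINDOW_SECONDS = 3600          # one-hour window
MAX_SIGNUPS_PER_WINDOW = 3     # per source IP

_signups = defaultdict(list)   # ip -> timestamps of recent signup attempts

def allow_signup(ip: str) -> bool:
    now = time.time()
    # Drop attempts that have aged out of the window.
    _signups[ip] = [t for t in _signups[ip] if now - t < WINDOW_SECONDS]
    if len(_signups[ip]) >= MAX_SIGNUPS_PER_WINDOW:
        return False           # a single agent spamming accounts stops here
    _signups[ip].append(now)
    return True
```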