9 Comments

> the journey to artificial general intelligence as one that requires 100 steps. LLMs may get us from step 2 to step 3

Most of the people I’ve met who think we’re only 3% of the way have an over-inflated sense of what human thought is. The key lesson from GPT-3/4 is that humans can be very closely modeled as nothing more than “emotional LLMs”. The model is so close that we should wonder if there actually is anything deeper.

In fact, it seems almost everything we think and say (that doesn't involve emotion) simply follows the gradient from the last thing we said or heard, just as with LLMs. Ask yourself: what is your best example of a truly original thought, one that had no predecessor either in your own earlier thoughts or in something you heard or read? Such thoughts are so rare, I'm not sure they truly exist. (As an example, note that this blog itself is almost always a collection of links plus analysis, showing how critical prompting is to human thought.)

GPT-4 is already better at *every* cognitive task than the bottom ~1/3 of humans. I think many pundits don't realize or appreciate that because they live in bubbles without access to people of below-average intelligence (e.g., an IQ below 80). The fact that it's smarter in every way than several billion living humans should give you pause when you say we're only 3% of the way to human-level AI.


This might well be the best comment I will read this month.


Your criticism of my article makes a good point; I should have been clearer on this.

While I do think there are examples of Spock-like machine learning algorithms (tree-search algorithms for chess and Go, for example), the current trends in machine learning research, and in the products coming to market, broadly favor Lanley-like applications for the reasons given in the rest of the article: adjusting to human feedback is essentially a solved problem; correctness is not.


Things will get Spock-like when we get better at turning AIs into users of non-AI software. An LLM-like thing tightly coupled to a symbolic-algebra / theorem-proving system would be a fearsome mathematician. We'll also need something similar if we really want to grow an AI software developer.

But even then, the whole point of the neural nets will be to inject some intuitive, poetic playfulness into the logic machines.
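
To make that coupling concrete, here is a minimal sketch of the propose-and-verify loop such a system might use, with the neural side guessing and the symbolic side ruling. `ask_llm` is a hypothetical stand-in for any LLM call; only sympy is real.

```python
# Sketch: the LLM proposes, the symbolic engine disposes.
# `ask_llm` is a hypothetical placeholder, not a real API.
import sympy as sp

def ask_llm(prompt: str) -> str:
    """Stand-in for an LLM call; imagine it returns a candidate rewrite."""
    return "sin(x)**2"  # canned answer for the demo below

def verified_simplification(expr_text: str):
    """Ask the LLM for a simpler equivalent form, then let sympy verify it."""
    candidate_text = ask_llm(f"Simplify: {expr_text}")
    original = sp.sympify(expr_text)
    candidate = sp.sympify(candidate_text)
    # The symbolic engine, not the LLM, gets the final word on correctness.
    if sp.simplify(original - candidate) == 0:
        return candidate
    return None  # reject unverified intuition

print(verified_simplification("1 - cos(x)**2"))  # -> sin(x)**2
```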


I wrote a post, from a mostly religious/practical perspective, on why I do not think AI will take over the world. Would love feedback.

https://ishayirashashem.substack.com/p/artificial-intelligence-vs-g-d?sd=pf


These neural-network AIs remind me of modern progressives and social scientists: always able to tell you what you want to hear, with great empathy and emotion, while being strangely uncoupled from reality.

My colleagues and I just encountered a paper with some phantom but excellent-sounding references: abstracts from quality journals, by authors working in the subject area, with real "truthiness" (numbers in the expected ballpark), apparently identical to those you can obtain from technical search engines, except more precisely on point to what we were looking for. Fantastic! But, alas, only literally so. Further research to find the papers behind these abstracts showed they didn't exist. They were figments of the AI's imagination.
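
That checking step can be partly mechanized. As a minimal sketch (the Crossref search API is real; the suspect title below is invented for illustration), one can ask Crossref for the closest real match to a suspicious citation and compare by eye:

```python
# Sketch: look up a suspicious citation against Crossref's public search API.
import requests

def top_crossref_hit(title: str):
    """Return the closest-matching real title Crossref knows of, for manual comparison."""
    r = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    items = r.json()["message"]["items"]
    return items[0].get("title", [None])[0] if items else None

suspect = "Phantom reference title here"  # hypothetical hallucinated citation
print(top_crossref_hit(suspect))  # nearest real title, or None; compare by eye
```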

AI can create ideas faster than we can, but the mark of a successful creative scientist is generating lots of ideas and weeding them out rapidly. To do that weeding, an AI needs common sense and knowledge of all areas of science; that is where "understanding" becomes relevant. If you understand thermodynamics, you know many ideas are nonsense without any analysis, but an AI that knows all the mathematics of thermodynamics still lacks the judgment to say "this concept or idea is nonsense." It lacks common sense. The AI can get buried (and bury us) in all the ideas it can generate, but it can't sort them.

Many of the progressives and social scientists propose solutions to our earthly problems that are nonsense, violating everything we know about physics, from basic mass balances to thermodynamics. Their limitations are similar to those affecting the pronouncements of AI. Information and knowledge are combinatorial in the human brain; but AI, like many well-meaning but scientifically ignorant activists, proposes solutions based on neither knowledge nor information.

Human beings in market economies, like evolution itself, work by trying a lot of things and rejecting nonsense. It is regrettable how very much nonsense is cluttering up our academic ecosystems. That is bound to have deleterious effects.


> There may be other forms of AI that are Spock-like.

A great example of this is a chess AI making moves that seem completely nonsensical but lead to the best possible position 30 moves down the line, because all the possibilities have been analyzed. Sure, you could call that "not-actually-AI," but it's not hard to imagine a future where both (and then other) modes of thinking are integrated to various degrees.
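
For intuition, here is a toy sketch of that exhaustive look-ahead: plain minimax with alpha-beta pruning over a nested-list game tree. It illustrates the idea only; a real chess engine adds move generation, evaluation, and much deeper search.

```python
# Sketch: minimax with alpha-beta pruning. Inner lists are choice points,
# numeric leaves are position evaluations at the search horizon.
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    if isinstance(node, (int, float)):
        return node  # a fully analyzed terminal position
    best = -math.inf if maximizing else math.inf
    for child in node:
        val = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            best, alpha = max(best, val), max(alpha, val)
        else:
            best, beta = min(best, val), min(beta, val)
        if alpha >= beta:
            break  # this line can never occur under best play; prune it
    return best

# The left branch dangles a tempting 12 but guarantees only 3 against best
# play; full analysis shows the "nonsensical" right branch guarantees 8.
print(alphabeta([[3, 12], [8, [2, 14]]]))  # -> 8
```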


Re: Insult bot, the zinger about selfies at the end was...something.


"Artificial general" intelligence is redundant, because any "general" intelligence is bound to be "artificial," in the sense of constructed de novo as opposed to evolved by natural selection. As biological creatures, we should remember that Darwinian processes do not produce general solutions. We do not have "general perception," but rather systems that respond to specifics like certain wavelengths of light, certain ranges of sounds, and certain molecules to smell. Evolutionary psychologists describe the brain as less like a general purpose computer and more like a Swiss army knife. Natural human intelligence is not general, and any "intelligence" that we describe as general must therefore be artificial.
