The anthropologist Rob Boyd once began a seminar with this question: which species occupies the broadest range of habitats in the world? I was thinking of some insect or bacterium, but the answer is obvious: it's us. By a long way. He then noted that a young human child without access to adults would not be able to survive for more than a few days, if that. No matter how individually capable, mentally or physically. What we have managed to achieve as a species is entirely through our capacity to transmit and build on prior knowledge, which is, as you say, a form of collective intelligence. I was just reminded of this incident by your post.

The need for humans to communicate prior knowledge, and the way individual intelligences are combined collectively, is a really good point.

Depending on where one draws the line on "species", I suspect the hardy tardigrades (which I think are close enough to lump together) greatly surpass humanity in this respect, or at least give us a run for our money. Practically indestructible relative to us, they can be found in hot springs, ocean depths, deserts, and on Himalayan peaks, and they even survive exposure to radiation or outer space. They split off from nematodes over half a billion years ago and don't seem to have changed much in the past 100 million years, which, counting in generations, is a long time for something with a lifespan of 3-4 months. They survived all "Big Five" (plus Capitanian) mass extinction events. No need for culture or rearing: they hatch from their eggs in just two weeks and are ready to go, because they've already got all their adult cells; they grow mainly by making those cells bigger and molting exoskeletons as necessary. Depending on where one draws the line on nervous tissue belonging to a "brain", it's arguable they have even fewer than the 302 brain neurons of the extensively studied C. elegans, perhaps even a third fewer.

I'd say calling 1,000+ slightly different species one species is on the slightly untrue side of the line, especially in a picky comment about breadth of habitat. Though the fact that C. elegans has only 302 brain neurons was interesting to me.

I was pretty sure Eskimo igloos don't suffer from cockroaches.

Maybe he was thinking of mammals - the answer has got to be bacteria.

Sorry, I misread too.

Your main point seems accurate enough but, similar to Handle, I'm also going to pick at the details. I'm not sure what you mean by "young human child", but this seems to contradict your statement:

https://www.nbcnews.com/news/world/how-4-children-survived-40-days-jungle-plane-crash-amazon-colombia-rcna88791

Stu, Handle is not picking at details; he's trying to suggest that a phylum with over a thousand species can be defined as a species. I'm not sure why, maybe to show how clever and knowledgeable he is. Totally missing the point. The story you posted has a 13-year-old, old enough to have absorbed a lot of culture, which is Boyd's point. I really don't see the point of these comments, but hopefully you and he got something out of it.

I think I properly noted the limits of my example, but maybe this one is better:

https://en.wikipedia.org/wiki/Dina_Sanichar

To the best of my knowledge, tardigrades need a liquid water environment to thrive, so arguably they fail on that point, but surely there's a species within the phylum that lives in a greater range of habitats than humans.

You seem to be defining intelligence not as something individual humans possess more or less of, but as something groups of humans have. I think you have applied the metaphor of group intelligence so hard that you have ceased to apply it to its original subject, leaving a gap where once we had a word for that capacity of humans to take a set of information and work out solutions to novel problems.

Whatever Arnold is defining, he should use a different term or phrase to label it. "Capacity for accurate epistemic updating" just doesn't map to how people have used the word "intelligence" (or equivalents in other languages) for thousands of years, or how they understand it in common usage today. You can add extra and special "term of art" technical meanings to the dictionary's list of possible definitions of well-established words, but it's always better and less confusing to just invent new words or use combinations of old ones when you want to express concepts that are distinct from normal use and understanding.

People have an intuitive sense that there is something like "mental talent" / "cleverness" / "brightness" that differs greatly between people and is a capacity separable from one's accumulated knowledge base and experience.

The folk-functional test for intelligence is the ability to do mental things that we think only tend to be doable by people brighter than some threshold. That is, some people are not bright enough to ever be able to do them well no matter how much you try to teach them or how hard they try to get better, just like some people will never be able to run a marathon in under 3 hours.

Without getting into Moravec's paradox territory, when normal people say "artificial intelligence", they are just talking about software being able to do those kinds of things in a way that is hard to distinguish from (or is at least as good as) the things that, before that software capability existed, we thought only smart and talented people were able to do.

This seems the most important issue - communicating to others what we mean when we're talking about "intelligence".

Usually what we think smart folk can do in their heads.

Collective intelligence, cultural intelligence, spatial intelligence (/ability), emotional intelligence -- lots of different possible modifiers. "Intellectual disability" is the term for those very low in human intelligence.

Agree with the point that individual capability should be distinguished from collective capability. At the individual level and in a very high-level theoretical sense, I think of individual intelligence as the capability to 'find/understand patterns' - leading to better predictions of what is likely to happen next (at whatever level - atomic/spatial/cultural/emotional/etc.).

What might it mean to encounter an intelligence that is twice as powerful as the highest human intelligence? I'd posit that such an intelligence would detect patterns in the world which no human could even pick up on, leading to much better predictions of (and power over) the future.
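
To make "finding patterns leading to better predictions" concrete, here is a minimal toy sketch in Python (my own illustration, not anyone's model of cognition; the sequence and names are invented): it learns which symbol tends to follow which, then predicts the most likely successor.

```python
from collections import Counter, defaultdict

def bigram_predictor(sequence):
    """Learn which symbol tends to follow which, then predict the most
    likely successor -- a toy stand-in for 'pattern-finding'."""
    follows = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        follows[a][b] += 1  # count observed transitions
    def predict(symbol):
        return follows[symbol].most_common(1)[0][0] if follows[symbol] else None
    return predict

predict = bigram_predictor("abcabcabcab")
print(predict("a"))  # 'b' -- the learned regularity
```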

"I think you have applied the metaphor of group intelligence so hard that you have ceased to apply it to its original subject,"

The original subject, left on its own as a baby, wouldn't even have survived a few days, much less developed any intelligence.

It's the process of nurturing it, interacting with it, and teaching it that creates the intelligence.

The very intelligence that we recognize in someone is ultimately cultural and historical, not merely their brain's inherent capability.

Obviously, regardless of any "inherent capability", without access to the collective cooperative intelligence they would have to reinvent the wheel (literally: the wheel, and also everything from fire all the way to geometry, algebra, chemistry, etc.) from scratch. Such a person, emerging from an isolated upbringing, would appear primitive and dumb to us, even if nominally their inherent "brain capacity" was that of someone with a 150 IQ.

So, if anything, we overestimate the importance and reach of individual intelligence. It's close to zero without the cooperative collective intelligence.

Per the Cambridge English Dictionary, intelligence is the ability to learn, understand, and make judgments or have opinions that are based on reason. (Of course, there is much more to life than just reason.) Arnold seems to be talking more about wisdom, which the CED defines as the ability to use your knowledge and experience to make good decisions and judgments. I think that in our culture we often fail to make this distinction. We hear a lot about people being smart, but not much about their being wise. I think the latter quality is much less common. We all know people who are quite sharp at some things, perhaps their profession, but lack intellectual curiosity and seem shallow. One thing is clear from the recent Gemini matter: Google's version of AI is completely lacking in wisdom. Maybe somebody should start thinking about AW, i.e., artificial wisdom.

Notice the difference in the definitions: intelligence makes for judgments based on reason; wisdom refers to "good decisions and judgments." That would imply some set of values by which they could be judged good. That is a problem for the hubristic technocrats of Silicon Valley, to judge by their unthinking commitment to woke pieties as reflected in their AI product. That is a commitment to pre-determined attitudes in judging whatever emerges in human relations, mechanically applied, regardless of the circumstances. It is shallow and rote to the point of stupidity. It means they lack the capacity for moral reasoning, and therefore they are hindered in reaching what might be considered good decisions under any reasonable moral framework.

Gregory Bateson had an interesting theory that the separation that Western philosophy assumes between the individual’s mind and the natural and social environment isn’t really there.

Imagine if a powerful AI were given the means to observe the world and draw its own conclusions from scratch rather than being trained on a huge corpus of human thought.

We would get some new insights, but we probably would not like the AI’s assessment of the importance of the human race.

Re: "Institutions help to guide the evolutionary process. Free speech, open inquiry, and the scientific method for gathering and debating evidence are examples of such institutions. [... .] [Intelligence] is the process by which good institutions guide our beliefs toward truth and away from error. The institutions [...] include social norms and prestige hierarchies."

The trick is to have institutions that favor truth-seeking and discovery (what Arnold calls "intelligence") *and* social trust (cooperation).

Science and markets are standard examples of institutions that integrate — and tend to increase — discovery and trust.

Arnold

Careful and interesting reasoning . . . as always.

One observation.

‘Science’ as a process also needs to be carefully defined.

Seems to me that the ‘scientific method’ isn’t really science, but natural philosophy.

Newton called his book “Mathematical Principles of Natural Philosophy”.

Now that’s accurate. Precise. Insightful.

Science just means knowledge.

The people who contributed to modern ‘knowledge’ understood they were doing philosophy.

In fact, mathematics can be understood as just interesting mental conclusions, without real foundation.

See Morris Kline’s “Mathematics: The Loss of Certainty”.

Think Gödel.

Pascal.

In that sense, mathematics isn’t the study of physical nature; it’s revealing the ability of human reason to invent, create.

Like music: Bach, Mozart, Beethoven created amazing patterns. Is that ‘science’?

Was their music found or created?

Galileo, Kepler, Copernicus, Pascal, Newton, Faraday, Maxwell, Wheeler, etc., drew their ‘scientific method’ from their devout biblical faith.

Their absolute determination to find and present the ‘truth’ that the Creator deserved was their ‘scientific method’.

Integrity, love of truth, deep-rooted humility, and a keen sense of modesty before God were essential.

This was their ‘scientific method’.

Really justifies the ancient claim - “theology is the queen of the sciences”.

Thanks

Clay

Yes, this is why we won't get "real" AI, but we'll quickly approach a Simulated Intelligence that can pass any equivalent "Turing Test" / "AI test" -- there will be no metrics or available process to reliably determine whether the words you read on a screen or in a book, or the words you hear, are from a real person or a simulated intelligence. We're not there yet, but likely will be within the next 5 years for text and audible voice, a bit longer for a full deepfake of an actual person (like Elvis. Or Biden or Trump), and perhaps a bit longer for a composite fake person.

It will have been programmed to manipulate you in some way -- for the porn sexbots, to satisfy your sex fantasies and help you have orgasms. The cases where the customers want to be manipulated will likely be most popular first, including getting excellent rationalizations for whatever beliefs they already have. Maybe a big market. Rational arguments, not truth.

James Damore DID get fired from Google for suggesting that something which is probably true would make a proposed DEI-ish program work less well.

So we already knew Google is not so much interested in the truth as in manipulation -- and addictive behavior, as Ted Gioia wrote about.

OTOH - maybe simulated truth is also a potential result, and it would likely be better than deliberately false information (like the previously respected FBI so often not telling the truth about Trump).

https://futurism.com/the-byte/lou-reed-widow-laurie-anderson-ai

Laurie Anderson talks to a simulated Lou Reed, who often babbles but is often insightful.

Simulated intelligence with a personality might arrive sooner than AGI.

This reminds me a lot of Michael Polanyi on scientific consensus. Part of his thesis is that the scientific community generates a 'principle of mutual control' that watches the quality and value of the work being done by scientists, even though each individual scientist can himself/herself evaluate only a small fraction of the body of knowledge. However, the whole can be trusted because there are chains and neighborhoods of overlapping networks of competencies that support the whole. So one scientist can't evaluate the whole, but he can evaluate the work of others near him, who can evaluate the work of others slightly farther away, and so on, from astronomy to biology.
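
A toy sketch of that structure (my own illustration in Python, not Polanyi's formalism; the field list and 'reach' are invented assumptions): every field gets checked by a neighboring specialist, even though no one can check them all.

```python
# Overlapping neighborhoods of competence chain the whole together.
fields = ["astronomy", "astrophysics", "chemistry", "biochemistry", "biology"]
reach = 1  # assume each specialist can competently judge adjacent fields

# who can evaluate whom: neighbors within 'reach', excluding oneself
evaluators = {
    f: [g for j, g in enumerate(fields) if 0 < abs(j - i) <= reach]
    for i, f in enumerate(fields)
}

# Every field is watched by at least one neighbor...
assert all(evaluators[f] for f in fields)
# ...yet no single field's specialists can evaluate all the others.
assert all(len(evaluators[f]) < len(fields) - 1 for f in fields)
print(evaluators)
```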

On terminology: in the organization I consult for, the term AI is not used. They use the term Machine Learning.

Many animals are considered intelligent, such as dogs. Dogs can be trained to detect drugs or explosives. One can imagine artificial intelligence trained to monitor sensor devices doing similar activities. Controlling a vehicle traveling along roadways in normal conditions is also a task, analogous to what humans do, that can be considered intelligent, although the AI driver still lacks the ability to deal with various abnormal situations, such as instructions from emergency responders.
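
To make the 'normal conditions' point concrete, here is a minimal sketch (Python; the function name, window size, and threshold are invented for illustration) of the kind of statistical monitor such a system might run. It flags routine anomalies well, but nothing in it can interpret a genuinely novel situation, such as a responder waving traffic through a red light.

```python
from statistics import mean, stdev

def monitor(readings, window=20, threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline."""
    alerts = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # a reading far outside the recent distribution is 'abnormal'
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append((i, readings[i]))
    return alerts

# steady sensor with one spike -> [(25, 25.0)]
print(monitor([10.0] * 19 + [10.1] + [10.0] * 5 + [25.0] + [10.0] * 5))
```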

"Intelligence, as I define it, will not come from larger datasets and more computing power. Intelligence is the process of improving knowledge. It requires treating beliefs as contestable. We need to be open to learning."

"The process of improving knowledge" has at least two aspects. One is moving the knowledge of humanity forward. The other is taking existing information and improving one's own beliefs and making better individual decisions. Maybe LLMs can't do the first but they are pretty good at taking existing knowledge and making better decisions.

I was excited about LLMs for a while, but I now realize that their worth will be in automating jobs that are mostly not really productive or useful. The benefit might be to force the humans doing those jobs today to find other, more useful, creative things to do, but that, of course, is probably a vain hope: they will just create new categories of make-work for themselves. I will get excited when an "artificial intelligence" solves a math problem that humans haven't already solved.

This is a pretty good test: solving known unknowns. Like this Dedekind math problem:

https://phys.org/news/2023-06-ninth-dedekind-scientists-long-known-problem.html
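
Dedekind numbers count the monotone Boolean functions on n variables, and the brute-force version is easy to state, which makes the difficulty vivid: the search space grows as 2^(2^n). A minimal sketch (Python; an illustration only, nothing like the methods actually used for the ninth number):

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions on n variables by brute force.
    Feasible only for tiny n, since there are 2**(2**n) candidates."""
    points = list(product((0, 1), repeat=n))  # all 2**n inputs
    # ordered pairs (i, j) where input i <= input j coordinatewise
    le = [(i, j) for i, a in enumerate(points)
                 for j, b in enumerate(points)
                 if all(x <= y for x, y in zip(a, b))]
    count = 0
    for f in product((0, 1), repeat=len(points)):  # every truth table
        if all(f[i] <= f[j] for i, j in le):       # monotonicity check
            count += 1
    return count

print([dedekind(n) for n in range(5)])  # [2, 3, 6, 20, 168]
```

Beyond n = 4 this approach is hopeless, which is part of why the ninth number (n = 9) stood open for decades.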

More importantly for me, for a faster sort -- kind of unbelievable:

https://mathscholar.org/2023/06/deepmind-program-discovers-new-sorting-algorithms/
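
The discovered routines there are tiny sorting networks: fixed sequences of branch-free min/max operations. A minimal sketch of the style (Python; my illustration only; AlphaDev's actual output was optimized low-level assembly, not this):

```python
def sort3(a, b, c):
    # A comparison network: a fixed sequence of min/max ops, no branches,
    # which is the kind of pattern that compiles to branch-free machine code.
    a, b = min(a, b), max(a, b)   # order the first pair
    b, c = min(b, c), max(b, c)   # push the largest into c
    a, b = min(a, b), max(a, b)   # order the remaining pair
    return a, b, c

print(sort3(3, 1, 2))  # (1, 2, 3)
```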

In both cases, a lot of scientists were working with the AI, and for the next few years that will likely be the case. Known-unknown math problems will be solved; but especially, software will be re-written in machine code (harder to read than assembly!) and optimized.

This is also where the AI computer virus might well insert itself -- the AI alarmists should be using AI to attack the code of other AIs so as to harden it.

“Google’s version of Claudine Gay” was great lol. I personally don’t understand the backlash — opponents of AI routinely predict that it will exacerbate racial, gender, etc. biases in digital media, so I don’t understand why people don’t see Gemini as a refreshing *counterexample* to that talking point. Obviously black samurai or brown popes have never existed and any person using Gemini for historical research would need to understand that, but for a tool that is supposed to help us think outside of our natural human confines and preconceptions, I don’t see the harm in Gemini creating fantastically, unrealistically diverse depictions of humanity; in fact, maybe AI like Gemini will help the real, non-AI world look a bit more like its generated content that people today seem so outraged by.

Jack Krawczyk is Google's Claudine Gay. Gemini is a generative AI project commissioned by the Harvard Department of Sociology and required to conform its output to their views and goals. Anyone who decides to pay Google a dime for Gemini's output is an active participant in this disgustingly manipulative effort, or a fool.

Are you sure Gemini was commissioned by Harvard? I see no proof of that online. Also, I think Gemini is free to use. I'm not sure how Gemini's output represents a 'disgustingly manipulative effort' given that Google almost immediately responded to rightist backlash in an almost submissive and conciliatory manner. It's unfortunate that *fake* pictures of black samurai or brown popes or whichever Gemini-generated image you're so upset about can evoke such emotions in people. Why not just use one of the many other generative AI models that produces only images of white people lmao.

I'd say it's 50/50 that they just wanted the publicity. I remember however long ago - the first voice assistant came along, and tech-industry couple-friends of ours were early adopters. She said that all her boys wanted to do for the next 24 hours was to spew filthy questions at it and collapse in laughter.

These were *very* bright boys who generally made good use of their time and went on to bright futures. I am sure they were not all that different from the dudes who created this thing.

So it boggles the mind that they didn't play around with their own model - at least to the extent of "Show me some Nazis".

It's a joke

I doubt you meant your last sentence as a joke.

Now ask yourself: what parts of Gemini's architecture can't you tell are true or not?

So far, it appears to me that at this stage the so-called AI is a collection of knowledge minus understanding. Hence the many hallucinations. As far as I can follow the LLM business of predicting "tokens" or words, that is all that is going on inside the black box: word predictions based on algorithms. Intelligence, or actual cognitive processing, is not yet really taking place. Perhaps we should ask if there will ever be a day when there is artificial "wisdom", involving a moral dimension, considering not only what is true or effective but also what is right and good. We humans are so accomplished in wisdom, aren't we?
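
For what it's worth, the token-prediction loop really is that simple to state; the sophistication is all in how the scores are computed. A minimal sketch (Python; the vocabulary, scores, and prompt are invented toy values, and a real LLM computes its logits with billions of learned parameters):

```python
import math
import random

def next_token(vocab, logits, temperature=1.0):
    """One step of the loop: turn raw scores into probabilities
    (softmax) and sample the next token from the distribution."""
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(vocab, weights=probs, k=1)[0]

# hypothetical scores for candidate continuations of "The sky is"
print(next_token(["blue", "green", "falling"], [3.0, 0.5, 0.1]))
```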

I read somewhere the other day, the opinion of some notable, something to the effect that it is odd to seize on "hallucinations" when it is all necessarily a hallucination.

And, if true, then that notable’s opinion is also a hallucination. Were that notable kicked in the shin, they would cry out and likely protest the assault. Whereupon the answer: “You’re just imagining things.”

Well said.

Within my first five minutes with GPT, it was obvious it had a left-leaning bias.
