31 Comments

I will play Devil's Advocate. Arnold wrote:

"A schizophrenic and I can both come up with novel word sequences. The schizophrenic is better at generating them. I am better at filtering out the ones that donโ€™t provide meaning."

What is it in you that filters out nonsense from meaning? Would an AI randomly combine words, come up with Lewis Carroll's "Jabberwocky" or Wallace Stevens's "The Snow Man", and not reject both as nonsense? Right now, it is humans that are doing the real filtering of what AI produces, and to my way of thinking, humans will still be doing the filtering 20 years from now by writing the rules for what the AI outputs for consumption.


Yes. And is it not a subjective submission that AI is replicating actual creative thought?

Isn't objective thought debatable... And if so, doesn't that make it subjective if it is not conclusive... and then does something have to have a consensus to be conclusive... And doesn't consensus deny the minority of people that are not already effectively robots in their inability to filter and dissect and reorganise existing patterns of thought?


Ethan Mollick has a new article on AI creativity related to your essay. Together, I find what you both say to be fascinating.


The comment about creativity strikes me as naive, for two reasons: (1) It does not match history. What mashup of previously existing ideas is represented by special relativity, Joule's discovery of the mechanical equivalent of heat, the valence-bond theory of the chemical bond, integral calculus, or the Fourier transform? History is strewn with ideas that came absolutely from the blue, that cannot possibly be described as some mash-up or recombination of existing ideas. (I mean, *maybe* it has some validity in the area of punditry and fiction writing, where I am kind of inclined to agree there isn't much new under the Sun since Cato the Elder thundered in the Senate about Carthage, but it would be unwise to generalize from this very limited area of human invention to all the areas where we have had, and continue to have, genuinely new ideas.)

(2) If sense could be constructed merely by filtering nonsense, then LLMs would *already* work that way. It's not hard to create a Perl script that generates endless numbers of random sentences (even grammatical sentences). Surely the easiest way to construct an AI would therefore be to hook such a program to a "filter" that "merely" discards all the nonsense sentences and keeps the good ones? Why wasn't that done in, say, 1975?
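A toy sketch of that generate-then-filter scheme (in Python rather than Perl; the grammar, the sentences, and the whitelist "filter" are all invented for illustration) shows where it immediately runs aground: the generator is trivial, but the filter has to somehow already know what counts as sense:

```python
import random

# Toy grammar: every output is grammatical, almost none are meaningful.
SUBJECTS = ["the cat", "a theorem", "my toaster", "the senator"]
VERBS = ["eats", "proves", "filters", "elects"]
OBJECTS = ["the moon", "a sandwich", "calculus", "the committee"]

def random_sentence(rng: random.Random) -> str:
    """Generate one grammatical (but usually meaningless) sentence."""
    return f"{rng.choice(SUBJECTS)} {rng.choice(VERBS)} {rng.choice(OBJECTS)}."

def is_sense(sentence: str) -> bool:
    """Stand-in 'perfect filter': here just a hypothetical whitelist.
    Defining this predicate in general is the unsolved hard part."""
    return sentence in {"the cat eats a sandwich."}

rng = random.Random(0)
candidates = [random_sentence(rng) for _ in range(1000)]
kept = [s for s in candidates if is_sense(s)]
# Even in this tiny 64-sentence space, almost everything is discarded.
print(len(candidates), "generated,", len(kept), "kept")
```

Note that all the intelligence has been smuggled into `is_sense`; the generator contributes nothing but noise.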

But that of course is *not* how LLMs work, and such an approach would be sterile -- because the space of nonsense is *so* much larger, in practice effectively infinitely larger, that even with a guaranteed perfect filter in place (i.e. glossing over the considerable challenge in even defining, let alone building, a filter that reliably sorts sense from nonsense), you simply get lost in the space of nonsense without ever stumbling upon sense. The old aphorism about hoping that 10 million monkeys randomly typing on typewriters would reproduce Shakespeare, where "all" you need is a function to recognize it when they do, comes to mind. There have been similar arguments about protein folding and, more speculatively, about evolution, but this model in general is naive and doesn't work in practice. I suggest that's *why* LLMs had to approach the problem from entirely the other direction: *begin* with an enormous corpus of sense, generated by human beings, attempt to detect patterns in it, and then construct other sentences using those discovered patterns. That bypasses the otherwise insurmountable problem of wandering around in an infinite sea of meaninglessness and hoping to stumble on a tiny islet of meaning by accident.
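The monkeys-and-typewriters point can be made with back-of-the-envelope arithmetic (the figures below are rough, standard estimates, not from the comment itself): even a single line of text has more possible spellings than there are atoms in the observable universe, so no physical amount of generate-and-test can cover the space:

```python
# Back-of-the-envelope: the space of random text vs. any physical search budget.
alphabet = 27                              # a-z plus space
line_length = 80                           # one 80-character line of text
possible_lines = alphabet ** line_length   # roughly 10^114 candidate strings

atoms_in_universe = 10 ** 80               # rough standard estimate

# Even one line's worth of random typing dwarfs the universe's atom count.
print(possible_lines > atoms_in_universe)
```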

In fact, I suggest we don't know why and how humans are creative. The recombination idea seems naive to me, not to mention outdated -- it has not succeeded in any area in which it has been applied. So I'm very doubtful we will learn anything about creativity by just making an even larger LLM. We'll need to wait for some genuine creative insight into the problem; it can't be brute-forced.


> What mashup of previously existing ideas is represented by special relativity, Joule's discovery of the mechanical equivalent of heat, the valence-bond theory of the chemical bond, integral calculus, or the Fourier transform? History is strewn with ideas that came absolutely from the blue, that cannot possibly be described as some mash-up or recombination of existing ideas.

The history of science is *not* strewn with such ideas. It is only popular accounts in the ancient and successful mold of Vitruvius' sharing dubious anecdotes about Archimedes, or Voltaire about Newton and the apple, which create this mistaken and misleading impression. This distortion of history may be done out of honest appreciation of genius, honest ignorance, disinclination or difficulty of telling the real messy story of important discoveries to ignorant people, difficulty of communicating old ideas to people weaned on new ones (e.g. reading Darwin's Origin of Species it is hard for us who grew up knowing about DNA etc. to understand how Darwin thought about inheritance), or from a desire to elevate contemporaries at the expense of older predecessors, but I can't see how it is ever good whatever the motive.


I wrote several weeks ago on either Kling's substack or possibly the Zvi's that what would truly impress me as a creative act of an LLM/AI at a human level would be for it to solve a previously unsolved math problem, like one of the Clay Mathematics Institute's Millennium Prize problems.


You seem to have 'Liked' a very lot of my comments on various 'Stacks over recent weeks....and thanks for that. So could I persuade you to add my: https://grahamcunningham.substack.com/ to your Substack free reads...as No 17?


What is interesting, and somewhat concerning, considering recent discoveries of government interference with free, uncensored speech, is whether AI-generated responses will be based on broad, truly collective internet information and input vs. "fact-checked, pre-approved and decidedly one-sided, narrative-based" output.


Just like with the onset of computers; garbage in, garbage out.

Aug 13, 2023 · edited Aug 13, 2023

Essentially it is automating or assisting work. People over the centuries have always used tools to get more work done.

I'm kinda waiting till we get domestic robots combined with AI. Think of what one could do: clean up my house or a hospital, offer some chat assistance.

And as for people with dementia, it will always be eager to talk and help them.


Yes. Having been there, it is hard to be a friend to a friend that doesn't remember you and can't consider your sacrifice. But then that's kind of like giving a mobile phone to a child instead of being a caring parent....


"Current versions of AI represent words or other concepts (such as musical sounds) as multidimensional vectors. All it takes for an AI to be creative is for it to try to add vectors that have never been added before.

Adding two arbitrary vectors is very unlikely to result in a new concept that is at all useful. For an AI to be creative, it will have to be able to extract from many possible vector combinations those rare syntheses that are worth retaining."

Spot on. Many smart people, e.g. David Deutsch types among many others, don't understand this.
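A rough numerical illustration of the quoted point (everything here is a made-up toy: random 16-dimensional vectors standing in for concept embeddings, and a cosine-similarity threshold standing in for the "worth retaining" filter): sums of arbitrary vectors almost never land near a useful direction, so the filtering step carries nearly all the burden:

```python
import math
import random

DIM = 16                      # real embeddings use hundreds of dimensions
rng = random.Random(1)

def rand_vec():
    """A random direction standing in for a concept vector."""
    return [rng.gauss(0, 1) for _ in range(DIM)]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical directions the filter regards as "useful concepts".
known = [rand_vec() for _ in range(10)]

def worth_retaining(vec, threshold=0.8):
    """Crude filter: keep a synthesis only if it points near a known-useful direction."""
    return any(cosine(vec, k) > threshold for k in known)

# Try many never-before-added vector pairs; count how many pass the filter.
trials = 1000
passed = sum(
    1 for _ in range(trials)
    if worth_retaining(add(rand_vec(), rand_vec()))
)
print(passed, "of", trials, "random vector sums passed the filter")
```

Even in 16 dimensions the pass rate is tiny; in realistic high-dimensional spaces it collapses toward zero, which is the commenter's point about arbitrary sums.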

Aug 14, 2023 · edited Aug 14, 2023

"AIโ€™s can never..." and "...what AIโ€™s will and..." etc.

The apostrophe-s denotes possession, not plural. As in

This hat belongs to Andrew; it is Andrew's hat. The apostrophe-s on the end of Andrew says the hat belongs to Andrew.

I know AI is an acronym, but a writer can indicate more than one of it by just adding "s" without the apostrophe. More than one Artificial Intelligence (AI) would be simply "AIs".

Granted it would look better in a serif type font like Times New Roman - the "I" would look more like a capital "I". Regardless, AIs is correct. Note that the "s" must be lowercase to indicate plural of the acronym's meaning (see what i did there?). If you were to use a capital "S" - AIS - you would confuse Artificial Intelligence with Automatic Identification System, or some such.

BTW, if you had more than one Automatic Identification System, it would be abbreviated as "AISs"

Lastly, using plural and possessive, one might write:

That is Andrew's hat, but he has lots of hats. You should never write it as "...he has lots of hat's."

Just sayin.

—Sargent Grammer


I write spiritual humor, and frankly GPT is terrible at it. It needs to be trained for spirituality. I write about traveling with my kids as atonement for sin, ask it to edit for grammar, punctuation, and clarity, and it changes the piece to be about the suffering of travel purifying me. It doesn't like the part about how it's hard to travel with kids. In general, it can do spirituality or humor, but not both.

It also adds a redemption arc to every bad character.

It has a good generalized Bible awareness, but has not incorporated either Jewish or Christian commentary.

Aug 13, 2023 · edited Aug 13, 2023

RE: "If you wanted it to, a large language model could beat the most prolific schizophrenic at generating novel content. But in order to be appreciated as creative, AIโ€™s will have to become excellent at filtering out the novel content that doesnโ€™t work. Once they learn to do that, AIโ€™s will reach human or super-human levels of creativity. If there is some barrier to AIโ€™s becoming excellent at such filtering, I do not see it."

Let me play with some concepts, and raise questions for readers.

Arnold sketches creativity as a two-step process of variation and selection.

The creator — whether a person or AI — may handle both steps. Or there may be a division of labor in the two steps, with AI handling variation and the individual handling selection (or perhaps sometimes vice versa?). Or an iterative process of variation and selection between AI and the individual.

But usually the output of this creative process must reckon with another filter; namely *selection by others*; i.e., the demand side, the market.

The creator — whether a person or AI or a hybrid — might labor with an eye to the demand side (i.e., the market or audience or gatekeeper).

A complication arises on the demand side: People might not know that they want a particular novelty until a creator (or entrepreneur) supplies it. A creator might offer a novel output, which might inspire demand (a novel preference in others). Who knew that they wanted an iPod before Apple 'dropped' the product via a silkscreen-graphic silhouette cartoon dancer commercial? A wildfire of demand ensued. Arnold suggests that experiments in AI creativity might inspire new preferences in companionship.

Can AI achieve autonomous *entrepreneurial* creativity? Or will it not be more than an instrument of human entrepreneurship?

Can AI achieve *originality,* i.e., creativity that rejects aesthetic or conceptual conventions? Take Arnold's example, the pop song. Think of the move, in the 1960s, from the pop song to the concept album (e.g., "Freak Out!" or perhaps "Sgt. Pepper's Lonely Hearts Club Band").* Can AI create *original* forms of music and inspire original preferences on the demand side?

Is AI Drake original in this sense? Does emergent AI companionship subvert conventions and norms?

* Note: The LP album, which was a technological precondition for the concept album, was introduced in 1948 and quickly became popular. But artists and record labels did not seize the possibility of the concept album until the mid-1960s.


Some Sci-Fi ideas remain relevant, especially a "Young Lady's Illustrated Primer" - because the AI remembers prior interactions. I'm sure that the most personal, and many of the most successful, AI apps will do a LOT of personal remembering.

Similarly there are e-butlers, janitorbots, maidbots, and other limited physical assistants and AI-bot digital assistants. In order to be successful, the e-butler/AI-bot will also have to be reliable and accurate, according to the user's idea of accuracy.

"Make a substack change so that each post has a link to the previous, and next, chronological posts". There should be a bot that does this for you; and an e-butler that could either do it for you or show you, step by step, how to do it yourself. [I'd like Arnold to add such buttons.]

One of the key issues is reducing the size of the LLM models, which seems plausible once the 10, 15, or 65 billion parameters of a model are fixed after extensive training. This trained model (its parameter values) should be able to be copied into a new LLM, which thus becomes already trained, ready to work for some individual or some org.
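The weight-copying idea can be sketched in plain Python (this is only an illustration of the principle; real frameworks transfer serialized parameter tensors, e.g. a PyTorch state dict, and the model and numbers below are invented): once training has fixed the parameter values, a fresh model that receives a copy of them behaves identically without retraining:

```python
import copy

class TinyModel:
    """Stand-in for an LLM: behavior is fully determined by its parameters."""
    def __init__(self):
        # Untrained: parameters start at zero.
        self.params = {"w": 0.0, "b": 0.0}

    def predict(self, x):
        return self.params["w"] * x + self.params["b"]

# Pretend this model was expensively trained elsewhere.
trained = TinyModel()
trained.params = {"w": 2.0, "b": 1.0}

# Copy the trained parameter values into a new model -- no retraining needed.
fresh = TinyModel()
fresh.params = copy.deepcopy(trained.params)

print(fresh.predict(3))   # behaves exactly like the trained model: 7.0
```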

I'm sure that education will be one of the early success areas - because there are right/wrong standards available for use in testing, both in testing the AI tutor as well as testing the human students after they've received instruction. I suspect, and hope, that AI ESL (English as a Second Language) tutors will rapidly catch on with the billions (1,000 million) who aren't native English speakers but understand how valuable it is to learn English. Teaching AI tutors (e-tutors?) how to teach English will expand knowledge about how to make AI tutors better teachers. And, rather than the pre-K dreams, it seems likely to start with college or business "learning for a job", like so many adults who now pay to learn English. [When at IBM, I argued that Watson should do this - the Watson group decided to teach "psychology" first, instead. Haven't heard much on that front - but in July 2023 a new business assistant, watsonx, was unveiled. IBM seems to be betting the company on Watson AI.]


I don't think that because AI can put 2 random ideas together, that makes it "creative". Was the big bang random?


And again, I question the validity and authenticity of the "2 random ideas".


AI is all human-inspired information cultivated into canned responses from an algorithm-based set of data. Where is the creativity coming from, and who is controlling the data managed and subsequently shared as "responses"?


A good analogy, but remember that it was a different technology -- the internet -- that really enabled all those recipe scenarios. If there had been no internet, PCs would still be useful (spreadsheets, word processing) but only valuable to the extent that, say, Microsoft c. 1993 was valuable.


AI can't be creative, because it is a statistical model. No library ever was creative, while containing all the knowledge of the world, ready to be recombined in any way you like. The prompter is creative in steering the output into a certain location in latent space, but a statistical model lacks the mental process to be creative in any way.

It's always funny to read from the 'AI is creative' crowd :)


Good article. But what's wrong with tracking the recipes? Looking more generally, any software is a recipe.

And from AI I expect the same - to know how to prepare my favourite dishes and, hopefully, to do so when requested.


Leibniz and Newton developed calculus independently, within years of each other. Newton is generally credited with priority, but we use Leibniz's far better notation.

The AI will simulate creativity and intelligence. Most folk, most of the time, won't be able to tell the difference, but will like some digital stuff more than other stuff. Like now, but with more stuff close to stuff you've already liked.

All jobs mostly done digitally can, and will, be mostly done by AIs. Maybe we can call them Al.


Nobody ever thought that PCs were for keeping track of recipes. That was a marketing fantasy for people who didn't understand their market.
