LLM Links
Patrick J. Casey on students using LLMs; Mark McNeilly on NotebookLM; Dario Amodei on "a country of geniuses"; Ethan Mollick on how AIs think
If the purpose of lifting weights were to get the weights from one place to another, then it would be reasonable to use whatever means necessary to get them there. But if the point is to get stronger, then it hardly makes sense to use a machine to lift the weights for you. The use of a machine to lift weights exemplifies a misunderstanding of the purpose of weightlifting. Similarly, if students export the writing process to AI, they lose one of the primary benefits of writing assignments—that is, developing their ability for creativity, crafting and expression.
I could see giving an assignment to write an essay on the Industrial Revolution by having the student give a prompt to three different LLMs and then compare and synthesize the results. But would the student then outsource the compare/synthesize assignment to an LLM?
Everyone is talking about the new #AI NotebookLM tool from Google that lets you create podcasts from articles and other sources. I decided to try it for myself, using one of my prior pieces, The Upside and Downside of AI, as the basis for the podcast. The result, which you can listen to in the article, is simply amazing. It doesn't just read the article - it has a discussion between two "people" about it that is very engaging.
It actually sounds like a typical podcast, circa 2024. Every review I have read of NotebookLM's podcasts is a rave. If it were me, I would not want to sound so generic. But if they can figure out how to do what they did for Mark, they can also figure out how to let me specify a different style. This stuff is just moving so fast.
Anthropic CEO Dario Amodei writes,
We could summarize this as a “country of geniuses in a datacenter”.
…I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high.
…the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on. It is by speeding up the whole research process that AI can truly accelerate biology.
For example, if I write “The best type of pet is a,” the LLM predicts that the most likely tokens to come next, based on its model of human language, are “dog,” “personal,” “subjective,” or “cat.” The most likely is actually “dog,” but LLMs are generally set to include some randomness, which is what makes LLM answers interesting, so it does not always pick the most likely token (in most cases, even attempts to eliminate this randomness cannot remove it entirely). Thus, I will often get “dog,” but I may get a different word instead.
And some words can take the LLM in a completely different direction: “subjective” will proceed very differently than “dog.”
The fact that LLMs do not give deterministic answers makes some people hate them and some people love them.
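To make Mollick's point concrete, here is a minimal Python sketch of that sampling step. The word list and probabilities are invented for illustration (real models score tens of thousands of tokens), but the mechanics are the same.

```python
import random

# Invented next-token probabilities for the prompt
# "The best type of pet is a" -- illustrative numbers only.
next_token_probs = {"dog": 0.45, "cat": 0.30, "personal": 0.15, "subjective": 0.10}

def sample_next_token(probs, temperature=1.0):
    # Temperature < 1 sharpens the distribution toward "dog";
    # temperature > 1 flattens it. Either way the draw is random,
    # which is why the same prompt can yield different words.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

for _ in range(5):
    print(sample_next_token(next_token_probs, temperature=0.8))
```

Run it a few times and you will usually get “dog,” but not always; that is the non-determinism that Mollick describes and that splits readers into lovers and haters.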
substacks referenced above:
@Patrick J. Casey
@Mark McNeilly
@Ethan Mollick
I have two questions about the weightlifting analogy. First, LLMs are just the current endpoint of automated writing tools. Google Docs predicts my next word as I write (likely using an LLM), and its suggestions are often either correct or more interesting than what I was going to type. Do we allow that? Grammar checkers are getting better and better. And so on. Where do you draw the line? In my classes, I decided to require the use of LLMs and to have the students turn in the LLM's first draft along with their final draft, with the grade based on how they improved it. I also require them to use an LLM for the coding exercises, since it seems foolish to me to mandate writing code from scratch. All this reminds me in many ways of the debates, back when I was in high school, about people using calculators (especially programmable ones) rather than slide rules or paper and pencil. So, question one: are we teaching "how to write from scratch" or "how to create a good piece of writing"? If the latter, it seems to me that teaching people how to use LLMs might be a reasonable strategy. Just as humans+computers are better at chess than humans or most chess programs alone (or so I am told), humans+LLMs are likely better at writing than either alone.
Second, if the goal is building "writing muscle," then working on text that starts from an LLM prompt (and the prompts matter!) might be like working out with a trainer, while writing from a blank page is more like walking into the gym having only read a book on weightlifting. I've found Chat to be a superb editor, for example: ask it to critique something and it offers constructive criticism. My trainer does the same with my weightlifting.
I may be completely wrong about this, but it seems to me the challenge is to find the right way to use LLMs to make us better writers.
“But if the point is to get stronger, then it hardly makes sense to use a machine to lift the weights for you. The use of a machine to lift weights exemplifies a misunderstanding of the purpose of weightlifting.” This really comes down to a matter of taste. What do you want your body to look like? Do you want to be the bodybuilder who pumps himself full of steroids, or would you prefer a more natural look? Do you want to be the seven-time Tour de France winner by way of blood doping? Do you want to be the writer who produces AI-generated essays that lack pure human form? It comes down to aesthetics, just like the difference between Microsoft and Apple products. Do you want elliptical corners or circular corners? There's a clear difference in the look and feel. The same will be true for AI-generated stuff.
Similarly, there is a different kind of gratification in knowing that you didn't use AI to produce something. This may be thought of as a commitment to human purity, to self-purity, to self-reliance. You did that on your own, without AI. Without steroids, without doping, without cheating, using ideas generated only from your mind or another human mind. No artificial minds were used. My work is 100% natural, and I'm proud of that. My work is AI-free. Certified by Whole Foods Market at Level 5 purity.
Go ahead and use AI, but understand the risk. Others will judge you and you will judge yourself. What matters most in life is how you see yourself. Conscience is your god. Does using AI make you feel better about yourself? Does it raise your self-respect?