I have two questions about the weightlifting analogy. First, LLMs are just the current endpoint of automated writing tools: Google Docs is predicting my next word when I write in it (likely using an LLM), and its suggestions are often either correct or more interesting than what I was going to type. Do we allow that? Grammar checkers are getting better and better. Etc. Where do you draw the line? In my classes, I decided to require the use of LLMs and to have the students turn in the LLM's first draft along with their final draft, with the grade based on how they improved it. I also require them to use an LLM for the coding exercises, since it seems foolish to me to mandate writing code from scratch. All this reminds me in many ways of the debates when I was in high school about people using calculators (especially programmable ones) over slide rules or paper and pencil. So, question one is: are we teaching "how to write from scratch" or are we teaching "how to create a good piece of writing"? If the latter, it seems to me that teaching people how to use LLMs might be a reasonable strategy. Just as humans+computers are better at chess than humans or most chess programs alone (or so I am told), humans+LLMs are likely better at writing than either alone.
Second, if the goal is building "writing muscle," working on text that starts from an LLM prompt (and the prompts matter!) might be like working out with a trainer, while writing from a blank page might be more like going into the gym having read a book on weightlifting. I've found ChatGPT to be a superb editor, for example: ask it to critique something and it offers constructive criticism. My trainer does the same with my weightlifting.
I may be completely wrong about this, but it seems to me the challenge is to find the right way to use LLMs to make us better writers.
How do you ensure the student has any unique input at all to the assignment's output? How do you determine the level of that input?
Four years ago this coming January I took up the study of mathematics again after a hiatus of 35 years. We had calculators in the mid 1980s that could do the routine mathematical computations: addition/subtraction, multiplication/division, powers, roots, trig, and even hyperbolic functions. People of my age and older, however, knew how to do all of this by hand using trig and log tables, because calculators weren't so available when we were in junior high and high school. And it goes without saying that there was no Desmos or Wolfram to calculate derivatives, integrals, or matrices for you in the mid 1980s as a student. Now, I love having at my fingertips the ability to do all of this to check my work, or to be able to work through proofs that are beyond my native ability. However, I wonder how young students of today use these tools, and how I would ensure that a student understood how to, for example, integrate a function by hand. Are they using these as crutches, or as tools to do things they could do themselves but use for efficiency?
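For what it's worth, here is a minimal sketch of the "check my work" use I mean, in Python with the SymPy library (the particular integral is just an illustration):

```python
import sympy as sp

x = sp.symbols('x')

# Suppose I integrated x*e^x by parts and got (x - 1)*e^x + C.
hand_answer = (x - 1) * sp.exp(x)

# Let the machine do the same integral...
machine_answer = sp.integrate(x * sp.exp(x), x)

# ...and confirm the two agree up to a constant: their
# difference should have derivative zero.
assert sp.simplify(sp.diff(hand_answer - machine_answer, x)) == 0
print(machine_answer)  # x*exp(x) - exp(x)
```

Used this way, the tool plays the role of the answer key in the back of the textbook: the hand work still has to happen first.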
“But if the point is to get stronger, then it hardly makes sense to use a machine to lift the weights for you. The use of a machine to lift weights exemplifies a misunderstanding of the purpose of weightlifting.” This really comes down to a matter of taste. What do you want your body to look like? Do you want to be the bodybuilder who pumps himself full of steroids, or would you prefer a more natural look? Do you want to be the seven-time Tour de France winner by way of blood doping? Do you want to be the writer who produces AI-generated essays that lack pure human forms? This really comes down to aesthetics. Just like the difference between Microsoft and Apple products: do you want elliptical corners or circular corners? There’s a clear difference in the look and feel. The same will be true for AI-generated stuff.
Similarly, there is a different kind of gratification in knowing that you didn’t use AI to produce something. This may be thought of as a commitment to human purity, to self-purity, to self-reliance. You did that on your own, without AI. Without steroids, without doping, without cheating, using ideas generated only from your mind or another human mind. No artificial minds were used. My work is 100% natural and I’m proud of that. My work is AI-free. Certified by Whole Foods Market at Level 5 purity.
Go ahead and use AI, but understand the risk. Others will judge you and you will judge yourself. What matters most in life is how you see yourself. Conscience is your god. Does using AI make you feel better about yourself? Does it raise your self-respect?
Here’s a great joke on helpful AI:
https://x.com/mhartl/status/1851397372884172868
AI turns a bullet point into a long email that is sent to an aiBot, which turns the long email back into a bullet point. (Humans pretend to write or read the long email.)
One comment recalls early machine translation into Russian and back:
"Out of sight, out of mind" >> "invisible maniac."
As aiBot assistants get better, the human bosses who want workers willing to do the work of writing and reading long, complex material and summarizing what's relevant for the job will find aiBots better than 80% of college grads at that kind of thought work. Middle managers seem at increasing risk, as are IT professionals like the 80% of Twitter staff that Musk fired.
For the next decade or three, the thought workers able to work well with aiBots, or aiServants, will become more valuable. Boomers are probably too old for many of them to get a career bump from early AI use, though Gen X folk with experience and work habits might become leaders faster with aiBot help.
Love the cartoon.
My review of NotebookLM is not a rave, but I fully agree that customisability seems inevitable and will improve the product massively.
Part of this is just preference: for serious ideas I just don't get as much from listening to them, and prefer to read.
https://academyempirica.substack.com/p/notebooklm
"The fact that LLMs do not give deterministic answers makes some people hate them and some people love them."
Someone can correct me if I am wrong, but all of the answers an AI gives are deterministic at all times. That it can give different answers to the same question isn't the measure of whether the output was predetermined by the machine's algorithms; it is the algorithms themselves that determine the level of "randomness".
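To make that concrete, here is a toy Python sketch (illustrative numbers and names only, not any real model's API) of how the "randomness" is itself an algorithmic choice, controlled by a temperature setting and a pseudorandom seed:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a token index from the model's raw scores (logits)."""
    if temperature == 0.0:
        # Greedy decoding: always take the highest-scoring token.
        # The same input then yields the same output, every time.
        return int(np.argmax(logits))
    # Softmax with temperature: higher values flatten the
    # distribution, making less likely tokens more probable.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # The "randomness" is pseudorandom: fix the seed and the
    # sampled answers are fully reproducible.
    rng = rng if rng is not None else np.random.default_rng()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5]  # toy scores for a 3-token vocabulary
print(sample_next_token(logits, temperature=0.0))  # always token 0
rng = np.random.default_rng(seed=42)
print(sample_next_token(logits, temperature=1.0, rng=rng))  # fixed by the seed
```

So the varied answers people see are a deliberate design choice (sampling at nonzero temperature), not evidence that the machine has escaped its algorithms.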
I understand intellectually how these tools will likely be tremendously productive eventually, but emotionally all of this just bums me out. Given that most of us are not superstars, intellectually or otherwise, a world in which machines cheaply produce intellectual work at the 75th or 90th percentile is a world in which most of us are likely servants (or doing other non-routine manual labor for which machines lack training examples) for the small number of people who can clear whatever bar the machines set.
"Servants" and "manual labor" is the optimistic scenario. Try this on the old emotions, "Make-Work or Sex-Work."
Or AI is a tool that helps the marginally competent do their job better.
Also possible. Intellectually I hear you. But it is not, as the youth say, my vibe…
Weightlifting - practice or production? It all comes down to which goal you have.
I think that David Roman guy who writes about Greek and Roman history used the NotebookLM tool a week or two back to create a little podcast based on something he'd written about Alcibiades. It was kinda cool, but parts were sort of like a poorly scripted ad read. I wouldn't want to listen to more than a few of these.
All report-creating and report-summarizing middle managers are at risk. Anything humans can do digitally, ML-based aiBots will be able to do better than average, and at lower wages.
Letting the AI run stuff, under supervision, will improve AI faster.
The purpose of lifting weights is a great analogy for the proper use of AI in education. Hard to get there, maybe; lazy folk always “just want the right answer”.