12 Comments
Roger Sweeny:

"the fact that the essay writers using ChatGPT could not remember the work they had just completed seems like a pretty good indication that they were not learning. ... On the other hand, if students were asked to write an essay independently first and then use ChatGPT, results were better."

With the very important caveat that the students must also be marked on the pre-ChatGPT essay, and that mark should matter. Otherwise, many students will make a half-assed (or less than half-assed) try on the first essay, knowing that ChatGPT will basically write the "revised" version. Never underestimate students' ability (or inclination!) to try to get the best grade with the least effort, and to not really care whether they are learning or not.

Scott Gibb:

According to ChatGPT, the overall tone of this post is "generally positive, curious, and opportunity-focused, with nuanced caution." I was going to say, "It doesn't lack enthusiasm," but she is more generous than I am.

Since you brought up virtue yesterday, let's talk about using AI virtuously (today).

I am developing AI-use guilt. When I feed ChatGPT one of my drafts, I request that she check it for "glaring grammatical errors" rather than just "grammatical errors." I really don't want her tempting me with her so-called improvements.

I find it difficult to resist her temptations once offered, so best not to have her offer in the first place.

Even then, I will accept one or two of her "improvements" that I end up feeling a bit guilty about after I publish. Sure, these alterations are grammatically better as published, but they're not me. They're stylistically different from what I would have done.

So this leaves me with a feeling of AI-use guilt. Wouldn't I rather my essay reflect me, a uniquely imperfect person, than a combination of her and me, even if the combination is grammatically better?

This can be a difficult trade-off.

So let's ask, "To thy own self be true, even when using AI?"

Yes please.

I say, be careful what you ask her for. She can be difficult to resist.

If I accept any stylistic changes, I feel an urge to give her credit for each change. Otherwise, I'm left with a tinge of guilt, thinking, "That piece isn't all me, and I didn't admit it."

Thoughts?

Kurt:

Good writing is organized thinking and it reflects the voice of the author. When it comes together, it's deeply engaging. AI can get the organizational part, but the voice part...so far for me...isn't there. Maybe I could program for my voice; I don't know.

I spent my career writing technical reports. I see a bright future for technical report writing with AI. Gawd...I wish I'd had AI then.

Tom Grey:

Examples here are crucial for accurate comments.

Scott Gibb:

Good point. Will include next time.

Roger Sweeny:

When you wrote a paper in high school and the teacher returned it, replacing "it don't" with "it doesn't" and similar corrections, did you say, "No, that's not me," or did you say, "Okay, now I'm learning to do better"?

You seem to be saying now that you can't get any better; that you are stuck as the "uniquely imperfect person" you now are, and that changing wording to something that might be "grammatically better" is a betrayal of the person you are now.

Scott Gibb:

To thy own self be true.

I’m talking more about subtle alterations in style and voice here — deviations from normal grammar. Normal grammar is what ChatGPT tempts us with. She doesn’t know who I am or what I’m feeling.

Obviously there are ways of crafting sentences that differ from normal grammar. There are new ways of saying things that she’s not familiar with.

Knowing when to deviate from ChatGPT’s suggestions requires intuition and confidence, which can and should be developed if one is to use AI uniquely and effectively.

So no, I’m not saying you can’t get ANY better, I’m saying we GET BETTER through practice and experience — by trying our best, experiencing the consequences, and striving towards better judgment.

This is a grey area. I’m not talking about black and white grammar issues, though one could argue that you should leave it as “it don’t” if you really don’t understand the difference between “it don’t” and “it doesn’t.” Right?

Rule of thumb: If you’re not in a hurry, don’t accept changes you don’t understand. Better yet, try not to be in a hurry. Be quick, but don’t hurry.

I’m talking about voice. See Kurt’s comment. Why would you want to be like everyone else?

Don’t be a slave to AI. Be yourself.

Roger Sweeny:

Sure. If you're writing, "It don't matter at all" to get the reader to notice by being deliberately ungrammatical, or if you like to start some sentences with "But" to point out you're disagreeing with something in the previous sentence, do it!

But be careful.

Tom Grey:

On AI writing essays, it seems obvious that the teacher should require both a printed copy and a digital file. The teacher then feeds the digital file into the same, a different, or a standard AI to create a 20-question test on the content of the essay, and on adjacent issues that should have been thought about in writing it. The student then answers the questions about the essay they wrote and submitted, losing points from the essay-quality grade for each wrong answer on the personalized test the AI built from the essay.
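The grading rule in this scheme can be sketched in a few lines. This is only an illustrative sketch: the function name, the 20-question quiz length, and the maximum-penalty value are assumptions, not anything the comment specifies.

```python
# Hypothetical sketch of the grading rule described above: the essay's
# quality grade loses points in proportion to wrong answers on the
# AI-generated quiz about that essay. All numeric choices are assumed.

def adjusted_grade(essay_grade: float, wrong_answers: int,
                   num_questions: int = 20, max_penalty: float = 30.0) -> float:
    """Deduct a share of max_penalty for each wrong quiz answer.

    essay_grade    -- grade (0-100) the essay earned on quality alone
    wrong_answers  -- wrong answers out of num_questions quiz items
    num_questions  -- quiz length (the comment suggests 20 questions)
    max_penalty    -- points lost if every answer is wrong (assumed value)
    """
    penalty = max_penalty * (wrong_answers / num_questions)
    return max(0.0, essay_grade - penalty)
```

A student who aced the essay but missed half the quiz would lose half the maximum penalty, which is the intended effect: the quiz score gates how much of the essay grade the student actually keeps.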

Learning requires thinking, which might be fun & hard, like chess or other games, but is often merely mildly interesting & hard and required for the class. That’s the usual expectation and experience of most students for most classes. Minimum non-fun thinking work for the desired grade.

Recently I was reading other stuff and came across the hyperproductivity post, though not remembering it was Newman's. Agents maybe should now be renamed "aigents" (pronounced "eye-gent"), to make it clear whether one is talking about a human travel agent or a travel AI aigent.

It sounds great, and quite a bit like the development of an aigentic library for well-defined tasks, much as most computer-language keywords compile into assembly-language sequences whose behavior is well defined. I'm sure this bottom-up style of aigents using other aigents will become a backend workhorse for LLM aigents to use to fulfill their human masters' wishes, just like the digital genie so many of us want: "your wish is my command." And the aigent ecosystem can do anything and everything digital that a boss could tell an employee to do.

A big part of many future office jobs will be finding out what digital services the customers are willing to pay for.

Dallas E Weaver:

Is AI, by its very design, unable to be truly innovative when it can't separate its own hallucinations from reality? AI is modeled on the human brain, and we humans have significant problems distinguishing nonsense from reality. AI may be the same, but it runs as only a small number of instances relative to the number of human brains, among which the crazy responders can at least sometimes be isolated.

I visualize an AI trained in astrology from around the world, using all the thousand-year-old reference works from cultures and religions of the past (China, India, the Middle East, the Americas, etc.). Would that AI ever imagine a heliocentric universe by adding a third dimension to the 2-D night sky? It could do an excellent job of finding correlations between the 2-D representations of the universe and human affairs, all of which are irrelevant nonsense.

luciaphile:

It’s cool how they’ve used AI to detect and prevent fraud in Minnesota with respect to various welfare programs ;-). These things are really tough to predict or audit, but AI cuts through the noise and finds the signal. Saving billions of dollars.

luciaphile:

I mean, yeah, it's a pretty obvious use case, more so than building energy-guzzling data centers so 3rd graders can fake the writing of "My Favorite Animal" with the Main Idea assigned to the correct paragraph. But it's doubly cool, since people sometimes do the obvious dumb thing, you know?
