Links to Consider, 1/23
Ed West on "nepo baby"; Mark McNeilly on how to use ChatGPT; Rob Henderson linkfest; Ethan Mollick on using ChatGPT in the classroom; Simon Cooke on government accountability
Ed West writes,
Nepotism is a big problem in journalism, the subject of a brilliant piece this week by my mum.
I joke, obviously; she’s writing about something else, but the subject has been bubbling away over the past year with the rise of the neologism ‘nepo baby.’
I speculate that if you were to look at various professions—business, academia, law, medicine, what have you—the proportion of folks who are nepo babies would be highest in politics.
Mark McNeilly writes about how to use ChatGPT, giving examples. He concludes,
Hopefully, these examples should offer you some idea of the potential of ChatGPT to give you a quantum leap of productivity, creativity and learning in almost any area you can imagine. Whether you choose to avail yourself of these new superpowers is up to you.
Pointer from Rob Henderson. I recommend the article and its links. Refusing to delve into ChatGPT today is like refusing to delve into the World Wide Web in 1994.
Rob Henderson writes (same post),
The basic masculinity archetype (source):
Must be a fighter and a winner
Must be a provider and protector
Must have mastery and control of one’s feelings at all times
This basic blueprint of masculinity can be seen across all cultures—albeit in varying forms—suggesting it is evolved and adaptive. In an interview, the lead author of the study stated that these findings imply that “If you break any of those rules, you are not a man.”
Ethan Mollick writes,
I have students prompt the AI to write an essay about a class concept (see the paper above for the exact prompt). It will be their job to give the AI suggestions for improvement. They’ll paste in the original essay, their suggestions, and the final output. The process pushes them to think critically about the content and articulate their thoughts for improvement in a clear and concise manner. They may need to seek out additional information to fill the gaps the AI essay might be missing or double-check the “facts” that the AI presents. This should help improve their understanding of major class concepts, as well as illustrate the limits of current LLM tools.
This is a much more constructive reaction to ChatGPT from an educator than “Ban it.”
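For readers who want to try Mollick’s loop programmatically rather than in the chat window, here is a minimal sketch against the OpenAI Python client; the model name, essay topic, and prompts are placeholder assumptions of mine, not Mollick’s exact materials:

```python
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """One chat-completion round trip; the model name is a placeholder."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# Step 1: have the AI draft an essay on a class concept (topic is illustrative).
draft = ask("Write a short essay explaining comparative advantage.")

# Step 2: the student writes suggestions for improvement by hand.
suggestions = "Add a numerical two-country example; note where the idea originates."

# Step 3: feed the draft plus the student's critique back for a revision.
final = ask(f"Revise this essay.\n\nEssay:\n{draft}\n\nSuggestions:\n{suggestions}")

print(draft, suggestions, final, sep="\n\n---\n\n")
```

The three steps mirror the quoted assignment: the model drafts, the student critiques, and the model revises, leaving the student to judge whether the revision actually improved anything.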
Simon Cooke writes,
governments have set up inspection services - Ofsted, Care Quality Commission, etc. - that seek to ensure published standards are met by organisations. While this is a form of accountability, it is not especially responsive and, as with boards, there’s not much evidence of executive accountability as a result of inspections. Also, the process is largely programmed rather than reactive, and the inspection takes a broad view of the operation rather than the sort of narrow focus on outcomes that we see as accountability in commercial environments.
Creating genuine accountability is not a simple task. In Designing a Better Regulatory State, I may be too hand-wavy about how the Chief Auditor is supposed to get the job done. Cooke is very insightful on the challenge of bringing accountability to government.
"The basic masculinity archetype (source):
Must be a fighter and a winner
Must be a provider and protector
Must have mastery and control of one’s feelings at all times"
Sounds an awful lot like Joyce Benenson’s Warriors and Worriers, a fairly information-dense book; or perhaps it just seemed that way because much of its information and theory is the kind you don’t encounter in most places.
I am a lot less optimistic about the value of using ChatGPT as any sort of tool. It often summarizes information incorrectly, producing highly unreliable results. Looking through, e.g., Mark McNeilly’s article, most of the uses were things that would be difficult to correct if one didn’t know the answers already, or trivial to produce if one did. At best, one could use it as a generator of, say, an agenda, if one found it quicker to check and edit ChatGPT’s work than simply to type out the agenda oneself. Using ChatGPT for other fact-gathering purposes is roughly equivalent to watching TV or going to the local bar and listening to the most confident-sounding person on a given topic. Wikipedia is probably more accurate in most cases.
I think much of the optimism, including from Cowen, stems from a deep misunderstanding of what ChatGPT is and what it does: it is a machine that uses its training data to predict what the next word should be, and thus generates text that sounds plausible. It is pretty neat how it processes language, but it is not giving correct or reasoned statements, only something plausible enough to fall under “something a person might say or believe.” That is before the extra layers of hard-coded rules about what it is allowed to say and how to say it. Or considerations of what the training data is: apparently ChatGPT can be prompted to admit that it leans heavily socialist because its training material was socialist.
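To make the next-word mechanics concrete, here is a toy sketch of that generation loop; the vocabulary and probabilities are invented for illustration, whereas a real model learns its distribution over tokens from training data rather than from a hand-written table:

```python
import random

# Toy "language model": for each context word, a distribution over next words.
# These entries are invented for illustration; a real LLM learns billions of
# parameters from its training data instead of using a lookup table.
NEXT_WORD = {
    "the":    [("cat", 0.5), ("dog", 0.3), ("market", 0.2)],
    "cat":    [("sat", 0.6), ("ran", 0.4)],
    "dog":    [("barked", 0.7), ("sat", 0.3)],
    "sat":    [("quietly.", 1.0)],
    "ran":    [("away.", 1.0)],
    "barked": [("loudly.", 1.0)],
    "market": [("crashed.", 1.0)],
}


def generate(word: str, max_words: int = 10) -> str:
    """Repeatedly sample a plausible next word; nothing here checks truth."""
    out = [word]
    for _ in range(max_words):
        choices = NEXT_WORD.get(out[-1])
        if not choices:
            break
        words, probs = zip(*choices)
        out.append(random.choices(words, weights=probs, k=1)[0])
    return " ".join(out)


print(generate("the"))  # e.g. "the dog barked loudly."
```

Nothing in the loop checks whether the output is true; it only extends the text with whatever the table says is likely, which is the commenter’s point about plausibility versus correctness.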
So, hooray, we have made the equivalent of the TV talking head who sounds believable until he delves into a subject we know well, and we realize he is talking through his hat. Then we have to wonder what else it is deeply wrong about, and what effects that will have on those who don’t notice it is full of falsehoods.