"The basic masculinity archetype (source):

Must be a fighter and a winner

Must be a provider and protector

Must have mastery and control of one’s feelings at all times"

Sounds an awful lot like Joyce Benenson's Warriors and Worriers, a fairly information-dense book; or perhaps it just seemed that way because much of it is information and theory you don't encounter in most places.

author: I agree that it's an information-dense book.


I wonder if "information-dense" is not so much an objective quality as a subjective feeling that results from something being "new-information-dense".


I am a lot less optimistic about the value of using ChatGPT as any sort of tool. It often summarizes information incorrectly, producing highly unreliable results. Looking through Mark McNeilly's article, for example, most of the uses were things that would be difficult to correct if one didn't already know the answers, or trivial to produce if one did. At best, one could use it as a generator of, say, an agenda, if one found it quicker to check and edit ChatGPT's work than simply to type out the agenda oneself. Using ChatGPT for other fact-gathering purposes is roughly equivalent to watching TV, or going to the local bar and listening to the most confident-sounding person on a given topic. Wikipedia is probably more accurate in most cases.

I think much of the optimism, including Cowen's, stems from a deep misunderstanding of what ChatGPT is and what it does: it is a machine that uses its training data to predict what the next word should be, and thus generates text that sounds plausible. It is pretty neat how it processes language, but it is not giving correct or reasoned statements, only something plausible enough to fall under "something a person might say or believe". That is before the extra layers of hard-coded rules about what it is allowed to say and how to say it. Or considerations of what the training data is: apparently ChatGPT can be prompted to admit that it leans heavily socialist because its training material was socialist.
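
To make the point concrete, here is a toy sketch of that next-word loop. The vocabulary, scores, and context are entirely made up; it illustrates the idea, not ChatGPT's actual machinery:

```python
import math
import random

# Toy model of "predict the next word": a made-up vocabulary and made-up
# scores, standing in for what a real model learns from its training data.
vocab = ["blue", "green", "falling", "loud"]
logits = [2.5, 0.4, 0.1, -1.2]  # hypothetical scores after "the sky is"

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_word():
    # Sample a word in proportion to its probability: the result is
    # plausible-sounding by construction, but nothing here checks truth.
    return random.choices(vocab, weights=softmax(logits), k=1)[0]

print("the sky is", next_word())  # usually "blue", occasionally not
```

Everything plausible about the output comes from the learned scores alone; at no point do facts or sources enter the process.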

So, hooray, we have made the equivalent of the TV talking head who sounds believable until he delves into a subject we are knowledgeable in, and we realize he is talking through his hat. Then we have to wonder what else it is deeply wrong about, and what kind of effects that will have on those who don't notice it is full of falsehoods.


I'm really unimpressed so far too. Maybe more mediocre white-collar workers are in trouble, but I can't see using it in my industry.


I wonder if something like this might be used to compose standardized work, say, device/product manuals and/or set-up foldouts, where certain words and phrases are used fairly consistently across multiple items? If, in fact, humans presently write those things. Sometimes I wonder....

I have no doubt that automation of written work will find a place where it's quality & content management suffices, and becomes the norm.

It certainly couldn't do worse than the English translation mini-manual for the 1970s Seiko watch that my ex brought back from VN.... :)


AI has already destroyed the translation industry, apparently. Its work isn't that good, but it's good enough, especially for all those legally required translations nobody is going to read.


*its* quality


Last para.: the Gell-Mann Amnesia Effect as applied to non-human text generators?


Precisely. This is going to be murder on the minds of the young and ignorant, those who don't know enough (or aren't confident enough in their knowledge) to question what they are told by ChatGPT. The system is an accidental falsehood/indoctrination machine, and it is disturbing how many people don't understand that, people who should know better. The machine doesn't even tell us its sources, so that we can check how accurately it is representing them, or what range of sources it consults. That might be very difficult to do, so I don't really blame the creators, but at the same time it should make us extremely skeptical about using it for reference.


I absolutely agree that ChatGPT is a falsehood machine; however, it *can't* tell you its sources, because it doesn't "reason" across sources. It's a purely statistical model. Which, IMO, is *why* it's dangerous.


Oh yeah, totally agree.


Arnold,

I'm concerned you are not evaluating nepotism reasonably. While the word has only negative connotations in a society that idolizes apparent meritocracy, a more rational understanding suggests that children raised in the context of a community, field, or career actually start with a ~15-year educational head start, as well as a number of other significant advantages (such as single-minded purpose, and productive physical and social capital) that create legitimate claims of merit. An extensive head start combined with some natural aptitude is indeed merit in pure form. Perhaps on occasion extraordinary talent can overcome it; and certainly in many cases gross negligence undermines and makes a mockery of privilege.

However, on the whole, I do not expect that a system which punishes or, worse, forbids the development of a career passed from parent to child will cause individuals, the profession, or the overall community to flourish. Local and tacit knowledge is too precious to be squandered so.
