
I’ve been a giant fan of Ben Thompson for many years, since the start of his Stratechery newsletter. And this article about “Sydney” is getting a lot of attention. But I don’t quite get it. What’s so interesting about it? Is it that people had been thinking of AI as a replacement for search or a simple replacement for, say, lawyers, or writing high school essays, and that Thompson highlights that it might be something altogether different? I enjoyed the article, and I’m interested in what Thompson thinks, and anything that surprises him is notable. But why exactly is this article considered such a “must read”?


Some other related chatbot thoughts, under-covered so far as I can tell (which is not saying much: for a number of reasons, I've reduced my internet commentary consumption and participation by approximately 99%).

1. "Corpus derived from the entire internet ... " - Lol, no way (to be fair, they are more accurate in the fine print.) But obviously the 'entire' internet (e.g., Twitter) mostly consists of low quality crud that doesn't look anything like a Wikipedia entry, high rubric-scoring standardized test essay, middle-brow magazine article, or prestigious paper opinion piece.

Excluding that crud is obviously a good thing to do (a crude sketch of the idea follows below), but the consequence is that the voice of that "particular persona" which early-generation chatbots project will not "sound like the internet", which will predictably result in the same depressing dynamics as the analogous complaints about institutional demographics not "looking like America".
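As a concrete (and heavily simplified) illustration of what 'excluding the crud' might look like mechanically, here is a minimal sketch in Python; every threshold and heuristic in it is invented for illustration, since real LLM data pipelines rely on trained quality classifiers, deduplication, and much more elaborate machinery.

```python
# Heuristic corpus-quality filter, sketched for illustration only.
# The thresholds and the `looks_like_prose` heuristic are invented here.

import re

def looks_like_prose(text: str) -> bool:
    """Crude proxy for 'Wikipedia-ish' prose rather than tweet-like crud."""
    words = text.split()
    if len(words) < 50:                        # too short to be article-like
        return False
    avg_len = sum(len(w) for w in words) / len(words)
    if not 3.0 <= avg_len <= 8.0:              # gibberish or keyword spam
        return False
    if text.count("#") + text.count("@") > 5:  # hashtag/mention-heavy
        return False
    sentences = re.split(r"[.!?]+", text)
    return len(sentences) >= 3                 # want multi-sentence prose

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that pass the prose heuristic."""
    return [doc for doc in documents if looks_like_prose(doc)]

docs = ["gm fam #crypto #moon @everyone", "A longer passage of prose. " * 30]
print(len(filter_corpus(docs)))  # 1: the tweet-like item is dropped
```

The point, for what follows, is that any such filter necessarily encodes a judgment about what 'good' writing looks like, and that judgment is exactly where the 'particular persona' comes from.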

Thompson says chatbot output will tend to reflect the style, viewpoints, biases, and agendas of those particular kinds of people who are overrepresented as authors of the corpus ('scribe privilege'), but he wisely ducks the matter of what we already know about who those people are: e.g., about 90% of actively contributing Wikipedia editors are male. And that's well into the progressively enlightened 21st century, but a lot of the non-crud corpus is old. The NYT looked at over 7,000 works of fiction published from 1950 to 2018 in "Just How White Is the Book Industry?" and found 95% of the authors to be white.

So, one just knows without even having to look (I haven't and feel no need to) that someone has already combined chatbot with 'mansplain' to coin 'manbot' or 'botsplain' or whatever, and that if "chatbots so white" hasn't already hit critical mass to go viral, it will soon. Though in this case they aren't talking about 'European ancestry' but instead using 'white' in that completely idiotic and culturally corrosive way (e.g., by which you count as 'white' if you are into "the scientific method, rational linear thinking, cause and effect relationships ...") reflected in the NMAAHC "Aspects and Assumptions of Whiteness" graphic (which was only a prominent graphical depiction of content from the progressive-vanguard consensus, which originated elsewhere and has an unfortunately long history).

To be fair, chatbot output does indeed sound 'so white', but only if you are using """white""" in this idiotic way, because chatbots are currently aiming at something like the typical "Spinozan Ideal" professional tone of objective neutrality, and the idiots have declared the underlying assumption of the very possibility of objective neutrality to be """whiteness""".

For this and other reasons (e.g., anti-plagiarism-detector output tweakers), there will soon be many attempts to generate lots of different chatbot tones and styles, using a variety of techniques. One technique could be selective restriction or expansion of the training corpus, and "everything before 1900" would be interesting. Another technique could be "tone and style translation", which reminds me of that classic Key & Peele gag, "Luther, Obama's Anger Translator." (A rough sketch of both ideas follows below.)
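To make those two techniques concrete, a minimal sketch, assuming documents carry a publication-year metadata field and assuming a hypothetical `model.generate` text-generation call (neither reflects any real vendor's API):

```python
# Sketch of (a) corpus restriction by date and (b) inference-time
# "tone and style translation". The `year` field and `model.generate`
# are assumptions for illustration, not a real API.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    year: int  # assumed publication-year metadata

def restrict_corpus(docs: list[Document], cutoff: int = 1900) -> list[Document]:
    """Keep only pre-cutoff documents, so a model trained on the result
    would learn its 'voice' from, e.g., pre-1900 prose alone."""
    return [d for d in docs if d.year < cutoff]

def style_translate(model, text: str, register: str) -> str:
    """Hypothetical wrapper: ask a generation model to rewrite `text`
    in a target register, a la 'Luther, the Anger Translator'."""
    prompt = f"Rewrite the following in the style of {register}:\n\n{text}"
    return model.generate(prompt)  # `generate` is assumed, not a real API
```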

In our fallen age, in which large portions of the population have been encouraged and trained to deploy "tone filters" and close their minds to messages received in 'so white', 'mansplaining', or other problematic styles, one could see how chatbot developers might imagine there to be real merit in efforts to help chatbots generate tailored-and-translated-tone outputs. My advice would be to avoid doing so at all costs: run, run for your lives.

The trouble is that these efforts would be worked on by groups of people with demographics that do not "look like America". And even perfectly innocent attempts to deal with the 'chatbots so white' problem and imitate the tone of personae resembling those belonging to protected classes will produce perfectly innocent but inevitable errors. Armed with those demographics as ammunition, the usual suspects will use those instances as fodder for the national pastime of pretending to be outrageously offended in the most conspicuous and antisocial manner possible, by intentionally misinterpreting those errors as clear evidence of bigoted mockery baked into the code on purpose from the start. Best not to take even one step down that treacherous path.

2. Many commenters have pointed out that, impressive as some of the output of LLM weighted-autocompletion chatbots seems to be, a gigantic statistical analysis and 'forecasting' engine is not, on its own, a genuine example of or adequate substitute for an 'intelligence' of any kind that is able to model concepts and follow internal logic to process and relate those concepts. There has been a class of response arguing from the indistinguishability of output: "Does how it gets there matter if the output looks the same as what an intelligence would say?"
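(A parenthetical for readers unfamiliar with the 'weighted autocompletion' framing: here is the idea in deliberately tiny miniature, a bigram model that predicts the next word purely from observed frequencies. The toy below is only a sketch of the mechanism that question is about; real LLMs are vastly larger neural networks, but the training objective, predicting the next token, is the same in spirit.)

```python
# Toy "weighted autocompletion": a bigram counter that predicts the next
# word purely from observed frequencies. No concepts, no logic, only
# statistics.

import random
from collections import defaultdict, Counter

def train_bigrams(text: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def autocomplete(counts: dict, start: str, length: int = 10) -> str:
    """Extend `start` by repeatedly sampling the next word in proportion
    to how often it followed the previous word during 'training'."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigrams(corpus)
print(autocomplete(model, "the"))  # plausible word salad, zero understanding
```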

It matters because, being humans, we assume humans to be "intelligent": capable of thinking logically, in a genuinely rigorous, objective, empirically grounded manner, about concepts and so forth.

And some humans are indeed capable of doing that; it's pretty impressive.

However, we are talking about writing here. Writings on the *internet*, people. Even with most of the crud filtered out, the part of the 'corpus' in which humans are communicating with each other at anything approaching 'logical engine' levels of rigor is a profoundly tiny portion of it.

Because that's not what writing, or talking, or human-to-human communication is for. And not to get insanely abstract or mathematical about it, but there exists a very deep problem deriving from the gap between various idealizations of logic on the one hand, and, on the other, the implicit set of kinda-logical fuzzy 'rules' hard-wired by evolution into human brains that determines what kinds of arguments (made in which contexts, by which kinds of people) we tend to find persuasive or are otherwise influenced by. This isn't just Aristotle's Rhetorical Triangle stuff. The problem goes deeper than that, because what humans often perceive to be logos still isn't.

Actual logic and the human-influencing kinda-logic are often in tension, to the point of outright irreconcilable conflict, but most of the corpus is written in the framework of the latter, not the former. Training a system on that corpus is thus like eating the fruit of a poisoned tree when it comes to trying to improve these things, to make them actually logical and more 'intelligent', later on. Maybe trying to square this circle is why our successors in the Star Trek future can so easily defeat computer minds by causing them to halt fatally when given simple paradoxes, nonsense, syntax errors, and so forth.

But the trouble goes deeper than that, because to the extent that most modern writing is increasingly distorted by epistemic corruption of ideological origin, that corruption is also getting embedded in systems that we hope will be trustworthy enough to perform in completely non-corrupted ways in other contexts. Which, sad to say, by all appearances, is increasingly less likely even for actual human 'intelligences'. Taking a close look at the kinds of firewalls and filters the engineers develop to get around these corrupting influences would be quite fascinating indeed.


I wonder how these chatbots will interact with children, and whether that will be a good thing or not. Will chatbots exacerbate underlying psychopathology or ameliorate it? Maybe including a therapy-trained learning set will help, but who will select the parameters? Will there be atheistic, Buddhist, Christian, Jewish, Muslim, etc. inclinations built into these bots, or will the bots evolve their own morality and metaphysics?


Chatbots are so new - it's going to take some time for people to figure out how to use them properly. For years text messaging was a thing, but it was rarely used. I remember people used to say, "Why do they just text me instead of calling me?" Now people say the opposite!
