something we call the “sandwich” workflow. This is a three-step process. First, a human has a creative impulse, and gives the AI a prompt. The AI then generates a menu of options. The human then chooses an option, edits it, and adds any touches they like.
Doing this, they say, will save the human some of the tedious work. They draw an analogy with Copilot, which is an AI tool for software developers. I find their Copilot examples very compelling.
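To make the three steps concrete, here is a minimal sketch of what the sandwich workflow might look like in code. It assumes the late-2022 OpenAI Python completions API; the prompt text and the number of options are my own placeholders.

```python
# A minimal sketch of the "sandwich" workflow, assuming the OpenAI
# Python library circa late 2022. Prompt text and n are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have an API key

# Step 1: the human has a creative impulse and gives the AI a prompt.
prompt = "Draft an opening paragraph for an essay on AI and trust."

# Step 2: the AI generates a menu of options (here, three candidates).
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    n=3,
    max_tokens=200,
)

# Step 3: the human reviews the menu, picks one, and edits it by hand.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Option {i} ---")
    print(choice.text.strip())
```

The human’s judgment enters at the top (the prompt) and at the bottom (the final edit), with the machine filling in the middle; hence the “sandwich.”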
Tyler Cowen once offered a similar vision for human/computer cooperation in chess, with the human deciding when to take the computer’s recommendation and when to override it. My own view was that once computer chess became good enough to defeat the best human, a few more months of improvement in AI would make any human interference unhelpful. That is, human overrides of computer recommendations would be wrong more often than right. I gather that this proved to be the case.
A use case for AI that interests me is speech-to-text conversion for podcasts. I find that somewhat helpful, but it still does not save me much time when dealing with podcasts. Come back to me when I can tell an AI “Listen to this podcast and write a concise summary, including a few excerpts that would particularly interest me.”
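The pieces for that arguably exist already; here is a sketch of such a pipeline, assuming OpenAI’s open-source Whisper model for the speech-to-text step and a completion model for the summary. The file name and prompt wording are hypothetical, and the crude truncation stands in for proper handling of long transcripts.

```python
# A sketch of the podcast use case: transcribe, then summarize.
# Assumes the open-source openai-whisper package and the OpenAI
# completions API; file name and prompt wording are hypothetical.
import openai
import whisper

# Step 1: speech-to-text on the podcast audio.
model = whisper.load_model("base")
transcript = model.transcribe("podcast_episode.mp3")["text"]

# Step 2: ask a language model for a concise, personalized summary.
summary_prompt = (
    "Write a concise summary of this podcast transcript, including "
    "a few excerpts that would interest an economics blogger:\n\n"
    + transcript[:6000]  # crude truncation to respect context limits
)
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=summary_prompt,
    max_tokens=400,
)
print(response.choices[0].text.strip())
```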
If Copilot were to produce software with subtle bugs that are difficult to locate and fix, then it would not save developers any time. I suspect that at this point trying to use AI to help me write would run into this sort of problem. Before seeing Noah’s post, I had been working on an essay describing some bad experiences I have had with AI. Note that more recently another writer had a better experience with OpenAI than I did. But I still suspect that a couple of the statements that the AI made with confidence were BS.

By the way, several writers, including Zvi Mowshowitz, have already delved into the newest thing in AI, ChatGPT. The Zvi sums up the terrible things that people have already found the chatbot willing to help with. But perhaps even more disturbing is finding that ChatGPT seems Woke. (Pointer from Tyler Cowen.) One writer gave a crude IQ test to ChatGPT and estimated a score of about 100, which is average. If somebody opens up a betting market on when ChatGPT reaches genius level (130), I would guess by next July. Never underestimate the ability of AIs to improve quickly.

Here is what my essay was saying before the latest flurry of news.
Item: Less than a minute after I signed up for Instagram, it kicked me off for supposedly violating its terms of service.
Item: I gave OpenAI a prompt to “Write an article about economist Arnold Kling” and it returned a fake biography.
Item: YouTube’s recommendation algorithm suddenly decided that I am young, single, and relationship-challenged.
All of these examples, which I will elaborate on below, show that artificial intelligence software still has great difficulty identifying human beings: distinguishing them from one another and distinguishing them from other AIs.
This is likely to be a major problem in the near future. One reason that Elon Musk’s idea of charging for the Blue Check backfired is that Twitter’s system for distinguishing real accounts from fake ones was not ready for prime time, and trolls decided that if they could pay eight bucks, impersonate a celebrity, and get away with it, that was worth it.
In an article for the WSJ about AIs able to produce art and written material, Chris Mims writes,
One downside of this type of artificial creativity is the potential erosion of trust. Take online reviews, where AI is exacerbating deceptive behavior. Algorithmically generated fake reviews are on the rise on Amazon and elsewhere, says Saoud Khalifah, CEO of Fakespot, which makes a browser plug-in that flags such forgeries. While most fraudulent reviews are still written by humans, about 20% are written by algorithms, and that figure is growing, according to his company’s detection systems, he adds.
[Note: Amazon disagrees with the 20% figure, and claims that it is much lower.]
What I see coming is an arms race between AIs that generate sophisticated forms of spam (like fake book reviews) and AIs that can detect spam. Or between AIs that impersonate you and AIs that can identify the real you.
What happened to me
A friend recommended some standup comedy clips, but they were on Instagram and I wasn’t. So I downloaded the app on my phone. As soon as I created an account, I went to my computer and tried to search for the clips. I was blocked, with a very hostile message from Instagram reminding me about its terms of service. I thought, “How can I be in jail? I haven’t even tried to post anything! Am I on some sort of blacklist?”
There was a button I could click to “complain to customer service,” and I clicked on it. It directed me to verify my email. Then it directed me to verify my phone number. Then it said it might take a day, but my access to Instagram would be restored. Seconds later, it restored my access.
My best guess is that Instagram’s digital sentry system was activated because I signed in from a new device (a computer rather than the phone) so soon after creating my account. Social media companies need AI sentries, but the AIs are going to make both Type I errors (kicking out innocent users) and Type II errors (failing to recognize a mischievous actor).
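To keep the two error types straight, here is a toy illustration; the labels and the sentry’s verdicts are invented for the example.

```python
# Toy illustration of the two error types for an AI sentry.
# "bad" = mischievous actor, "ok" = innocent user; data is made up.
actual  = ["ok", "bad", "ok", "ok", "bad", "ok"]   # ground truth
flagged = [True, False, False, True, True, False]  # sentry's verdicts

# Type I error (false positive): an innocent user gets kicked out.
type_i = sum(1 for a, f in zip(actual, flagged) if a == "ok" and f)
# Type II error (false negative): a mischievous actor slips through.
type_ii = sum(1 for a, f in zip(actual, flagged) if a == "bad" and not f)

print(f"Type I errors (innocent users blocked): {type_i}")   # 2
print(f"Type II errors (bad actors missed):     {type_ii}")  # 1
```

Tightening the sentry to reduce one error type generally increases the other; that is the tradeoff the AI has to navigate.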
OpenAI’s response to my prompt to write an article about “economist Arnold Kling” was to scramble together four paragraphs apparently based on several other libertarian-leaning economists. That would have been OK if it had been up front about it, saying that these were economists similar to me and giving their names. Instead, it attached my name to what were in fact biographical details of other people.
It said that I was born in New York City and raised in the Bronx, when in fact I was born and raised in St. Louis. It claimed that I received my Ph.D. from Harvard in 1974, even though I actually got my degree from MIT in 1980. It also said with a straight face that I wrote The Myth of the Rational Voter, which is by Bryan Caplan, as well as Against School: How the Public Education System Cripples Our Kids, which is by the late John Taylor Gatto.
In short, OpenAI was lying about me. Apparently, it did not know it was lying. That leads me to be skeptical that AIs can spot the lies of other entities, either human or artificial. [UPDATE: Ben Thompson caught ChatGPT lying about Thomas Hobbes.]
YouTube’s recommendation algorithm used to work for me. Occasionally, to unwind, I liked to watch videos about the Beatles or about sports heroes from the 1960s. YouTube’s feed became very adept at finding content for me, like Mike Pachelli’s instructional videos for playing Beatles songs on guitar or videos featuring Bob Gibson, Sandy Koufax, Wilt Chamberlain, Joe Namath, or Coach Hank Stram (“65 toss power trap. Ya-ha-ha-ha-ha!”). It even figured out that I was a fan of the old American Football League, allowing me to revisit memories of Cookie Gilchrist, Lance Alworth, Bobby Bell, and many others.
Then one day, my feed became dominated by relationship videos. I could learn how to spot a narcissist, how to tell if a woman really wants me for a long-term relationship, how to satisfy a woman in bed. So many relationship videos. What did I do? How did YouTube decide that I was no longer a nostalgic Boomer but instead an insecure young man trying to navigate the dating scene?
Actually, I was still seeing a few recommendations for interviews with George Martin and World Series highlights featuring the Gibson-Brock-McCarver Cardinals. So I became a sort of cross-generational amalgam—part grandfather, part Gen Z.
Maybe if I wait a few months, AIs will have improved to the point where they can save me time in the way that Smith and “roon” foresee. But for now I am from Missouri, the Show-Me State.
AI mistakenly credited Richard Hanania with “Myth of the Rational Voter” as well. Bryan said he might have to rethink his stance on AI risk.
https://twitter.com/bryan_caplan/status/1598777482127192065?s=46&t=J50CF2-TWyKPsOEz6U0zog
What you are seeing in these results is an outgrowth of how data at the big tech companies is much more about groups than individuals. These AI models from OpenAI and Google aren't trained to know about specific individuals. In fact, they don't want to know about them - that could get the researchers in trouble (see the issues with GitHub's Copilot stealing code without attribution from legitimate projects). So right now, the more ridiculous the response when asked about individuals, the better.
Another way to think about it is that AI researchers want the AIs to "hallucinate" so that there is more randomness, which can lead to more creativity. So when you ask for a biography of someone, the AI will return an alternate-history version of that person that, while wrong, still kinda-sorta has the same style and features.
All of this weirdness is a feature right now, not a bug. It's how the AI researchers are learning to train the system to write prose, chats, poetry, etc. by being a bit loose (but not too loose) with the data.
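A sketch of the "looseness" knob being described, assuming the OpenAI completions API: the temperature parameter controls how random the sampling is, and higher values trade factual reliability for variety.

```python
# Sampling "looseness" is controlled by temperature: near 0 is almost
# deterministic; higher values give more varied (and more error-prone,
# "hallucinatory") output. Model name and prompt are placeholders.
import openai

for temp in (0.0, 0.7, 1.5):
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a one-sentence biography of a hypothetical economist.",
        temperature=temp,
        max_tokens=60,
    )
    print(f"temperature={temp}: {response.choices[0].text.strip()}")
```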
I expect, based on what I've read, that more training that includes individuals' styles will soon be happening. Midjourney, a text-to-image AI, already allows for things like "in the style of Picasso", so someday soon you may be able to say "in the style of Arnold Kling". It may even know who you are by then, too! However, given current pushback on targeted advertising, don't be too surprised if it doesn't "know" you. Again, that's a feature that helps keep the researchers safe from trouble. It may be a long time before we breach that wall...