11 Comments

I disagree with 'I think that “tell me the answer” is the wrong use case for LLMs. I still think you are better off using ordinary search and then weighing the results yourself.'

The 'Research' mode of you.com (Pro) saves me a lot of time by summarizing with links.

A recent example was a family discussion about whether the German phrase 'Das macht Sinn' should be avoided as an unnecessary Anglicism or accepted as proper German. The answer provided by you.com was brilliant: history, reasons with links for both positions, and a thoughtful (?) conclusion.

This type of research is a frequent use case of search, and increasingly underserved by Google.


Off topic, but 'Das macht Sinn' sounds like an Anglicism to me. What are the good alternatives?


I would add that for finding correct answers to practical questions, for example how to do something in an Excel spreadsheet, I now find LLMs much superior to normal search.


"Take any publicly released redacted document, and unredact it."

This is easy to test. Have the chatbot play Redactle.
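If someone wanted to actually run that test, a rough harness might look like the sketch below. Everything in it is hypothetical: guess_fn stands in for whatever chatbot you wire up, the stop-word list is a simplified stand-in for Redactle's real rules, and the scripted guesser at the end exists only to show the loop working.

```python
# Minimal Redactle-style harness (a sketch, not the real game's rules).
# guess_fn is a hypothetical callback: it receives the current redacted
# text and returns one guessed word, e.g. by asking your chatbot of choice.
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "is", "was", "it"}

def redact(text, revealed):
    """Mask every word that is neither a stop word nor already guessed."""
    def mask(match):
        word = match.group(0)
        if word.lower() in STOP_WORDS or word.lower() in revealed:
            return word
        return "█" * len(word)
    return re.sub(r"[A-Za-z]+", mask, text)

def play(title, article, guess_fn, max_turns=50):
    """Reveal guessed words until every word of the title is uncovered."""
    revealed = set()
    for turn in range(max_turns):
        board = redact(f"{title}\n\n{article}", revealed)
        revealed.add(guess_fn(board).strip().lower())
        if all(w.lower() in revealed or w.lower() in STOP_WORDS
               for w in re.findall(r"[A-Za-z]+", title)):
            return turn + 1  # number of guesses needed
    return None  # not solved within the turn limit

# Scripted guesser standing in for the chatbot, just to show the loop:
scripted = iter(["game", "article", "wikipedia", "redactle"])
result = play("Redactle",
              "Redactle is a daily game built around a redacted Wikipedia article.",
              lambda board: next(scripted))
print(f"solved in {result} guesses" if result else "not solved")
```

Whether the model genuinely infers the article from its redacted shape, or simply recognizes the source text, is exactly the caveat raised in the reply below.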


Though maybe it would just find the source doc.


You and Ethan are likely correct that the alpha from prompting skill is temporary. The real question, I think, is not whether the skill is important, but how long it will remain important.


"... AI is the anti-printing press: it collapses all published knowledge to that single answer..."

Maybe. The answer depends enormously on the question, or series of questions, and on how the questions are phrased. ChatGPT has given me answers that I knew to be wrong. By pointing out the flaws in its answer and posing the questions differently, I was sometimes able to get ChatGPT to admit that its original answer was incorrect.


Yes, but you knew that those answers were wrong because you had knowledge in your brain. The internal bandwidth of your brain is tremendous, so you could bring a lot of stored stuff to bear in mere moments. In comparison, the bandwidth between your brain and an LLM is minuscule, even smaller than between you and another human (because of nonverbal cues). With an LLM assistant on an unfamiliar subject, you are in the same position as a small child who relies on its parents to answer questions. Until the child starts learning systematically - building up stuff for internal bandwidth to work on - its world is limited to its parents' answers. That, I believe, is what Arnold meant.


I think the "alpha" associated with this kind of discernment about the quality of the results, and with the need to iterate, refine, and reprompt, will be both important and around for a long time. The ability to be a good judge of the output of an automated system known to give (for whatever combination of reasons) results that are bad enough, often enough, is something that won't be automated.

[Comment deleted]

For the advertising piece, it's a lot like money from search today: people are going to ask these systems questions about things they want or want to do, and the answers to those questions involve spending money. The system companies will profit by steering these people toward vendors who pay to be put at the top of the list of recommendations, plus a percentage of every sale that can be linked and attributed to those recommendations.

And those recommendations will be good but rarely the best ones. There is a thing I buy that is sold by about a dozen companies, and one of them is definitely better than the rest: higher quality, lower prices, and better customer service. But they don't do SEO or pay the search companies. That helps keep prices low, but it means that when you search for those items, you will never see that company's website in the first ten pages of results from any of the top internet search engines; they might as well be on the "dark web" with that kind of undiscoverability. People in the market know about the company and, I kid you not, spread the recommendation socially via word of mouth, which means you have to go ask a friendly and competent human you trust in order to get the real deal. That reminds me of, though is still distinct from, Mickey Kaus' concept of "Undernews".
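To put the incentive in concrete terms, here is a toy sketch (vendor names, weights, and numbers all invented) of how a recommendation list that blends relevance with paid placement can bury the best-matching vendor that pays nothing:

```python
# Toy model of paid-placement ranking; every name and number is made up.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    relevance: float   # how well the vendor actually matches the request (0-1)
    bid: float         # what the vendor pays per placement
    commission: float  # share of attributed sales kicked back to the platform

def ranked_recommendations(vendors, sponsorship_weight=2.0):
    """Order vendors by relevance plus a paid boost, not by relevance alone."""
    return sorted(vendors,
                  key=lambda v: v.relevance + sponsorship_weight * (v.bid + v.commission),
                  reverse=True)

catalog = [
    Vendor("BestUnknownCo", relevance=0.95, bid=0.00, commission=0.00),
    Vendor("BigAdSpender",  relevance=0.70, bid=0.30, commission=0.10),
    Vendor("MidTierShop",   relevance=0.60, bid=0.15, commission=0.05),
]

for v in ranked_recommendations(catalog):
    print(v.name)
# Prints BigAdSpender and MidTierShop ahead of BestUnknownCo, even though
# BestUnknownCo is the closest match: the word-of-mouth problem described above.
```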

[Comment deleted]

Would you please restate the question about qualified immunity to which you would like me to respond?
