Discussion about this post

Christopher B

We usually forgive people for providing bad information because we expect they've done their best to sift through the information readily available to them before formulating an answer, and we adjust our expectations to the situation we've put them in. If I randomly ask my wife when her next appointment is, I'll more than likely be satisfied with a vague answer like 'next month', or even a wrong answer like 'next Tuesday' when it's actually on Wednesday; but if she's looking at her calendar, I'd expect a precise and correct answer. I'd say we use a similar heuristic for the accuracy of AI answers, but our expectations are far higher, because we tend to imagine that the AI is sifting through mountains of information looking for the nugget we want, not acting like a giant random number generator producing the answer we'll find most pleasing.

Richard Fulmer

My experience with LLMs is that they often provide bad, or at least misleading, answers at first. If you know enough to push back with facts and logic, they'll provide much better results. The problem is knowing enough to be able to push back, or at least to ask clarifying questions. I can do that in some areas, but not at all in others.

29 more comments...
