Russ Roberts' piece on the media and Israel
Why Books Don't Work by Andy Matuschak https://andymatuschak.org/books/
It gave a B+ to my recent Substack post, which I think is a fair grade. The feedback was: "It could be improved with more specific examples and a deeper analysis of the implications of the differences between LLM and human cognition." The essay perhaps assumes too much reader familiarity with its two subjects, rap lyrics and human & LLM cognition.
The essay also isn't really an op-ed style essay, given its subject matter, but I am not sure why that should be relevant to the grader.
The link is here: https://davefriedman.substack.com/p/beyond-the-beat-how-ai-interprets
FWIW, a human who read the same post commented that I was "off base," so I suppose that person would have given it an F.
I've used it to grade one of my favorite essays: https://danluu.com/startup-tradeoffs/. I agree with the assessment.
Overall Grade: 9/10
This essay stands out for its clear and well-structured arguments, balanced engagement with multiple viewpoints, and demonstration of intellectual humility. It avoids overgeneralizations and respects the complexity of the subject. The author's approach of blending personal experiences with broader industry data and trends adds depth to the analysis. Improvements could include perhaps a more direct engagement with counterarguments or a deeper exploration of the psychological and lifestyle factors that influence the decision to join a startup or a large company. Nonetheless, the essay is an exemplary piece in terms of reasoned argument and fair representation of diverse perspectives.
I asked ChatGPT to critique the first chapter of a novel I'm working on. Its comments were absolutely on the money - I would venture to say more detailed and insightful than any human editor I've worked with. A humbling experience.
I'd like to see how it handles something like this...
I'd like to see it take a shot at some famous (or infamous) Supreme Court opinions and dissents, perhaps sanitized to exclude the legal citation coding and footnotes if necessary for effective grading. I wonder if the bots are smart enough yet to pick up on all the "professionally and politely formulated incoherence" at the heart of many recent opinions.
I purposely gave it what I thought was a poorly reasoned piece.
The original was in the Tallahassee Democrat but I think it was gated after a few articles per month so I found a place that copied it. This deals with the murder of Dan Markel, a Florida State law prof.
The overall grade was a C-, actually a little better than I thought it would give. Grade inflation is all around us.
Has anyone asked the grader to rewrite an essay such that it would get a better grade?
BIG concern: the ChatGPT models are constantly being updated and learning. Some of the info they "learn from" has come, and will come, from their own outputs. A feedback loop. Not sure how that will be dealt with.
I recall thinking I had learned some things I didn't know and wouldn't have guessed from this article. But I read it years ago and don't have a strong opinion about its findings. Still, it seems like it would be a good candidate for memory-holing so might as well subject it to a test while it's still up.
I nominate one of Helen Andrews' revising-the-revisionists pieces (I think, at this point, she would have to be granted the honor of being the revisionist). Maybe this one: https://www.theamericanconservative.com/how-fake-history-gets-made/
I had never heard of the incidents discussed, and feel it is unlikely that I *could* learn the truth of them from a google search at this date; and harbor a similar doubt about what your computer program would be able to do with presumably a whole lot of unreliable verbiage to mine. The truth seems more and more inconvenient, and an invention circa 2023 seems unlikely to be proof against that unfortunate fact. But I am by no means confident about any of it.
Generally something rubs me the wrong way when there's a terrible event, like the one in Tulsa - and the response is: "This event is not enough to sow perpetual hatred with - we want our own terrible event like that, let's see what we've got to work with!"
Please try some of these: paulgraham.com/articles.html
Did you write a substack on how you programmed your grader? If so, I missed it.
I predict that GPT will have difficulty when the essay in question is truthful but not sincere. I am curious as to what it makes of Matt Labash's _Living Like a Liberal_ https://www.washingtonexaminer.com/weekly-standard/living-like-a-liberal
Another challenge is the interactive visualisation essay, such as this one by Bret Victor about interactive visualisation. http://worrydream.com/LadderOfAbstraction/ I think that ChatGPT ought to be able to give us much, much better interactive visualisations without all the hard work Bret alludes to, by giving us an 'explainer' to go with our explanations -- something a reader can use to ask us what we really meant when we wrote something, and get that bit correct.
Dan Williams may have coined a great phrase, Marketplace of Rationalizations. Importantly different from misinformation, and a better explanation for so much false belief. Especially among Dems.
Since I believe the heart decides, but tells the brain to rationalize, this article confirms that bias.
I’ve run several essays/opinion pieces through in a perfunctory way just to quickly see what it would do, including articles from Heather Heying, David Friedman, Aaron Renn, and an opinion piece from Mother Jones. Like you, I find the emphasis on neutrality to be irritating; after all, people *are* expressing an opinion. Also, it doesn’t always assign a letter grade, giving Heying’s article 4/10, Renn’s a C+, and saying the Mother Jones piece “would receive a low score for balanced argumentation.” I liked that it considered an array of factors, such as counterarguments, fallacies (if present), assumptions, etc. Here's an essay suggestion - https://www.notonyourteam.co.uk/p/winning-through-social-dysfunction
It would be interesting to see if ChatGPT is as concerned with balance when grading a left leaning essay.