Reading a nonfiction book from cover to cover is not efficient. I used to say that I read books “from the outside in.” I look at the book flap to find out about the author, who wrote the blurbs, and the subject matter of the book. Then I read the introduction and conclusion in order to get the main ideas. If I have read something by a different author that seems relevant, I look for that author in the index, and I head to those pages.
If and when I do get around to reading a nonfiction book in linear fashion, I use a “Stop, Look, and Listen” approach. Every time I come to an interesting insight, I stop and think about it. I try to put it into my own words. I might highlight a passage or make a note that I could use in writing a review of the book.
I should note that very few nonfiction books are so superbly written that I read them to appreciate the author’s craft. Winston Churchill’s speeches, yes. Maybe Tom Wolfe. Maybe David Halberstam.
I would not say that any of my own published books are superbly crafted. I put more effort into my macro memoir.
I have adapted my reading to incorporate AI. AI is not useful for very recent books, because the AI will not have read them, and its information, based on whatever commentary is available on the Web, is sketchy. But for older books, it works fine.
Once again, I believe in “Stop, Look, and Listen.” I start by asking the AI to summarize the key themes of the book. For each theme that the AI lists, I stop and try to put it into my own words. I test my understanding by feeding my words into the AI, in order to get confirmation that my interpretation is correct. Another way that I ensure understanding is to suggest possible examples or ask the AI to provide examples.
ChatGPT or Claude usually will ask me if I want to explore something further. Often, I will say yes, so that I can get deeper into the topic.
I will prompt the AI to provide me with critiques of the author’s main themes. And I will prompt the AI to suggest other readings on the topic. I will then ask the AI for summaries of these other readings. I call this “rabbit holing,” because I can find myself starting with one topic and ending up in a very different place.
I should emphasize that using an AI and the “Stop, Look, and Listen” approach takes more time and effort than just asking the AI to write a summary. The point is to think about the ideas, not just skim the AI’s bullet points and pretend that now I understand the book.
I think that most books can be “read” using AI in this manner. I could be wrong about that, and it could be that I miss out by not reading a book in linear fashion. Or even if I am correct, I may be making authors feel discouraged about writing books.
But I look at it this way. In a 300-page book, how many insights does an author really offer? 15? Maybe 20?
Often, what you remember about a book can be reduced to a tweet, or just a bumper sticker. So when I’ve finished a half-hour conversation with an AI about a book, if I have a solid handle on five key points, I am ahead of the game.
One of my professors told me that only 10% of any nonfiction book was worth reading, and the trick was finding that 10% with as little effort as possible.
So I said, "You've published about a dozen books. Is only 10% worth reading?"
His reply was, "Over them all, maybe 4-5% in each book, because I tend to repeat the important stuff."
So I asked, "Why not just write less of the 90%?"
His answer, "Most of your readers need to be led to the important stuff or they'll never get it."
So I said, "Aren't your readers mostly other academics?"
And he said, "Yes," and smiled.
I worry that the condensed, insights-only version, stripped of the context of how each insight was formed, may be more easily forgotten. That goes especially for somebody who is now thinking, "Oh goodie! With this technique I can now get through ten times as many books!" I suspect that some things are learned better when the reader gets regular sleep while the new information is going in. Some students have this problem: they can cram for a test but remember nothing the next year of what they supposedly learned. Whatever deep connections the ideal learner would have made, tying the new material to the things they already know, never happened. They are as unprepared for a heavy course taught with serious prerequisites as the people who never took the course they tested well in.
I don't know of any studies on how knowledge fades, but we ought to be able to find a cognitive science student and get him or her interested in your AI teaching experiment and in this approach to reading books and learning.