Slow Reading in a Fast-Model World
I read slower than I used to.
Not because my comprehension has declined. Because I’ve started noticing when I’m actually reading versus when I’m harvesting. Scanning for the extractable part. Looking for the sentence I can quote, the framework I can apply, the insight I can convert to an action item.
AI didn’t create this habit. But it’s sharpened the question: if a model can extract the structure of an argument in seconds, what is reading for?
What extraction misses
Here’s what happens when you read slowly through a dense argument:
You encounter a sentence you don’t fully understand. You slow down. You reread. You hold it alongside the sentences before and after it. Something in you adjusts — not the sentence, but your model of the world the author is describing.
That adjustment is not the conclusion of the argument. It happens somewhere in the middle, in a place that wouldn’t show up in a summary.
A summary can tell you what a book argues. It can’t give you the experience of having your assumptions quietly rearranged by page forty.
The asymmetry
This is the asymmetry I keep returning to: AI is extremely good at the end of reading and not present at the beginning.
The end of reading is extraction — what did this say, what are the key points, how does it connect to other things. LLMs are extraordinary at this. Give a model a dense paper and ask for the three load-bearing claims. It will probably identify them correctly.
The beginning of reading is exposure — letting a text change the shape of your attention before you know what you’re looking for. This is the part that can’t be outsourced, not because AI lacks the capability, but because the whole point is that you are the one being changed.
What I’ve started doing
I’ve made a rule for myself: anything I care about enough to think with, I read myself first.
Then I use AI. Ask it what I missed. Push on my interpretation. Find the counterarguments. The AI session becomes a second pass — adversarial, generative, fast.
But the first pass stays mine. Slow, incomplete, sometimes confused.
This isn’t a productivity optimization. It’s a bet that the confused first pass is where most of my actual thinking happens — and that if I skip it, I’m not saving time, I’m just borrowing conclusions I don’t own.
The society question
I think about this at scale.
If a generation of people grows up delegating first-pass reading to AI, what happens to the capacity to be changed by a text? Not to extract from it — to be moved by it, troubled by it, sent down a path you didn’t expect?
I don’t know the answer. I’m not sure it’s a catastrophe. Humans have always outsourced cognitive labor — writing itself was a way of outsourcing memory.
But I think it’s worth watching. Worth building tools that encourage slow first contact, not just fast extraction.
Worth asking, regularly: when did I last read something that changed what I thought was possible?
If the honest answer is “I had AI summarize it,” that might be fine. Or it might be a signal that something is atrophying quietly, in a place that won’t show up in a summary.