👋 Hey {{first_name|there}},
Why this matters / where it hurts
You’ve probably seen this in production. A list endpoint feels fine. Page 1 is fast. Page 5 is okay. Then someone exports data, scrolls forever, or a batch job crawls the whole dataset. Suddenly p95 drifts up, CPU climbs, and the database starts doing work that feels pointless.
I used to treat pagination like a UI detail. Offset and limit, done. It looked clean in the ORM. Then a customer with a big dataset hit page 2000 and the query started walking past thousands of rows just to throw them away. The endpoint did not get a little slower. It fell off a cliff. Incidents at odd hours followed.
If you read last week’s “Queries and Indexes With Predictable Latency,” this is the next step. An index can be perfect and your endpoint can still melt if pagination forces the database to scan and discard rows on deep pages.
We’ll make pagination a design choice, not an afterthought. One pattern, one sheet per endpoint, and a few defaults that keep p95 stable as data grows.
🧭 Mindset shift
From: “Offset pagination is fine, we can optimize later.”
To: “Pagination is an access path. Design it for growth from day one.”
Offset pagination looks innocent. The database still has to find the first N rows, then walk past N, then return the next page. At page 2000 with page size 50, you are effectively asking it to walk past 99,950 rows. Even if it uses an index, it is doing work you do not keep.
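Here is that shape in miniature. A sketch using SQLite and a hypothetical append-only `events` table (table and column names are mine, not from any real schema) — the point is the `LIMIT ? OFFSET ?` clause, where the offset grows with page number:

```python
import sqlite3

# Hypothetical append-only events table, sorted by (created_at, id).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at INTEGER)")
conn.executemany(
    "INSERT INTO events (id, created_at) VALUES (?, ?)",
    [(i, 1700000000 + i) for i in range(1, 101)],
)

PAGE_SIZE = 10

def offset_page(page_number: int):
    # The database must locate and then discard
    # (page_number - 1) * PAGE_SIZE rows before the page even starts.
    offset = (page_number - 1) * PAGE_SIZE
    return conn.execute(
        "SELECT id, created_at FROM events "
        "ORDER BY created_at, id "
        "LIMIT ? OFFSET ?",
        (PAGE_SIZE, offset),
    ).fetchall()

# Page 5 walks past 40 rows it never returns; page 2000 at size 50
# would walk past 99,950 rows the same way.
page5 = offset_page(5)
```

The query looks symmetric across pages, which is why the cost is easy to miss in code review: nothing in the SQL changes shape, only the amount of discarded work.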
Keyset pagination is the opposite. You keep a cursor, usually the last seen key, and ask for the next slice. The database seeks to a position and continues. It scales with page size, not with page number. That is the difference.
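The keyset version of the same endpoint, on the same hypothetical `events` table. Instead of an offset, the caller hands back the last `(created_at, id)` pair it saw, and the query seeks straight past it:

```python
import sqlite3

# Same hypothetical events table as before.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at INTEGER)")
conn.executemany(
    "INSERT INTO events (id, created_at) VALUES (?, ?)",
    [(i, 1700000000 + i) for i in range(1, 101)],
)

PAGE_SIZE = 10

def keyset_page(cursor=None):
    # cursor is the (created_at, id) of the last row already seen.
    if cursor is None:
        return conn.execute(
            "SELECT id, created_at FROM events "
            "ORDER BY created_at, id LIMIT ?",
            (PAGE_SIZE,),
        ).fetchall()
    # Row-value comparison seeks past the cursor position; with an
    # index on (created_at, id), cost tracks page size, not page depth.
    return conn.execute(
        "SELECT id, created_at FROM events "
        "WHERE (created_at, id) > (?, ?) "
        "ORDER BY created_at, id LIMIT ?",
        (*cursor, PAGE_SIZE),
    ).fetchall()

first = keyset_page()
last_row = first[-1]
second = keyset_page(cursor=(last_row[1], last_row[0]))
```

The row-value comparison `(created_at, id) > (?, ?)` works in SQLite and Postgres; on a database without it, you can expand the condition to `created_at > ? OR (created_at = ? AND id > ?)` for the same result.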
Two rules that keep this simple
- Use stable ordering with a unique tie breaker, usually `created_at` plus `id`.
- Move from “page number” to “cursor token,” even if the UI still shows “next.”
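The second rule can be as small as an opaque token wrapping the last seen key. A minimal sketch, assuming the `(created_at, id)` ordering from rule one; the field names are illustrative, not a standard:

```python
import base64
import json

def encode_cursor(created_at: int, row_id: int) -> str:
    # Serialize the last seen sort key and wrap it in URL-safe base64
    # so clients treat it as an opaque token, not a page number.
    payload = json.dumps({"created_at": created_at, "id": row_id})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_cursor(token: str):
    data = json.loads(base64.urlsafe_b64decode(token.encode()))
    return data["created_at"], data["id"]

token = encode_cursor(1700000042, 42)
```

Because the token is opaque, you can later change the sort key or add fields without breaking clients, which is exactly the flexibility a raw page number never gives you.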