For the Next and Prev buttons you can use a WHERE clause instead of OFFSET.
Example (using LIMIT 10; sample data explained below): You are on some page which shows you 10 rows with the ids [2522,2520,2514,2513,2509,2508,2506,2504,2497,2496]. In my case this page is created with
select *
from link l
order by l.id desc
limit 10
offset 999000
For the next page you would use
limit 10
offset 999010
getting rows with ids [2495,2494,2493,2492,2491,2487,2483,2481,2479,2475].
For the previous page you would use
limit 10
offset 998990
getting rows with ids [2542,2541,2540,2538,2535,2533,2530,2527,2525,2524].
All of the above queries execute in 500 msec. Even with the "trick" suggested by Sanj it still takes 250 msec.
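For reference, a deferred-join variant of the OFFSET query (which, as far as I understand, is the kind of trick meant here: apply the OFFSET to the primary key only, then join back to fetch the full rows) would look roughly like this:
select l.*
from link l
join (
  select id
  from link
  order by id desc
  limit 10
  offset 999010
) page on page.id = l.id
order by l.id desc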
Now, given the current page with minId=2496 and maxId=2522, we can create the queries for the Next and Prev buttons using a WHERE clause.
Next button:
select *
from link l
where l.id < :minId -- =2496
order by l.id desc
limit 10
Resulting ids: [2495,2494,2493,2492,2491,2487,2483,2481,2479,2475].
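If you want to try the parameterized form directly in the MySQL client, a server-side prepared statement with a user variable can stand in for the :minId placeholder (the placeholder syntax itself depends on your driver):
set @minId = 2496;
prepare next_page from
  'select * from link l where l.id < ? order by l.id desc limit 10';
execute next_page using @minId;
deallocate prepare next_page;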
Prev button:
select *
from link l
where l.id > :maxId -- =2522
order by l.id asc
limit 10
Resulting ids: [2524,2525,2527,2530,2533,2535,2538,2540,2541,2542].
To reverse the order, you can wrap the query in a subselect:
select *
from (
select *
from link l
where l.id > 2522
order by l.id asc
limit 10
) sub
order by id desc
Resulting ids: [2542,2541,2540,2538,2535,2533,2530,2527,2525,2524].
These queries execute in "no time" (less than 1 msec) and provide the same result.
You cannot use this solution to create page numbers. But I don't think you are going to output 200K page numbers anyway.
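To see where the speed difference comes from, you can compare the EXPLAIN output of the two variants. The keyset query should report a range access on the PRIMARY key and only has to read the 10 rows it returns, while the OFFSET query has to walk past roughly a million index entries first (the exact EXPLAIN output depends on your MySQL version):
explain
select *
from link l
where l.id < 2496
order by l.id desc
limit 10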
Test data:
The data used for the example and the benchmarks was created with
CREATE TABLE `link` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`url` VARCHAR(255) NOT NULL,
`website` VARCHAR(100) NULL DEFAULT NULL,
`state` VARCHAR(10) NULL DEFAULT NULL,
`type` VARCHAR(100) NULL DEFAULT NULL,
`prio` VARCHAR(100) NULL DEFAULT NULL,
`change` VARCHAR(100) NULL DEFAULT NULL,
`last` VARCHAR(100) NULL DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `url` (`url`)
) COLLATE='utf8_general_ci' ENGINE=InnoDB;
insert into link
select i.id
, concat(id, '-', rand()) url
, rand() website
, rand() state
, rand() `type`
, rand() prio
, rand() `change`
, rand() `last`
from test._dummy_indexes_2p23 i
where i.id <= 2000000
and rand() < 0.5
where test._dummy_indexes_2p23 is a table containing 2^23 ids (about 8M). So the data contains about 1M rows, with roughly every second id randomly missing. Table size: 228 MB.
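If you don't have such a helper table, an equivalent one can be built with a recursive CTE (this sketch assumes MySQL 8+ and raises the recursion depth limit first; generating 2^23 rows this way takes a while):
set session cte_max_recursion_depth = 8388608;
create table test._dummy_indexes_2p23 (id int not null primary key);
insert into test._dummy_indexes_2p23 (id)
with recursive seq (id) as (
  select 1
  union all
  select id + 1 from seq where id < 8388608
)
select id from seq;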