
I have a table with a column `abc` carrying a Unix timestamp in milliseconds (e.g. 13898161481435) and I want to run a between-dates select.

It would not be efficient to do a

where TO_CHAR(TO_TIMESTAMP(abc / 1000), 'DD/MM/YYYY') > '14/01/2014 00:00:00' and ..;

which would convert every record.

Rather do something like
where abc > ('14/01/2014 00:00:00' tobigint()) and abc < ...

But I can't find any reference for this, only for the reverse case.

– javadude

4 Answers


Try this

WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')

PostgreSQL Docs
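Note that `extract(epoch from …)` returns seconds, while the column in the question appears to hold milliseconds (it is divided by 1000 before `to_timestamp`), so the comparison value likely needs scaling — a sketch, not from the original answer:

-- extract(epoch from ...) yields seconds; * 1000 matches a
-- column assumed to store epoch milliseconds
WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00') * 1000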

– Vignesh Kumar A
  • You can also use this for the current date with a simple modification like `select extract(epoch from now())` – Balbinator Apr 15 '20 at 13:06

You do not need to convert it to char to compare it.

WHERE to_timestamp(abc/1000) > timestamp '2014-01-28 00:00:00'

I don't think that conversion would be very inefficient, because timestamps are stored internally in a format similar to epoch seconds (admittedly with a different origin and resolution).

If you really want to go the other way:

WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')
– harmic

Interesting observation though: while

select count(*) from cb.logs where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00' and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';

takes almost 10 seconds (my DB has 1.5 million records), the query below takes only 1.5 seconds

select count(*) from cb.logs where (timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000));

and the one below about 1 second

select count(*) from cb.logs where (timestmp > extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and (timestmp < extract(epoch from timestamp '2014-01-15 23:59:59') * 1000);

to count ~40,000 records.

Most likely because of the division, I would say.

– javadude
  • The first query cannot use an index defined on the column `timestmp` whereas the other two can. Check the execution plan. –  Feb 01 '14 at 08:56
  • I'm with horse. There is no way any arithmetic operation could make that difference. Would be interesting to see 'explain analyse' for those cases. – harmic Feb 01 '14 at 09:27
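Following the comments' point about indexes: keeping the bare column on the left-hand side of the comparison is what lets PostgreSQL consider an index, so a plain index on the raw column is the simplest option — a hedged sketch (the index name is assumed, not from the thread):

-- A btree index on the raw bigint column. Predicates like
--   timestmp > extract(epoch from timestamp '...') * 1000
-- may use it, whereas to_timestamp(timestmp/1000) > ... on the
-- left-hand side forces a sequential scan.
CREATE INDEX idx_logs_timestmp ON cb.logs (timestmp);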

Query 1:

select count(*) from cb.logs where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00' and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';  
8600 ms

"Aggregate  (cost=225390.52..225390.53 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..225370.34 rows=8073 width=0)"
"        Filter: ((to_timestamp(((timestmp / 1000))::double precision) > '2014-01-15 00:00:00'::timestamp without time zone) AND (to_timestamp(((timestmp / 1000))::double precision) < '2014-01-15 23:59:59'::timestamp without time zone))"

Query 2:

select count(*) from cb.logs where (timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000));
1199 ms
"Aggregate  (cost=209245.94..209245.95 rows=1 width=0)"
"  InitPlan 1 (returns $0)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  InitPlan 2 (returns $1)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..209225.74 rows=8073 width=0)"
"        Filter: (((timestmp)::double precision > $0) AND ((timestmp)::double precision < $1))"
– javadude