
I have a table (PostgreSQL 9.6) like this -

CREATE TABLE m_trade ( 
    "alias" Character Varying( 32 ),
    "ts" Bigint NOT NULL,
    "side" Character( 1 ),
    "price" Double Precision,
    "qty" Bigint );

with 50,000,000 rows.

After creating the TimescaleDB extension -

CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

I create a hypertable from an empty table -

CREATE TABLE d_trade ( 
    "alias" Character Varying( 32 ),
    "ts" Bigint NOT NULL,
    "side" Character( 1 ),
    "price" Double Precision,
    "qty" Bigint );


SELECT create_hypertable('d_trade', 'ts', chunk_time_interval => 86400, number_partitions => 31);
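For a Bigint time column, chunk_time_interval is interpreted in the same units as the ts values - seconds here - so 86400 means one-day chunks. For reference, a coarser interval (a hypothetical alternative I have not benchmarked) would look like -

SELECT create_hypertable('d_trade', 'ts', chunk_time_interval => 604800); -- one-week chunks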

After this, the d_trade table gets an index "d_trade_ts_idx" and a trigger ts_insert_blocker BEFORE INSERT.

Then I copy the existing rows from m_trade -

INSERT INTO
    "public"."d_trade"( "alias", "price", "qty", "side", "ts" )
SELECT "alias", "price", "qty", "side", "ts" FROM m_trade;

Now if I insert a row into the m_trade table -

INSERT INTO
    "public"."m_trade"( "alias", "price", "qty", "side", "ts" )
VALUES
    ('TESTALS', 16000, 5, '2', 1545307519);

it takes 1-1.5 ms, but the same insert into d_trade takes 4-5 ms -

INSERT INTO
    "public"."d_trade"( "alias", "price", "qty", "side", "ts" )
VALUES
    ('TESTALS', 16000, 5, '2', 1545307519);

How can I optimize insert speed on d_trade table?


1 Answer


How are you measuring this insert performance? If you are using something like EXPLAIN ANALYZE, TimescaleDB adds planning overhead that can skew a single-statement comparison; try psql's \timing instead. Also, TimescaleDB is designed to outperform plain PostgreSQL at scale, so try batched inserts and see whether the gap persists.
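For example, in psql (a sketch reusing the table and sample row from the question - the extra rows are made up):

\timing on

INSERT INTO
    "public"."d_trade"( "alias", "price", "qty", "side", "ts" )
VALUES
    ('TESTALS', 16000, 5, '2', 1545307519),
    ('TESTALS', 16001, 7, '1', 1545307520),
    ('TESTALS', 16002, 3, '2', 1545307521);

A single multi-row INSERT amortizes the per-statement overhead (parsing, trigger dispatch, chunk routing) across many rows, which is where the hypertable is expected to pull ahead of the plain table.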