
I have a batch insert statement.

Suppose I have a query:

BEGIN BATCH
INSERT INTO abc (col1,col2,col3,col4) VALUES (1,'xyz',99,632);
INSERT INTO abc (col1,col2,col3,col4) VALUES (1,'xyz',79,632);
APPLY BATCH;

Only the values from the first statement end up in the table; the values in the second statement are never saved.

NOTE: col1 is the clustering key and col4 is the partition key.

How can we make sure the last INSERT statement gets saved to the database?
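For reference, here is a minimal sketch of the table definition implied by the note above (the table and column names come from the statements; the data types are assumptions):

CREATE TABLE abc (
    col1 int,
    col2 text,
    col3 int,
    col4 int,
    PRIMARY KEY (col4, col1)  -- col4 = partition key, col1 = clustering key
);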

anonymous

1 Answer


I answered a similar question here: Cassandra batch statement - Execution order

Basically, you can't enforce any order of operations in a batch statement (unless you force a write-timestamp). And if col1 and col4 are really your keys, then you should only expect one row to be written anyway (as they have the same key values, and primary keys in Cassandra are unique). So it looks like your first statement must be "winning" the "last write wins" race every time.
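For example, one way to force the second INSERT to win is to give it an explicit, higher write-timestamp on each statement (a sketch; the timestamp values here are arbitrary microsecond values you would supply yourself):

BEGIN BATCH
INSERT INTO abc (col1,col2,col3,col4) VALUES (1,'xyz',99,632) USING TIMESTAMP 1638316800000000;
INSERT INTO abc (col1,col2,col3,col4) VALUES (1,'xyz',79,632) USING TIMESTAMP 1638316800000001;
APPLY BATCH;

With per-statement timestamps, the row with the higher timestamp (79 in col3) wins the reconciliation, regardless of the order the statements are applied in.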

Additionally, BATCH statements are designed for atomically writing the same data to multiple tables. They're not really designed to support writing different data to the same table. And in this case, you're writing new col2 and col3 values to the same PRIMARY KEY, so I'm not sure what you're trying to accomplish here.

I.e., if 79 is really the value you want in col3 where col4=632 and col1=1, then why write 99 to col3 at all?
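For contrast, here is a sketch of the kind of use case BATCH is intended for: atomically writing the same data to two query tables (the second table, abc_by_col2, is a hypothetical denormalized copy of abc keyed by col2):

BEGIN BATCH
INSERT INTO abc (col1,col2,col3,col4) VALUES (1,'xyz',79,632);
INSERT INTO abc_by_col2 (col1,col2,col3,col4) VALUES (1,'xyz',79,632);
APPLY BATCH;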

Aaron