
I have a problem that I can't understand. I have 3 nodes (RF: 3) in my cluster, and the node hardware is pretty good. There are currently about 60-70 million rows and 3,000 columns of data in the cluster. I want to query a specific subset of roughly 265,000 rows and 4 columns, using the default fetch size. I can retrieve about 5,000 rows per second up to around 55,000 rows; after that, my retrieval speed drops.
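For context, here is a minimal sketch of how fetch size and paging are typically controlled on the client side with the DataStax Python driver. The question does not say which driver is in use, and the contact points, keyspace, table, column, and key names below are placeholders:

```python
# Minimal sketch using the DataStax Python driver (cassandra-driver).
# All hosts, keyspace, table, and column names are placeholders.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
session = cluster.connect("my_keyspace")

# Raise the page size from the driver default (5000 rows) and let the driver
# page through the result set instead of materializing it all at once.
query = SimpleStatement(
    "SELECT col_a, col_b, col_c, col_d FROM my_table WHERE partition_key = %s",
    fetch_size=10000,
)

rows_seen = 0
for row in session.execute(query, ["some_partition"]):
    rows_seen += 1  # process each row as it arrives, page by page

print("retrieved", rows_seen, "rows")
cluster.shutdown()
```

Iterating the result set like this keeps memory flat and lets you measure whether the slowdown sets in at a particular page boundary.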

I think this can be solved from the cassandra.yaml file. Do you have any idea what I should check?

  • You'll need to provide a bit more info. A friendly note on how to ask good questions: the general guidance is that you (a) provide a good summary of the problem that includes software/component versions and the full error message + full stack trace; (b) describe what you've tried to fix the problem and the details of any investigation you've done; and (c) include minimal sample code that replicates the problem. Cheers! – Erick Ramirez Aug 05 '22 at 23:36
  • "I think this situation will be solved from the cassandra.yaml file" I doubt it. Usually this happens because partitions are too big or the query is trying to pull too much data back at once. Have a look at this table using `nodetool tablehistograms`. – Aaron Aug 06 '22 at 17:24
  • Firstly, thanks for your comment. I checked that and there was no latency issue, but the partition size is too big; for example, the max value is 53,142,810,146 bytes. Is that a problem? –  Aug 08 '22 at 06:47
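For reference, a rough sketch of the `nodetool tablehistograms` check suggested in the comments above. The keyspace and table names are placeholders; the histogram output shows percentiles for partition size and cell count, which indicate how skewed the partitions are:

```sh
# Replace my_keyspace / my_table with the actual keyspace and table names.
nodetool tablehistograms my_keyspace my_table

# Per-table statistics (including max partition size) can also be checked with:
nodetool tablestats my_keyspace.my_table
```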

0 Answers