I have a Cassandra cluster that consists of a single node (I only have one server and I'm doing a comparison). I have a time-series table that is 43 GB in size, and every query I run against it is very slow. My question is: why would 43 GB be too much for one node in a single-node cluster, when 43 GB per node in a cluster with more nodes would be fine?
Does Cassandra use the RAM and CPU of every server in the cluster, even when a query only needs one node? That's my guess, but I'm not sure...
I hope somebody is able to help here.
Thank you!
Edit: My table:
CREATE TABLE table (
    num int,
    part_key int,
    val1 int, val2 float, val3 text, ...,
    PRIMARY KEY ((part_key), num)
);
num is the record number. Each record has 300-400 values, and there are about 10,000,000 records. Right now the database is roughly 60 GB (the 43 GB was from yesterday), and even INSERT queries time out. If I set the timeout higher, the Cassandra server service crashes.
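For illustration, the INSERTs that time out look roughly like this (table and column names are placeholders matching the schema above; the real statement lists all 300-400 val columns):

-- placeholder values; the real INSERT supplies every valN column
INSERT INTO table (part_key, num, val1, val2, val3)
VALUES (1, 42, 7, 3.14, 'some text');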