OpenTSDB takes about 6 seconds to return the JSON for 9,000 datapoints. I have approximately 10 GB of data across 12 metrics, e.g. x.open, x.close, ...
Data storage pattern:
Metric: x.open
tagk: symbol
tagv: stringValue
Metric: x.close
tagk: symbol
tagv: stringValue
My cluster setup is as follows:
Node 1 (real, 16 GB, active NN): JournalNode, NameNode, Zookeeper, RegionServer, HMaster, DFSZKFailoverController, TSD
Node 2 (VM, 8 GB, standby NN): JournalNode, NameNode, Zookeeper, RegionServer
Node 3 (real, 16 GB): DataNode, RegionServer, TSD
Node 4 (VM, 4 GB): JournalNode, DataNode, Zookeeper, RegionServer
The setup is for POC/dev, not for production.
Timestamp range: one datapoint per day for each symbol under each metric, from 1980 to today.
In other words, each of my 12 metrics gets 3,095 datapoints added every day in a continuous run, one per symbol.
Tag-value cardinality in the current scenario: 3,095+ symbols.
Query Sample: http://myIPADDRESS:4242/api/query?start=1980/01/01&end=2016/02/18&m=sum:stock.Open{symbol=IBM}&arrays=true
Debugger result: 8.44 s; 8,859 datapoints retrieved; data size: 55 KB.
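As a sanity check on the 8,859 figure: the query range spans far more calendar days than that, so I suspect (this is my assumption, not something I've verified) that the data only covers trading days. A quick calculation:

```python
from datetime import date

# Calendar days in the query range (1980/01/01 .. 2016/02/18).
calendar_days = (date(2016, 2, 18) - date(1980, 1, 1)).days  # 13197

# Datapoints actually retrieved, per the debugger result above.
retrieved = 8859

# Rough trading-day estimate (~252 sessions/year) -- an assumption
# that would explain why fewer points than calendar days come back.
trading_days_estimate = 252 * (2016 - 1980)  # 9072
```

The trading-day estimate (9,072) is much closer to the retrieved count than the calendar-day count (13,197) is.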
Write speed is also slow: it takes 6.5 hours to write 2.2 million datapoints, which works out to roughly 94 datapoints per second. Am I wrong somewhere with my configuration, or am I expecting too much?
Write method: JSON objects via HTTP.
Salting enabled: not yet.
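Since writes go through HTTP, one lever I'm considering (an assumption about what might help, not something I've tried yet) is batching many datapoints into a single POST to the TSD's /api/put endpoint rather than sending one request per point. A minimal sketch in Python; the host name, symbol names, and values are placeholders:

```python
import json

def make_datapoint(metric, timestamp, value, symbol):
    """Build one OpenTSDB datapoint in /api/put JSON form."""
    return {"metric": metric, "timestamp": timestamp,
            "value": value, "tags": {"symbol": symbol}}

def batches(points, size=50):
    """Split datapoints into batches so each HTTP POST carries many points."""
    for i in range(0, len(points), size):
        yield points[i:i + size]

def post_batch(host, batch, timeout=10):
    """POST one JSON array of datapoints to the TSD's /api/put endpoint."""
    import urllib.request
    req = urllib.request.Request(
        "http://%s:4242/api/put" % host,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req, timeout=timeout)

# Example: one day's close prices for a few symbols (hypothetical values).
points = [make_datapoint("stock.Close", 1455753600, 100.0 + i, "SYM%d" % i)
          for i in range(120)]
payload_sizes = [len(b) for b in batches(points, size=50)]
# 120 points split into batches of 50, 50, and 20
```

OpenTSDB's /api/put accepts either a single datapoint object or a JSON array, so the batched POST body is just the serialized list.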