
I have indexed roughly 100 million documents into Apache Solr 6.1.0. After that, when I open the Banana dashboard, the pie charts and bar charts are blank, and the following error appears in the browser console:

Uncaught Error: Load timeout for modules: jquery.flot.pie,jquery.flot.selection,jquery.flot.time,jquery.flot.stack,jquery.flot.stackpercent,panels/histogram/interval

For a small amount of data the dashboard loads successfully, but for larger amounts it gives this error.
All 100 million documents are indexed into one single Solr core. After a lot of searching, the suggestion I found for indexing this much data is sharding. With sharding the data gets divided into different cores, but even after sharding I could not find a way to show the data from the different shard cores on a single Banana dashboard.
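For what it's worth, sharding and a single dashboard do not have to conflict: when Solr runs in SolrCloud mode, a query sent to the collection name is automatically distributed across all of its shards, so Banana only needs to point at the collection rather than at the individual shard cores. The following is a minimal sketch, not a tested setup; the collection name (logs), the shard count, and the bundled data_driven_schema_configs config set are placeholders, not values from my environment.

    # Hedged sketch: run Solr 6.x in SolrCloud mode and create a sharded collection.
    # "logs", 4 shards, and the config set are example values only.
    bin/solr start -c -m 8g                      # -c = cloud mode, -m = JVM heap
    bin/solr create -c logs -shards 4 -replicationFactor 1 -d data_driven_schema_configs

    # A query against the collection fans out to every shard, so a dashboard can
    # keep using a single endpoint:
    curl "http://localhost:8983/solr/logs/select?q=*:*&rows=0"

If the cores stay in standalone mode instead, Solr's distributed search can still merge results across them via the shards request parameter (shards=host1:8983/solr/core1,host2:8983/solr/core2), although whether Banana forwards that parameter depends on how the dashboard and its panels are configured.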

So how can I index such a large amount of data into Apache Solr and then have that same data shown on a Banana dashboard? I will have to index more than 100 million documents eventually, but before that this should at least work for 100 million.
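On the indexing side, one knob that commonly matters when loading this many documents is the commit policy in solrconfig.xml: hard commits that do not open a searcher keep the transaction log bounded, while infrequent soft commits avoid constantly reopening searchers during the bulk load. This is a hedged sketch only; the intervals are illustrative, not values taken from this setup.

    <!-- Hedged sketch of the <updateHandler> section of solrconfig.xml for bulk loads.
         The 60 s / 5 min intervals are examples; tune them against the Solr docs. -->
    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxTime>60000</maxTime>          <!-- hard commit every 60 s, flush to disk -->
        <openSearcher>false</openSearcher>
      </autoCommit>
      <autoSoftCommit>
        <maxTime>300000</maxTime>         <!-- make new docs visible every 5 min -->
      </autoSoftCommit>
    </updateHandler>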

Please help me out with this problem. Thanks in advance.

  • Pretty sure the easiest solution would be to throw more hardware at the problem. – baudsp Oct 05 '16 at 14:08
  • @baudsp: Thanks for the quick reply, but is there any setting we can change in the code or configuration to support the above scenario without changing the hardware? – Nikhil Malpure Oct 06 '16 at 13:14
  • @baudsp: Sir, we have already given the machine 64 GB of RAM and the Solr service an 8 GB JVM heap. Do we need to give that particular machine more hardware? And would increasing hardware every time be a feasible solution? – Nikhil Malpure Mar 20 '17 at 11:39
  • Honestly, I don't know. I don't have enough experience with solr to help you more than my first comment (most of my experience is with Elasticsearch, another Lucene-based system). – baudsp Mar 21 '17 at 11:36
  • From what I experienced with Elasticsearch, solutions could be: you could try to increase the max heap size (you'll have to look in the Solr docs for the preferred values) or do other configuration tuning. It might also be a data management problem: you may need to choose which data to index and which to discard (if the data is time-based, you'll have to define a retention period). Lucene can also be CPU- or disk-bound, which means that RAM is not the only limiting factor. There may also be an option to increase the timeout, which is worth trying (see the sketches below). – baudsp Mar 21 '17 at 11:39
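Two of those suggestions can be sketched concretely. For the heap size, Solr 6.x takes its JVM memory settings from bin/solr.in.sh (solr.in.cmd on Windows); this is a hedged sketch, and the 16g figure is only an example, since the preferred value depends on the machine and should be checked against the Solr documentation, leaving enough RAM free for the OS page cache.

    # Hedged sketch of bin/solr.in.sh -- 16g is an example value, not a recommendation.
    SOLR_HEAP="16g"
    # or equivalently, set the JVM flags directly:
    # SOLR_JAVA_MEM="-Xms16g -Xmx16g"

For the timeout, the "Load timeout for modules" message is RequireJS's standard error, and RequireJS exposes a waitSeconds option (default 7 seconds; 0 disables the timeout). Where exactly the require.config call lives in the Banana build is an assumption here; waitSeconds itself is a documented RequireJS option.

    // Hedged sketch: raise or disable the RequireJS module-load timeout in the
    // require.config call of the Banana front end (exact file location assumed).
    require.config({
      waitSeconds: 0   // default is 7; 0 disables the load timeout
    });

Raising the timeout only gives the loader more time before giving up; if the queries over 100 million documents are what make the page slow, the heap, sharding, and data-retention suggestions above are more likely to address the root cause.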
