
I have a two-data-node MySQL Cluster on machines with 16 GB of memory and a 10 GB DataMemory setting. I am trying to add two new data nodes on Solaris 5.10 64-bit machines, also with 16 GB of RAM, but I cannot start the new data nodes with more than 4 GB of heap.
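
For context, the relevant part of a MySQL Cluster `config.ini` for this kind of setup would look roughly like the following. This is a sketch only; the hostnames, the management node, and the `NoOfReplicas` value are assumptions, not details from the actual cluster.

```ini
[ndbd default]
NoOfReplicas=2        # assumption: two replicas across the data nodes
DataMemory=10G        # the 10 GB DataMemory mentioned above

[ndb_mgmd]
HostName=mgmt1        # hypothetical management node

[ndbd]
HostName=datanode1    # existing data node (hostname hypothetical)
[ndbd]
HostName=datanode2    # existing data node (hostname hypothetical)

[ndbd]
HostName=datanode3    # new Solaris data node (hostname hypothetical)
[ndbd]
HostName=datanode4    # new Solaris data node (hostname hypothetical)
```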

I have checked the maximum shared memory, maximum address space, ulimit settings, the OS architecture, and the ndbd binary's architecture, and they all appear sufficient for the node to run.
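
Concretely, those checks amounted to something like this on Solaris 10 (a sketch; the ndbd path and the project name are assumptions, not taken from the actual machines):

```sh
# Architecture: kernel and binary should both be 64-bit
isainfo -b                       # prints 64 on a 64-bit kernel
file /usr/local/mysql/bin/ndbd   # path is an assumption; should report a 64-bit executable

# Shell-level limits
ulimit -a

# Solaris resource controls for this shell's process and project
prctl -n process.max-address-space $$
prctl -n project.max-shm-memory -i project default   # 'default' project is an assumption
```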

Does Solaris have any other parameters that limit the heap space available to a single process?

fsniper
  • What error message do you get? Are you using ZFS? – jlliagre Dec 03 '13 at 23:11
  • This post may help you: http://serverfault.com/questions/156063/what-resource-limit-is-java-encountering-on-my-solaris-server. Maybe your problem is that you don't have enough virtual memory. – c4f4t0r Dec 04 '13 at 00:30
  • I don't get any error message, just a 4 GB core dump. @jlliagre: I am not sure about the file system; I can't reach the OS right now, but I will check ASAP. – fsniper Dec 04 '13 at 11:51
  • @c4f4t0r: I have 4 GB of swap, and `swap -s` reports 20 GB of virtual memory free. There are two existing nodes with the same memory layout, and ndbd runs fine on them. – fsniper Dec 04 '13 at 11:52
  • Silly question: is your MySQL software 32-bit or 64-bit? – c4f4t0r Dec 04 '13 at 13:51
  • Please add to your question the output of `file core`, `swap -s`, `swap -l` and `echo ::memstat | mdb -k` (the latter as root); a sketch of these commands follows the comments below. – jlliagre Dec 04 '13 at 14:19
  • @c4f4t0r: Actually, it is not a silly question, but I have double-checked ndbd with `file` and it is definitely 64-bit. I am a bit confused, though, because another operator tried Java with a 10 GB heap (-Xms) and had no problems. – fsniper Dec 04 '13 at 14:34
  • Sorry everyone; it turns out the customer allocated us a zone with 16 GB of memory, but the memory is shared with other zones (I have no idea how or why it was configured that way). So whenever the other zones use memory, we cannot allocate enough to run ndbd. – fsniper Dec 05 '13 at 09:46
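
Putting the diagnostics together: the commands jlliagre asked for, plus the zone-level checks that would have revealed the shared memory cap, look roughly like this. This is a sketch for Solaris 10; `mdb -k` needs root and may not work inside a non-global zone, and `rcapstat -z` and `zonecfg` generally need to be run from the global zone. The zone name `myzone` is a placeholder.

```sh
# Basic diagnostics requested in the comments
file core                 # confirms the dumping binary and its word size
swap -s                   # summary of virtual memory usage
swap -l                   # physical swap devices and free space
echo ::memstat | mdb -k   # kernel memory breakdown (as root)

# Zone-level checks (assuming a Solaris 10 zone)
zonename                                     # are we in a non-global zone?
prctl -n zone.max-swap -i zone $(zonename)   # swap cap for this zone, if set
rcapstat -z 5 5                              # physical memory caps per zone (global zone only)
zonecfg -z myzone info capped-memory         # memory cap in the zone's config (global zone only)
```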

0 Answers