
We operate a server for our customer with a single mongod instance, Gradle, Postgres and nginx running on it. The problem is that we have massive performance problems while mongodump is running: the MongoDB queue grows and no data can be queried. The next problem is that the customer does not want to invest in a replica set or a software update (mongod 3.x).

Does anybody have an idea how I could improve the performance?

Commands to create the dump:

mongodump -u ${MONGO_USER} -p ${MONGO_PASSWORD} -o ${MONGO_DUMP_DIR} -d ${MONGO_DATABASE} --authenticationDatabase ${MONGO_DATABASE} > /backup/logs/mongobackup.log

tar cjf ${ZIPPED_FILENAME} ${MONGO_DUMP_DIR}

System: 6 cores, 36 GB RAM, 1 TB SATA HDD + 2 TB backup NAS

MongoDB 2.6.7

Thanks

Best regards, Markus

  • you could use cgroups as described here: https://stackoverflow.com/questions/28168134/how-to-limit-cpu-and-ram-resources-for-mongodump – LandoR Jun 28 '18 at 13:49
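
Following the cgroups suggestion in the comment above, a minimal sketch of how that could look with the cgroup-tools/libcgroup utilities (assuming cgroups v1; the group name "mongodump" and the CPU/memory limits are placeholders to tune):

sudo cgcreate -g cpu,memory:mongodump
sudo cgset -r cpu.cfs_quota_us=100000 mongodump            # roughly one core (quota equals the default 100 ms period)
sudo cgset -r memory.limit_in_bytes=4294967296 mongodump   # 4 GiB cap for the dump process
sudo cgexec -g cpu,memory:mongodump mongodump -u ${MONGO_USER} -p ${MONGO_PASSWORD} -o ${MONGO_DUMP_DIR} -d ${MONGO_DATABASE} --authenticationDatabase ${MONGO_DATABASE}

This only throttles the resources mongodump itself consumes; it does not remove the lock contention inside mongod, so it softens the impact rather than eliminating it.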

2 Answers


As you have a heavy load, adding a replica set is a good solution, because the backup could then be taken on a secondary node. Be aware, though, that a replica set needs at least three members (you can run primary/secondary/arbiter, where the arbiter needs only a small amount of resources).

mongodump runs regular queries against the dumped database and takes read locks while doing so, which will have a noticeable impact if there are a lot of writes on that database.

Hint: try to make the backup when there is light load on the system.
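
As a concrete example of that hint, the existing dump/tar script could be scheduled via cron into a low-traffic window (the script path here is a placeholder):

# hypothetical crontab entry: run the backup at 03:30, when write load is usually lowest
30 3 * * * /backup/scripts/mongo_backup.sh >> /backup/logs/mongobackup.log 2>&1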

profesor79
  • Thank you for your hints! I know the replica-set solution, but our customer doesn't like it (increased costs ...). I'm really happy about any additional suggestions. – markus Jun 29 '16 at 08:42

Try volume snapshots. Check with your cloud provider which snapshot options are available. It is super fast and cheaper if you compare it with the actual cost of a mongodump backup (RAM and CPU used, plus disk transaction cost, even if that is small).
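
Since the machine in the question is a single physical server, the same idea works on-premise with LVM snapshots. A minimal sketch, assuming the dbPath sits on an LVM logical volume /dev/vg0/mongo (volume names, snapshot size and mount point are placeholders, and ${ADMIN_USER}/${ADMIN_PASSWORD} stand for an account allowed to run fsyncLock/fsyncUnlock):

mongo admin -u ${ADMIN_USER} -p ${ADMIN_PASSWORD} --eval "db.fsyncLock()"     # flush pending writes and block new ones
lvcreate --snapshot --size 10G --name mongo-snap /dev/vg0/mongo
mongo admin -u ${ADMIN_USER} -p ${ADMIN_PASSWORD} --eval "db.fsyncUnlock()"   # unlock immediately; the snapshot stays frozen
mkdir -p /mnt/mongo-snap && mount -o ro /dev/vg0/mongo-snap /mnt/mongo-snap
tar cjf ${ZIPPED_FILENAME} -C /mnt/mongo-snap .                               # archive the consistent on-disk copy
umount /mnt/mongo-snap && lvremove -f /dev/vg0/mongo-snap

This way the database is only locked for the second or two it takes to create the snapshot, instead of for the whole duration of the dump.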

Sagar Kamble