Our production Cloudera Manager (4.7) node went awry, so we installed a fresh OS on that node. We are trying to recover Cloudera Manager from backups we have of the (embedded) PostgreSQL DB, hoping that, with the restored DB, CM can manage the existing cluster with its existing configurations.
We are doing a few POCs in which we try to port Cloudera Manager to a new server with the steps outlined below. (Eventually we will reinstall CM on the original node.)
- install cloudera-manager-daemons cloudera-manager-server
- install cloudera-manager-server-db
- sudo service cloudera-scm-server-db start => this creates the basic roles, regenerates passwords, etc.
- so from our pg_dumpall output foo.sql we removed the initial statements that create the roles, passwords, and the database, then ran
  psql -U cloudera-scm -h localhost -p 7432 -f foo.sql postgres
  This completed successfully. (A rough sketch of these commands is given after the list.)
- On each node in the cluster, change /etc/cloudera-scm-agent/config.ini to point to the new node.
- sudo service cloudera-scm-server start => we were expecting CM to pick up the configs and just load up. However, it takes us to the installer page.
- Install the free edition. We either search for IPs or see the hosts already available.
- Next it updates the CDH packages on each node in the cluster and asks us to install services.
- After this the process is a little unclear. However, we did manage to assign roles to the appropriate nodes; for HDFS, for example, we used the same root dir, it was not reformatted, and everything seems OK. However, all our configuration is missing, which suggests that CM did not read off the restored DB.
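For reference, here is a rough sketch of the restore and agent re-pointing steps above (new-cm-host.example.com stands in for the new CM node, and in practice we removed the role/password/database statements from the dump by hand rather than with grep):

    # strip the role/password/database creation statements that
    # cloudera-scm-server-db has already recreated
    grep -vE '^(CREATE ROLE|ALTER ROLE|CREATE DATABASE)' foo.sql > foo-trimmed.sql

    # restore the remainder into the freshly initialised embedded DB
    # (enter the regenerated cloudera-scm password when prompted)
    psql -U cloudera-scm -h localhost -p 7432 -f foo-trimmed.sql postgres

    # on every cluster node, point the agent at the new CM host and restart it
    sudo sed -i 's/^server_host=.*/server_host=new-cm-host.example.com/' /etc/cloudera-scm-agent/config.ini
    sudo service cloudera-scm-agent restart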
The above steps do not seem to be the right way of restoring the state of Cloudera Manager. This reference possibly describes a seamless way to do this, but even after following the steps mentioned in the link we still cannot get CM to read off the restored DB. Can someone point us to the right steps, please? Any help is appreciated.