
I configured Accumulo 1.7.0 with Hadoop 2.6.0 (HDFS) and ZooKeeper 3.4.6. Everything works well, but I want to know how to restore an instance.

Thanks!

UPDATE

The problem is that I want to recover the instance after restarting the PC or stopping all processes. Here is the log for better understanding:

hduser@master:/opt/accumulo-1.7.0-bin/bin$ ./start-all.sh 
Starting monitor on localhost
WARN : Max open files on localhost is 1024, recommend 32768
Starting tablet servers .... done
Starting tablet server on localhost
WARN : Max open files on localhost is 1024, recommend 32768
2016-02-23 11:46:46,089 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
2016-02-23 11:46:46,092 [server.Accumulo] INFO : Attempting to talk to zookeeper
2016-02-23 11:46:46,242 [server.Accumulo] INFO : Waiting for accumulo to be initialized
2016-02-23 11:46:47,243 [server.Accumulo] INFO : Waiting for accumulo to be initialized
2016-02-23 11:46:48,246 [server.Accumulo] INFO : Waiting for accumulo to be initialized

and

hduser@master:/opt/accumulo-1.7.0-bin/bin$ ./accumulo init
2016-02-22 16:10:46,410 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
2016-02-22 16:10:46,411 [init.Initialize] INFO : Hadoop Filesystem is hdfs://master:9000
2016-02-22 16:10:46,412 [init.Initialize] INFO : Accumulo data dirs are [hdfs://master:9000/accumulo]
2016-02-22 16:10:46,412 [init.Initialize] INFO : Zookeeper server is localhost:2181
2016-02-22 16:10:46,412 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
2016-02-22 16:10:46,606 [init.Initialize] ERROR: FATAL It appears the directories [hdfs://master:9000/accumulo] were previously initialized.
2016-02-22 16:10:46,606 [init.Initialize] ERROR: FATAL: Change the property instance.volumes to use different filesystems,
2016-02-22 16:10:46,606 [init.Initialize] ERROR: FATAL: or change the property instance.dfs.dir to use a different directory.
2016-02-22 16:10:46,606 [init.Initialize] ERROR: FATAL: The current value of instance.dfs.uri is ||
2016-02-22 16:10:46,606 [init.Initialize] ERROR: FATAL: The current value of instance.dfs.dir is |/accumulo|
2016-02-22 16:10:46,606 [init.Initialize] ERROR: FATAL: The current value of instance.volumes is |hdfs://master:9000/accumulo|
WilD

2 Answers


It should be as simple as going into the bin directory and running the appropriately named scripts.

cd accumulo-1.7.0/bin
./stop-all.sh

Then to start again:

./start-all.sh
Mike S
  • A word of caution, if you're deleting the Accumulo HDFS directory, you'll need to re-run `accumulo init` before re-starting Accumulo. – elserj Feb 17 '16 at 21:09
  • @elserj you're right, but my question is that when I restart my PC I need to restart the instance, because when I run `accumulo init` it reports an error that "instance.volumes" already exists – WilD Feb 17 '16 at 22:46
  • @WilD You don't necessarily need to re-init and delete the hdfs directories just restarting your PC. You'd only want to do that if you don't want to continue in the state it was left in when you shut down your PC. – Christopher Feb 18 '16 at 00:21
  • @Christopher, so how do I restore the state after restarting the PC? @elserj, the problem is that it throws an error when I run `./start-all.sh`. – WilD Feb 18 '16 at 03:51
  • 1
    It's a file system, HDFS should persist after you restart your computer. All you need to do is start Accumulo and not remove anything from HDFS. Make sure you didn't configure HDFS to use /tmp – elserj Feb 18 '16 at 16:44
  • @elserj, thanks for the answer. I changed the "hadoop.tmp.dir" path, but the problem persists. Sorry, my question was badly worded: I meant that I had to delete the HDFS directories for this to work. – WilD Feb 22 '16 at 21:31
  • I'm confused why you want to delete the HDFS directory. You should only run `accumulo init` once. Per this answer, you only need to run the start-all.sh script. – elserj Feb 22 '16 at 21:41
  • @elserj, I don't want to delete the HDFS directory; I said that I had to delete it for it to work, because as my log shows, when I run `accumulo init` it prints `FATAL: or change the property instance.dfs.dir to use a different directory`. When I run `start-all.sh` it prints `INFO : Waiting for accumulo to be initialized`. – WilD Feb 23 '16 at 17:00
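
As the comments above suggest, a common cause of "losing" the instance after a reboot is that Hadoop's working data lives under /tmp, which many systems clear on restart. A minimal sketch of a persistent setting in core-site.xml (the path /app/hadoop/tmp is only an example; pick any durable location owned by the Hadoop user):

```
<!-- core-site.xml: keep Hadoop's working data out of /tmp so it
     survives a reboot. /app/hadoop/tmp is an example path. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>
```

After changing this you would need to restart HDFS (and re-run `hdfs namenode -format` only if starting from scratch, since formatting destroys existing data).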

Thanks for the help. The problem was that I needed to set the directory where ZooKeeper stores its snapshots, because by default it is stored under "/tmp". I modified the zoo.cfg file and set a new directory for "dataDir".
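
For reference, a minimal sketch of the fix in conf/zoo.cfg (the directory /var/lib/zookeeper is an example; any durable path writable by the ZooKeeper user works):

```
# conf/zoo.cfg — ZooKeeper 3.4.x configuration.
# The stock sample uses dataDir=/tmp/zookeeper, which is cleared
# on reboot and takes the Accumulo instance metadata with it.
tickTime=2000
clientPort=2181
dataDir=/var/lib/zookeeper
```

With dataDir on persistent storage, the snapshots and transaction logs that hold the Accumulo instance information survive a restart, so `start-all.sh` finds the existing instance instead of hanging on "Waiting for accumulo to be initialized".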

WilD