
I have a cluster of 4 nodes. Each of them has dfs.datanode.du.reserved set to 5000000000 (the value is in bytes, so roughly 5 GB).

I was under the impression that dfs.datanode.du.reserved was per-volume, but it seems it's not. Now the volumes on all of the nodes are almost 100% full and I'm quite stuck.
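
For reference, this is how I have the property in hdfs-site.xml on every node (a minimal excerpt; the value is interpreted as bytes):

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- bytes reserved per node for non-DFS use -->
  <value>5000000000</value>
</property>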

How can I tell Hadoop to keep some free space on each of the volumes? I am currently unable to even start the cluster anymore because some volumes are out of space. From the namenode log on the master:

2015-01-12 11:29:04,374 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hduser/hadoop-2.4.1/hdfs/namenode/in_use.lock acquired by nodename 12718@liikennedata.novalocal
2015-01-12 11:29:04,540 ERROR org.apache.hadoop.hdfs.server.common.Storage: Failed to acquire lock on /home/hduser/hadoop-2.4.1/hdfs/d-add3/name/in_use.lock. If this storage directory is mounted via NFS, ensure that the appropriate nfs lock services are running.
java.io.IOException: No space left on device

Others have asked about this as well, but I haven't found a solution (ref: https://issues.apache.org/jira/browse/HDFS-1564).

I'm using Hadoop 2.4.1 on Ubuntu. I'm appending data to a file in HDFS at intervals; no jobs are being run currently.

Here's the output of df on the master node; the other nodes look similar.

Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/vda1      123853388 117620668         0 100% /
udev             4078456         8   4078448   1% /dev
tmpfs             817608       280    817328   1% /run
none                5120         0      5120   0% /run/lock
none             4088024         0   4088024   0% /run/shm
/dev/vdb       103081248  97821976         8 100% /home/hduser/hadoop-2.4.1/hdfs/d-add3
/dev/vdc       103081248  82327812  15494172  85% /home/hduser/hadoop-2.4.1/hdfs/d-add4
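
For completeness, HDFS's own view of capacity per datanode, and the value it actually picked up for the property, can be checked like this:

# per-datanode Configured Capacity, DFS Used, Non DFS Used and DFS Remaining
hdfs dfsadmin -report

# effective value of the reservation as seen by Hadoop on this node
hdfs getconf -confKey dfs.datanode.du.reserved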

Thanks

UPDATE: This question was flagged as a possible duplicate of "disk space is full by `vda` files, how to clear them?". That question has nothing to do with Hadoop, and its answers do not address how to tell Hadoop to keep free space on the volumes.

  • Have you found a solution in the meantime? How do you start the node? – Marius Soutier Feb 10 '15 at 11:15
  • Hi. No solution so far. Starting the nodes from the master with `/sbin/start-dfs.sh`. Also `start-yarn.sh` and `mr-jobhistory-daemon.sh start historyserver` (although I guess I wouldn't need those currently since I'm not running any jobs). – Lauri Peltonen Feb 11 '15 at 06:45
  • Actually, now that I think of it, I did find a temporary workaround: I simply move data manually from one volume to another (roughly the procedure sketched after these comments). Of course this is not a very good solution. – Lauri Peltonen Feb 11 '15 at 14:12
  • Possible duplicate of [disk space is full by \`vda\` files, how to clear them?](https://stackoverflow.com/questions/25713773/disk-space-is-full-by-vda-files-how-to-clear-them) – T.Todua Jun 26 '19 at 09:55
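
A rough sketch of that manual workaround, assuming dfs.datanode.data.dir points at the d-add3/d-add4 mounts shown above. The BP-XXXX block pool directory name is a placeholder (it differs per cluster), and the subdir chosen must not already exist at the destination:

# stop the datanode on this machine first so no block files are in use
sbin/hadoop-daemon.sh stop datanode

# move one finalized subdir tree from the full volume to the emptier one,
# keeping the same relative path under the block pool
mv /home/hduser/hadoop-2.4.1/hdfs/d-add3/current/BP-XXXX/current/finalized/subdir0 \
   /home/hduser/hadoop-2.4.1/hdfs/d-add4/current/BP-XXXX/current/finalized/subdir0

# on restart the datanode rescans its volumes and reports the moved blocks
sbin/hadoop-daemon.sh start datanode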

0 Answers