
I am new to Hadoop. I configured a cluster following the instructions.

After configuring, I started the HDFS daemons with /bin/start-dfs.sh.

I checked the log file /home/deploy/hadoop/libexec/../logs/hadoop-deploy-datanode-slave1.out to make sure the daemon is running, but I see only the text below:

ulimit -a for user deploy
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63524
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

I hope someone can help.

Minh Ha Pham

1 Answer


@Minh, you are looking at /home/deploy/hadoop/libexec/../logs/hadoop-deploy-datanode-slave1.out; instead, look at /home/deploy/hadoop/libexec/../logs/hadoop-deploy-datanode-slave1.log. Similarly, there are other log files in the /home/deploy/hadoop/libexec/../logs/ folder.

Let me explain more about .log and .out files.

Some of the files in the logs folder end with .log, and others end with .out. The .out files are only written to while the daemons are starting up; once a daemon has started successfully, its .out file is truncated. By contrast, all log messages can be found in the .log files, including the daemon start-up messages that are also sent to the .out files.
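To make this concrete, here is a minimal sketch for inspecting the .log file rather than the .out file. The logs path is the one from the question; the LOG_DIR variable name is my own, so point it at your actual Hadoop logs directory:

```shell
# Directory from the question; LOG_DIR is an assumed name -- set it to your own install.
LOG_DIR="${LOG_DIR:-/home/deploy/hadoop/libexec/../logs}"

# .out files hold only startup console output; .log files hold the full log output.
ls -l "$LOG_DIR"/*.log 2>/dev/null

# Scan the DataNode's .log for problems (no output means nothing matched).
grep -iE 'error|exception|fatal' \
    "$LOG_DIR"/hadoop-deploy-datanode-slave1.log 2>/dev/null || true
```

If the DataNode failed to start, the .log file usually contains a stack trace near the end that explains why.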

I Bajwa PHD