
I have a 35-node cluster with a high block count: ≈450K blocks per DataNode.

After a configuration change (which included rack reassignments and a NameNode Xmx increase), HDFS became a problem. It is unable to complete copy operations on random blocks: when I try to copy a file to a different directory, it often creates the _COPYING_ intermediate file and gets stuck. If I retry the same file, it usually succeeds.

If it eventually manages to copy the stuck file, it prints a warning to the console:

WARN hdfs.DFSClient: Slow waitForAckedSeqno took 229398ms (threshold=30000ms)
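For context, the 30000 ms threshold in that warning is, to my knowledge, the client-side slow-I/O warning threshold, controlled by dfs.client.slow.io.warning.threshold.ms. You can confirm the effective value on your cluster:

```shell
# Read the client-side slow-I/O warning threshold (default 30000 ms).
# Property name assumed from Hadoop 2.x+; verify against your version.
hdfs getconf -confKey dfs.client.slow.io.warning.threshold.ms
```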

What can be the cause of it?

inteloid

1 Answer


Solved:

The MTU was set to 1500 bytes (standard frames); raising it to 9000 (jumbo frames) fixed the slow copies.
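As a sketch of the change: the interface MTU can be inspected and raised with iproute2. Here eth0 and datanode02 are placeholder names, and every NIC and switch along the path must also support jumbo frames for this to help:

```shell
# Show the current MTU (eth0 is an assumed interface name)
ip link show eth0

# Raise the MTU to 9000 on the fly (requires root; not persistent across reboots)
sudo ip link set dev eth0 mtu 9000

# Verify end-to-end jumbo-frame support to another data node without fragmentation:
# 9000-byte MTU minus 28 bytes of IP + ICMP headers = 8972-byte payload
ping -M do -s 8972 datanode02
```

If the ping fails with "message too long" while the smaller default payload works, some hop in between is still at 1500 and will fragment or drop jumbo frames.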

inteloid