We have an HDP cluster with 7 datanode machines. Under /hadoop/hdfs/namenode/current/ we can see more than 1500 edit files, each around 7M to 20M in size, as in the following listing (the commands we used to count them are shown after it):
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331008695-0000000002331071883
7.0M /hadoop/hdfs/namenode/current/edits_0000000002331071884-0000000002331128452
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331128453-0000000002331189702
7.1M /hadoop/hdfs/namenode/current/edits_0000000002331189703-0000000002331246584
11M /hadoop/hdfs/namenode/current/edits_0000000002331246585-0000000002331323246
8.0M /hadoop/hdfs/namenode/current/edits_0000000002331323247-0000000002331385595
7.7M /hadoop/hdfs/namenode/current/edits_0000000002331385596-0000000002331445237
7.9M /hadoop/hdfs/namenode/current/edits_0000000002331445238-0000000002331506718
9.1M /hadoop/hdfs/namenode/current/edits_0000000002331506719-0000000002331573154
9.0M /hadoop/hdfs/namenode/current/edits_0000000002331573155-0000000002331638086
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331638087-0000000002331697435
7.8M /hadoop/hdfs/namenode/current/edits_0000000002331697436-0000000002331755881
8.0M /hadoop/hdfs/namenode/current/edits_0000000002331755882-0000000002331814933
9.8M /hadoop/hdfs/namenode/current/edits_0000000002331814934-0000000002331884369
11M /hadoop/hdfs/namenode/current/edits_0000000002331884370-0000000002331955341
8.7M /hadoop/hdfs/namenode/current/edits_0000000002331955342-0000000002332019335
7.8M /hadoop/hdfs/namenode/current/edits_0000000002332019336-0000000002332074498
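For reference, this is roughly how we counted the segments and their total size (a quick sketch, using the same paths as above):

ls /hadoop/hdfs/namenode/current/edits_* | wc -l          # number of edit segments (~1500 here)
du -ch /hadoop/hdfs/namenode/current/edits_* | tail -1    # total size of all segments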
Is it possible to minimize the edit file size, or the number of edit files, through some HDFS configuration? We ask because our disks are small and the HDFS disk is now 100% full:
/dev/sdb 100G 100G 0 100% /hadoop/hdfs
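For example, is tuning the checkpoint/retention properties the right direction? We assume keys like the following are the relevant ones (we inspected their current values with hdfs getconf; please correct us if these are not the right knobs):

hdfs getconf -confKey dfs.namenode.checkpoint.period                  # seconds between periodic checkpoints
hdfs getconf -confKey dfs.namenode.checkpoint.txns                    # transactions before a checkpoint is forced
hdfs getconf -confKey dfs.namenode.num.extra.edits.retained           # extra transactions kept beyond what a restart needs
hdfs getconf -confKey dfs.namenode.max.extra.edits.segments.retained  # max extra edit segments kept beyond what a restart needs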