
What I want to do is upload a file to the HDFS server using Java from another PC.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {

    public static void main(String[] args) {
        Configuration configuration = new Configuration();
        // NameNode address of the remote HDFS cluster
        configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");
        // Explicit FileSystem implementations for the hdfs:// and file:// schemes
        configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
        configuration.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());

        FileSystem dfs = null;
        try {
            dfs = FileSystem.get(configuration);
            Path dst = new Path("/user/root");
            Path src = new Path("E:\\hdfs_test.txt");
            // Copy the local file up to HDFS
            dfs.copyFromLocalFile(src, dst);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (dfs != null) {
                try {
                    dfs.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

When I run the Java code I get the error below.

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root could only be written to 0 
of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this 
operation.

The connection itself seems to be working: a call that checks whether a file exists returns true or false without throwing any error.
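For what it's worth, the check that succeeds looks roughly like the sketch below (the file name here is just an example). As far as I can tell, exists() only needs the NameNode, whereas copyFromLocalFile() also has to stream block data to a DataNode, which seems to be where the error above happens.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExistsCheck {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");

        // exists() is a metadata call answered by the NameNode only,
        // so it works even though writing block data to the DataNode fails.
        try (FileSystem fs = FileSystem.get(configuration)) {
            boolean found = fs.exists(new Path("/user/root/hdfs_test.txt"));
            System.out.println("exists: " + found);
        }
    }
}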

I am currently using HDFS version 3.1.4. The environment settings are as follows.

core-site.xml

<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://0.0.0.0:9000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>

    <property>
            <name>dfs.replication</name>
            <value>1</value>
    </property>
    <property>
            <name>dfs.name.dir</name>
            <value>file:/root/hadoop-3.1.4/hdfs/namenode</value>
    </property>
    <property>
            <name>dfs.data.dir</name>
            <value>file:/root/hadoop-3.1.4/hdfs/datanode</value>
    </property>
</configuration>
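In case it helps with diagnosis, I think a small check like the one below (assuming DistributedFileSystem.getDataNodeStats() is the right API for this) would print the address the NameNode reports for the DataNode, which the client has to reach directly in order to write the file.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class HdfsDatanodeCheck {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");

        try (FileSystem fs = FileSystem.get(configuration)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Print the transfer address each DataNode advertises; the client
            // must be able to reach this host:port to write block data.
            for (DatanodeInfo node : dfs.getDataNodeStats()) {
                System.out.println(node.getHostName() + " -> " + node.getXferAddr());
            }
        }
    }
}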

I'd really appreciate it if you could let me know what I need to fix.

Below is the Overview content from the NameNode web UI, which I can access on port 9870.

Summary
Security is off.

Safemode is off.

4 files and directories, 0 blocks (0 replicated blocks, 0 erasure coded block groups) = 4 total filesystem object(s).

Heap Memory used 428.75 MB of 1.5 GB Heap Memory. Max Heap Memory is 13.92 GB.

Non Heap Memory used 64.39 MB of 65.66 MB Committed Non Heap Memory. Max Non Heap Memory is <unbounded>.

Configured Capacity:    467.96 GB
Configured Remote Capacity: 0 B
DFS Used:   28 KB (0%)
Non DFS Used:   60.68 GB
DFS Remaining:  383.45 GB (81.94%)
Block Pool Used:    28 KB (0%)
DataNodes usages% (Min/Median/Max/stdDev):  0.00% / 0.00% / 0.00% / 0.00%
Live Nodes  1 (Decommissioned: 0, In Maintenance: 0)
Dead Nodes  0 (Decommissioned: 0, In Maintenance: 0)
Decommissioning Nodes   0
Entering Maintenance Nodes  0
Total Datanode Volume Failures  0 (0 B)
Number of Under-Replicated Blocks   0
Number of Blocks Pending Deletion (including replicas)  0
Block Deletion Start Time   Mon Feb 08 14:10:10 +0900 2021
Last Checkpoint Time    Mon Feb 08 14:08:24 +0900 2021

NameNode Journal Status
Current transaction ID: 62

Journal Manager State
FileJournalManager(root=/root/hadoop-3.1.4/hdfs/namenode)   EditLogFileOutputStream(/root/hadoop-3.1.4/hdfs/namenode/current/edits_inprogress_0000000000000000062)
NameNode Storage
Storage Directory   Type    State
/root/hadoop-3.1.4/hdfs/namenode    IMAGE_AND_EDITS Active
DFS Storage Types
Storage Type    Configured Capacity Capacity Used   Capacity Remaining  Block Pool Used Nodes In Service
DISK    467.96 GB   28 KB (0%)  383.45 GB (81.94%)  28 KB   1
  • Run `dfs` on the server node - is the FS up and running? Is it healthy? – Matt Clark Feb 08 '21 at 05:32
  • It's possible that you have some bad FileSystem permissions as well. Be sure that the HDFS server process can write to the configured data directory. – Matt Clark Feb 08 '21 at 05:34
  • The permissions for the three folders /, /user, and /user/root are all 777. – JeongWon_Lee Feb 08 '21 at 05:36
  • Oh boy, Please delete those comments and [edit] your question to include those details. – Matt Clark Feb 08 '21 at 05:38
  • The question was rewritten, including the contents of the Overview Page. Can you confirm? – JeongWon_Lee Feb 08 '21 at 05:47
  • Did you format your node? `hadoop namenode -format` – Matt Clark Feb 08 '21 at 05:49
  • Yes, I ran the `hdfs namenode -format` command. I'll try again. – JeongWon_Lee Feb 08 '21 at 05:50
  • And to confirm you stopped the process first, and restart it with `start-dfs.sh` after formatting? – Matt Clark Feb 08 '21 at 05:54
  • What is your operating system? I wonder if you are hitting SELinux permission-related issues. If `getenforce` shows _Enforcing_, try running `setenforce 0` to disable SELinux (just to debug) and try again. If this works we can add some rules to SELinux. Be sure to turn SELinux back on with `setenforce 1`. – Matt Clark Feb 08 '21 at 05:55
  • I am running Ubuntu inside a Docker container and installed the related packages there. – JeongWon_Lee Feb 08 '21 at 05:58
  • The Docker container was created with the default options; nothing beyond `docker exec -it --name -h -p` was used. If the container needs access to my PC's resources and additional options are required, could you let me know which ones? – JeongWon_Lee Feb 08 '21 at 06:00
  • Hmm, maybe it's related to docker and hostnames then: https://stackoverflow.com/a/58294180/1790644 – Matt Clark Feb 08 '21 at 06:00
  • Thank you so much for your kind response. – JeongWon_Lee Feb 08 '21 at 06:10
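
Following up on the Docker hostname link in the comments: if that is the cause, I believe the client-side setting dfs.client.use.datanode.hostname is what the linked answer suggests (I haven't confirmed yet that it fixes my case). A minimal sketch of the upload with that setting:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUploadWithHostname {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");
        // Ask the client to connect to DataNodes by hostname instead of the
        // container-internal IP the NameNode hands back; the hostname then has
        // to resolve to the Docker host from the client machine.
        configuration.set("dfs.client.use.datanode.hostname", "true");

        try (FileSystem fs = FileSystem.get(configuration)) {
            fs.copyFromLocalFile(new Path("E:\\hdfs_test.txt"), new Path("/user/root"));
        }
    }
}

If I understand correctly, the DataNode's data transfer port (9866 by default in Hadoop 3) would also need to be published by the container for this to work.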
