I want to upload a file from another PC to an HDFS server using Java.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void main(String[] args) {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");
    configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
    configuration.set("fs.file.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());

    FileSystem dfs = null;
    try {
        dfs = FileSystem.get(configuration);
        Path dst = new Path("/user/root");
        Path src = new Path("E:\\hdfs_test.txt");
        dfs.copyFromLocalFile(src, dst);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (dfs != null) {
            try {
                dfs.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
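A variation of the connection setup that is sometimes suggested for clients running on a different machine than the cluster (I am not sure it applies here, so treat it as an assumption) is to make the client resolve datanodes by hostname via the HDFS client property dfs.client.use.datanode.hostname, in case the NameNode is handing back a datanode address that is only reachable from inside the server. A minimal sketch, with the same host/port as above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUploadByHostname {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");
        // Ask the NameNode for datanode hostnames instead of the
        // (possibly internal-only) IP addresses they registered with.
        configuration.set("dfs.client.use.datanode.hostname", "true");

        // try-with-resources closes the FileSystem automatically
        try (FileSystem dfs = FileSystem.get(configuration)) {
            dfs.copyFromLocalFile(new Path("E:\\hdfs_test.txt"), new Path("/user/root"));
        }
    }
}
```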
When I run the Java code, I get the error below.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root could only be written to 0
of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this
operation.
The connection itself seems fine: a call that checks whether a file exists returns true/false without any error.
I am currently using HDFS version 3.1.4. My configuration is as follows.
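For reference, the existence check I mean is the standard FileSystem.exists call; a minimal sketch with the same connection settings as above. Note that this is a metadata operation that only talks to the NameNode, which may be why it succeeds even though the write fails:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExistsCheck {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");

        // exists() is a NameNode-only RPC; it never contacts a datanode,
        // so it can succeed even when the datanode is unreachable from here.
        try (FileSystem dfs = FileSystem.get(configuration)) {
            System.out.println(dfs.exists(new Path("/user/root")));
        }
    }
}
```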
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://0.0.0.0:9000</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:/root/hadoop-3.1.4/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:/root/hadoop-3.1.4/hdfs/datanode</value>
</property>
</configuration>
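To see which datanode address the client actually resolves (the write path needs a direct connection to the datanode, unlike the NameNode-only metadata calls), one diagnostic sketch uses DistributedFileSystem.getDataNodeStats(); the class name here is my own choice:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class HdfsDatanodeAddresses {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://192.168.x.2:9000");

        try (FileSystem fs = FileSystem.get(configuration)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // Each entry is the host:port the client would use for block transfer;
            // if it is not reachable from this PC, writes will fail as above.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.println(dn.getXferAddr());
            }
        }
    }
}
```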
I'd really appreciate it if you could let me know what to fix.
Below is the Overview content of the NameNode web UI, which is accessible on port 9870.
Summary
Security is off.
Safemode is off.
4 files and directories, 0 blocks (0 replicated blocks, 0 erasure coded block groups) = 4 total filesystem object(s).
Heap Memory used 428.75 MB of 1.5 GB Heap Memory. Max Heap Memory is 13.92 GB.
Non Heap Memory used 64.39 MB of 65.66 MB Committed Non Heap Memory. Max Non Heap Memory is <unbounded>.
Configured Capacity: 467.96 GB
Configured Remote Capacity: 0 B
DFS Used: 28 KB (0%)
Non DFS Used: 60.68 GB
DFS Remaining: 383.45 GB (81.94%)
Block Pool Used: 28 KB (0%)
DataNodes usages% (Min/Median/Max/stdDev): 0.00% / 0.00% / 0.00% / 0.00%
Live Nodes 1 (Decommissioned: 0, In Maintenance: 0)
Dead Nodes 0 (Decommissioned: 0, In Maintenance: 0)
Decommissioning Nodes 0
Entering Maintenance Nodes 0
Total Datanode Volume Failures 0 (0 B)
Number of Under-Replicated Blocks 0
Number of Blocks Pending Deletion (including replicas) 0
Block Deletion Start Time Mon Feb 08 14:10:10 +0900 2021
Last Checkpoint Time Mon Feb 08 14:08:24 +0900 2021
NameNode Journal Status
Current transaction ID: 62
Journal Manager State
FileJournalManager(root=/root/hadoop-3.1.4/hdfs/namenode) EditLogFileOutputStream(/root/hadoop-3.1.4/hdfs/namenode/current/edits_inprogress_0000000000000000062)
NameNode Storage
Storage Directory Type State
/root/hadoop-3.1.4/hdfs/namenode IMAGE_AND_EDITS Active
DFS Storage Types
Storage Type Configured Capacity Capacity Used Capacity Remaining Block Pool Used Nodes In Service
DISK 467.96 GB 28 KB (0%) 383.45 GB (81.94%) 28 KB 1