
[Solved] Hadoop Error: The directory item limit is exceeded: limit=1048576 items=1048576

Problem Description:

The scheduling system failed to execute a Hive task, and the task kept failing on every retry. The errors are as follows:

java.io.IOException: java.net.ConnectException: Call From #hostname/#ip to #hostname:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

Caused by: java.net.ConnectException: Call From #hostname/#ip to #hostname:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

CONSOLE# Ended Job = job_1638255473937_0568 with exception 'java.io.IOException(java.net.ConnectException: Call From #hostname/#ip to #hostname:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused)'

CONSOLE# FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.net.ConnectException: Call From #hostname/#ip to #hostname:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

The root cause cannot be seen from this output, nor from the service logs on the server. It only becomes visible in the YARN application logs.

From the scheduling system, obtain the application ID (application_1638255473937_0568), then view the corresponding log information stored in HDFS.

View yarn log information:

[hdfs@centos hadoop27]$ yarn logs -applicationId application_1638255473937_0568

Key error message:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): The directory item limit of /tmp/hadoop-yarn/staging/history/done_intermediate/hdfs is exceeded: limit=1048576 items=1048576

Cause of the error:

A single HDFS directory has reached 1048576 items, which is the default per-directory limit, so you should either raise the limit or clean up the directory.

Solution 1:

Add the configuration parameter dfs.namenode.fs-limits.max-directory-items to the hdfs-site.xml configuration file, and increase the parameter value.
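
A minimal hdfs-site.xml snippet with an illustrative new value (the default is 1048576; the property accepts values up to 6400000):

    <property>
        <name>dfs.namenode.fs-limits.max-directory-items</name>
        <value>3200000</value>
    </property>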

Push the configuration file to all nodes of the Hadoop cluster and restart the HDFS service (the limit is enforced by the NameNode).

Solution 2:

If it is inconvenient to modify the configuration and restart the Hadoop cluster services, you can simply delete the offending directory /tmp/hadoop-yarn/staging/history/done_intermediate/hdfs first,

and then recreate it:

hadoop fs -rm -r /tmp/hadoop-yarn/staging/history/done_intermediate/hdfs
hadoop fs -mkdir /tmp/hadoop-yarn/staging/history/done_intermediate/hdfs

The number of files in this directory exceeded the limit because the JobHistory Server had never been started on this cluster, so the intermediate job history files were never moved out of done_intermediate and cleaned up.
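
If the JobHistory Server is not running, it can be started on its host; a sketch for Hadoop 2.x (which this hadoop27 cluster appears to be), assuming $HADOOP_HOME points at the installation (on Hadoop 3.x the equivalent is mapred --daemon start historyserver):

$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver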

Extended information:

1: How to view the yarn log storage directory and log details

1: View through the JobHistory Server web UI (here, http://IP:8801/jobhistory).

2: View through the yarn command (run it as the same user that submitted the task)

2.1: yarn application -list -appStates ALL

2.2: yarn logs -applicationId application_1638255473937_0568

3: View the log directly from its HDFS path (the aggregated logs are stored in HDFS, not in a local log directory on the CentOS system)

3.1: check the yarn-site.xml file and confirm the log configuration directory.

    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/data1/hadoop27/logs</value>
    </property>

3.2: view log file information

[hdfs@centos hadoop]$ hdfs dfs -ls /data1/hadoop27/logs/hdfs/logs/application_1638255473937_0568
Found 1 items
-rw-r-----   2 hdfs hdfs      66188 2021-11-30 20:24 /data1/hadoop27/logs/hdfs/logs/application_1638255473937_0568/centos.pp1.db_46654

3.3: view log details

3.3.1: yarn logs -applicationId application_1638255473937_0568 (same as 2)

3.3.2: hdfs dfs -cat /data1/hadoop27/logs/hdfs/logs/application_1638255473937_0568/centos.pp1.db_46654 ## view with -cat

3.3.3: hdfs dfs -cat /data1/hadoop27/logs/hdfs/logs/application_1638255473937_0568/centos.pp1.db_46654 > tmp.log ## save the output to tmp.log in the current directory with -cat

3.3.4: hdfs dfs -get /data1/hadoop27/logs/hdfs/logs/application_1638255473937_0568/centos.pp1.db_46654 ## download the HDFS file to the current directory with -get, then view it locally

2: HDFS operation commands:

1.1: check the number of folders and files in the specified directory of HDFS.

[hdfs@centos hadoop]$ hadoop fs -count /tmp/hadoop-yarn/staging/history/done_intermediate/hdfs
           1            1048576             3253261451467 /tmp/hadoop-yarn/staging/history/done_intermediate/hdfs

The first value, 1, is the number of directories counted (here, just the directory itself).

The second value, 1048576, is the number of files in the directory.

The third value, 3253261451467, is the total size in bytes of all files in the directory.

[Solved] IDEA Always Errors When Connecting to Remote Hadoop: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)

Error content: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)

1. There is a problem with the downloaded hadoop/bin directory.

I found a winutils build matching my Hadoop version, replaced the old hadoop/bin files with it, and tried again; it worked.

Make sure the versions are close!

My Hadoop cluster on Linux is 3.1.4, but the Windows side was originally configured with winutils 2.6.0, which kept failing. After switching to the 3.0.0 build it worked. Be sure to choose a version close to your cluster's.

2. Add a static initializer block to the class that accesses HDFS, so hadoop.dll is loaded before any native call:

static {
    try {
        // Use the absolute path to hadoop.dll in the winutils bin directory
        System.load("D:\\install\\winutils-master\\winutils-master\\hadoop-3.0.0\\bin\\hadoop.dll");
    } catch (UnsatisfiedLinkError e) {
        System.err.println("Native code library failed to load.\n" + e);
        System.exit(1);
    }
}

[Solved] Hadoop Error: ERROR: Attempting to operate on yarn resourcemanager as root

When Hadoop executes start-yarn.sh, it reports "ERROR: Attempting to operate on yarn resourcemanager as root".

Method 1

sudo vim ~/.bashrc

Add the following parameters at the end:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
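
After saving, reload the file in the current shell and try starting YARN again (a small sketch, assuming Hadoop's sbin directory is on PATH):

source ~/.bashrc
start-yarn.sh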

Method 2

Add the following parameters at the top of the start-dfs.sh and stop-dfs.sh files:

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Add the following parameters at the top of the start-yarn.sh and stop-yarn.sh files:

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

*The above files are in the sbin directory under the Hadoop root directory.

*The official recommendation is to create a dedicated account to start YARN instead of using root.
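
A sketch of that recommended approach, assuming a dedicated service account named yarn (the account name is illustrative); the environment variables shown above then point at that account instead of root:

# create a dedicated service account (illustrative name)
useradd -m yarn
# point the start/stop scripts at it instead of root
export YARN_RESOURCEMANAGER_USER=yarn
export YARN_NODEMANAGER_USER=yarn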

The most annoying Hadoop-related error: running VirtualBox prompts a 0x00000000 error, "The instruction at 0x00000000 referenced memory at 0x00000000. The memory could not be written."

How to fix the VirtualBox error that the memory at 0x00000000 cannot be written:

 

1. On Windows 7, most of the fixes found online involve replacing the relevant DLL files.

2. On Windows 10:

Generally, the exception occurs because Hyper-V has not been disabled in Windows, and VirtualBox conflicts with Hyper-V.

Open a Windows command prompt as administrator and enter the following command to disable Hyper-V:

bcdedit /set hypervisorlaunchtype off

Then restart the computer.

 

If you want to re-enable Hyper-V later, you can use the following command:

bcdedit /set hypervisorlaunchtype auto

 

If it still doesn't work, you can also install the VirtualBox Extension Pack that matches your VirtualBox installation:

https://www.virtualbox.org/wiki/Downloads (VirtualBox x.x.x Oracle VM VirtualBox Extension Pack)

(x.x.x here corresponds to the version of VirtualBox you downloaded)

 

Original text: https://coding.imooc.com/learn/questiondetail/182256.html

 

Hadoop command error: permission problem [How to Solve]

Error when root executes Hadoop command:

[root@vmocdp125 conf]# hadoop fs -ls /user/
[INFO] 17:50:42 main [RetryInvocationHandler]Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over vmocdp127.test.com/172.16.145.127:8020. Trying to fail over immediately.144
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1932)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1313)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3861)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1076)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)

	at org.apache.hadoop.ipc.Client.call(Client.java:1427)
	at org.apache.hadoop.ipc.Client.call(Client.java:1358)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
	at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
	at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
	at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1655)
	at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
	at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
	at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
	at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
	at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
	at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
Found 6 items
drwxrwx---   - ambari-qa hdfs          0 2016-09-14 19:48 /user/ambari-qa
drwxr-xr-x   - hcat      hdfs          0 2016-09-14 20:09 /user/hcat
drwx------   - hdfs      hdfs          0 2016-09-20 18:51 /user/hdfs
drwxr-xr-x   - hive      hdfs          0 2016-09-14 20:11 /user/hive
drwxr-xr-x   - hdfs      hdfs          0 2016-09-20 18:49 /user/ocetl
drwxrwxr-x   - spark     hdfs          0 2016-09-14 20:00 /user/spark

 

Switch to the hdfs user to run the command instead: sudo -u hdfs hadoop fs -ls /user

[root@vmocdp125 conf]# sudo -u hdfs hadoop fs -ls /user
Found 6 items
drwxrwx---   - ambari-qa hdfs          0 2016-09-14 19:48 /user/ambari-qa
drwxr-xr-x   - hcat      hdfs          0 2016-09-14 20:09 /user/hcat
drwx------   - hdfs      hdfs          0 2016-09-20 18:51 /user/hdfs
drwxr-xr-x   - hive      hdfs          0 2016-09-14 20:11 /user/hive
drwxr-xr-x   - hdfs      hdfs          0 2016-09-20 18:49 /user/ocetl
drwxrwxr-x   - spark     hdfs          0 2016-09-14 20:00 /user/spark

[Solved] hadoop:hdfs.DFSClient: Exception in createBlockOutputStream

Hadoop had been running tasks fine; after hadoop-dir was moved to a different location, the following errors were reported.

java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:40 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762363_21550
15/03/24 18:26:40 INFO hdfs.DFSClient: Excluding datanode 192.168.21.24:50010
copy from: /root/zenggq/jn2/data2w/t0.head_2000 to /recom1000/t0.head_2000
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762365_21552
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.21.23:50010
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.21.24:50010
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1166)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762366_21553
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.21.24:50010
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762367_21554
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.19.236:50010
15/03/24 18:26:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 INFO hdfs.DFSClient: Abandoning BP-1909118226-192.168.19.234-1427110524238:blk_1073762368_21555
15/03/24 18:26:41 INFO hdfs.DFSClient: Excluding datanode 192.168.21.30:50010
15/03/24 18:26:41 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Unable to create new block.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1100)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 WARN hdfs.DFSClient: Could not get block locations. Source file "/recom1000/t1.head_2000" - Aborting...
Exception in thread "main" java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
15/03/24 18:26:41 ERROR hdfs.DFSClient: Failed to close file /recom1000/t1.head_2000
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
[root@master jn2]#

 

After searching around for answers, some said the DataNode process was not running, and others said the firewall had not been turned off. Neither turned out to be my problem.

Then I deleted the data directory under hadoop-dir and reformatted the NameNode:

hadoop namenode -format

After that, everything was fine.
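
For reference, a sketch of the full sequence, assuming Hadoop's sbin directory is on PATH and the NameNode/DataNode storage lives under hadoop-dir (the path is illustrative; formatting wipes all HDFS metadata, so only do this if the data is expendable):

stop-dfs.sh                           # stop HDFS first
rm -rf /path/to/hadoop-dir/data/*     # clear NameNode/DataNode storage on every node (illustrative path)
hadoop namenode -format               # reformat the NameNode
start-dfs.sh                          # restart HDFS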

Hadoop Connect hdfs Error: could only be replicated to 0 nodes instead of minReplication (=1).

Hadoop connect to HDFS error: could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation. The client code is as follows:

    FileSystem fileSystem = null;

    // Initialize before executing @Test
    @Before
    public void init() throws IOException, InterruptedException, URISyntaxException {
        fileSystem = FileSystem.get(new URI("hdfs://master:8020"), new Configuration(), "root");
    }

    @Test
    public void write() {
        try {
            FSDataOutputStream fdos = fileSystem.create(new Path("/testing/file01.txt"), true);
            fdos.writeBytes("Test text for the txt file");
            fdos.flush();
            fdos.close();
            fileSystem.close();
        } catch (Exception e) {
            e.printStackTrace(); // do not swallow the exception silently
        }
    }

The error is as follows

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File  could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
	at org.apache.hadoop.ipc.Client.call(Client.java:1427)
	at org.apache.hadoop.ipc.Client.call(Client.java:1337)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
	at com.sun.proxy.$Proxy12.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)

Finally, it turned out that the client machine could not connect directly to the Hadoop DataNodes; after the network environment was fixed, the problem was solved.
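
If the network itself cannot be changed, a commonly used client-side workaround (not what was done here) is to have the HDFS client connect to DataNodes by hostname instead of the internal IP addresses returned by the NameNode, assuming those hostnames resolve and are reachable from the client; a minimal sketch:

Configuration conf = new Configuration();
// have the client dial DataNodes by hostname rather than their internal IPs
conf.setBoolean("dfs.client.use.datanode.hostname", true);
FileSystem fileSystem = FileSystem.get(new URI("hdfs://master:8020"), conf, "root");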

[Solved] Hadoop Error: Input path does not exist: hdfs://Master:9000/user/hadoop/input

Problem Description:

org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://Master:9000/user/hadoop/input
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.hadoop.examples.Grep.run(Grep.java:78)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.Grep.main(Grep.java:103)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Problem analysis: the input path does not exist. The job is given the relative path input, which HDFS resolves to /user/<current user>/input (here /user/hadoop/input), and that directory had not been created.

Solution: create the input directory in HDFS and upload the input files:

hdfs dfs -mkdir -p /user/hadoop

hdfs dfs -mkdir input

hdfs dfs -put ./*.xml input

hadoop jar ………..
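
For reference, the stack trace above comes from the bundled Grep example, so a typical run looks roughly like this (the examples jar path and the regex are illustrative and depend on your installation):

# illustrative: adjust the jar path and pattern to your installation
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'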