After Hadoop has been running for a long time, executing stop-dfs.sh (or stop-all.sh) may produce errors like the following:
Stopping namenodes on [localhost]
localhost: no namenode to stop
localhost: no datanode to stop
Stopping secondary namenodes [localhost]
localhost: no secondarynamenode to stop
At this point Hadoop is in fact still running: the file system is still reachable through the web UI on port 50070, and running stop-all.sh after start-all.sh has no effect, which means you cannot control Hadoop at all.
The most common cause of this problem is that Hadoop's stop scripts find the mapred and DFS daemons on each node by the process IDs recorded in their PID files, and those files are saved in /tmp by default. Linux periodically cleans this directory (typically every seven days to about a month), so once hadoop-root-namenode.pid, hadoop-root-datanode.pid, hadoop-root-secondarynamenode.pid, and the other PID files have been deleted, the stop scripts can no longer find the processes they are supposed to stop.
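A quick way to confirm this cause is to look for the PID files. The check below is a minimal sketch that assumes the default location (/tmp) and daemons started as root; adjust the user name in the file pattern to match your setup:
# List Hadoop's PID files in the default location; if the daemons are
# still running but these files are gone, /tmp cleanup is the likely culprit
ls -l /tmp/hadoop-root-*.pid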
There are two other possible causes (a quick check follows this list):
- The environment variable $HADOOP_PID_DIR was changed after you started Hadoop.
- stop-dfs.sh and the other stop scripts were executed as a different user than the one that started Hadoop.
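To rule out these two causes, check what HADOOP_PID_DIR currently points to and which user owns the running daemons. A minimal sketch:
# Print the PID directory Hadoop is configured to use
# (empty output means the default, /tmp, is in effect)
echo "$HADOOP_PID_DIR"
# Show the owner of the running Hadoop java processes;
# the stop scripts must be run as that same user
ps -ef | grep java | grep -i hadoop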
Solution:
Permanent fix: edit the file $HADOOP_HOME/etc/hadoop/hadoop-env.sh and change the line export HADOOP_PID_DIR=${HADOOP_PID_DIR} so that the path points to a directory you specify. Hadoop will then save its PID files in that directory instead of /tmp, so Linux will not delete them automatically. For example:
export HADOOP_PID_DIR=/usr/local/hadoop/pids/
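Also make sure the directory exists and is writable by the account that starts Hadoop, otherwise the daemons cannot write their PID files at startup. A minimal sketch, assuming the example path above and a user and group named hadoop (both are assumptions; substitute your own):
# Create the PID directory and give it to the account that runs Hadoop
mkdir -p /usr/local/hadoop/pids
chown hadoop:hadoop /usr/local/hadoop/pids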
Fix once the problem has already occurred:
At this point the scripts cannot stop the processes, but you can stop them manually: find all of Hadoop's process IDs with ps -ef | grep java | grep hadoop, kill each one forcibly with kill -9 <pid>, and then run start-dfs.sh, start-yarn.sh, and so on to restart Hadoop, as in the sketch below. Since restarting writes fresh PID files, stop-dfs.sh and the other stop scripts will work again afterwards.
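Put together, the manual recovery looks like this sketch (the PID 12345 is a placeholder; kill whatever PIDs ps actually prints):
# Find every Hadoop-related java process
ps -ef | grep java | grep hadoop
# Force-kill each PID reported above (12345 is a placeholder)
kill -9 12345
# Restart the cluster; fresh PID files are written on startup
start-dfs.sh
start-yarn.sh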
Similar Posts:
- [Solved] Hadoop Error: ERROR: Attempting to operate on yarn resourcemanager as root
- [Solved] Hadoop runs start-dfs.sh error: attempting to operate on HDFS as root
- [Solved] Hadoop3 Install Error: there is no HDFS_NAMENODE_USER defined. Aborting operation.
- [Solved] Call to localhost/127.0.0.1:9000 failed on connection exception:java.net.ConnectException
- How to Solve MYSQL error “no directory, logging in with home = -“
- [Solved] HDFS Filed to Start namenode Error: Premature EOF from inputStream;Failed to load FSImage file, see error(s) above for more info
- [Solved] Phoenix startup error: issuing: !connect jdbc:phoenix:hadoop162:2181 none…
- ssh: Name or service not known
- [Solved] hadoop:hdfs.DFSClient: Exception in createBlockOutputStream
- Troubleshooting of nginx error under Windows: CreateFile() "XXX/logs/nginx.pid" failed