[Hadoop 2.x] Causes of and solutions for stop-dfs.sh and similar commands failing after Hadoop has been running for a while

After Hadoop has been running for a long time, running stop-dfs.sh (or stop-all.sh) may produce errors like the following:

Stopping namenodes on [localhost]
localhost: no namenode to stop
localhost: no datanode to stop
Stopping secondary namenodes [localhost]
localhost: no secondarynamenode to stop

At this point Hadoop itself is still working: the file system is still accessible through port 50070, and running stop-all.sh after start-all.sh has no effect, which means you cannot control Hadoop at all.

The most common cause is that Hadoop's stop scripts locate the MapReduce and DFS daemons through PID files, which are saved in /tmp by default. Linux periodically cleans out this directory (typically every seven or thirty days), so once files such as hadoop-root-namenode.pid, hadoop-root-datanode.pid and hadoop-root-secondarynamenode.pid have been deleted, the scripts can no longer find the NameNode, DataNode and SecondaryNameNode processes.
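
To confirm this is the cause, you can check whether the PID files are still present. A minimal check, assuming the default /tmp location and the hadoop-&lt;user&gt;-&lt;daemon&gt;.pid naming that Hadoop 2.x uses:

ls -l /tmp/hadoop-*.pid 2>/dev/null || echo "PID files are gone - stop scripts will fail"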

There are two other possible causes of this problem:

The environment variable $HADOOP_PID_DIR was changed after Hadoop was started.

stop-dfs.sh and related commands were run as a different user from the one that started Hadoop. (A quick check for both causes is sketched below.)
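
To rule out both causes, compare where the stop scripts will look for PID files, and who owns the running daemons, against the user running the script. A minimal sketch:

echo "HADOOP_PID_DIR = ${HADOOP_PID_DIR:-/tmp (default)}"   # where the stop scripts look for PID files
ps -ef | grep java | grep hadoop | awk '{print $1, $2}'     # owner and PID of each Hadoop daemon
whoami                                                      # user about to run stop-dfs.sh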

Solutions:

Permanent solution: edit the $HADOOP_HOME/etc/hadoop/hadoop-env.sh file and change the line export HADOOP_PID_DIR=${HADOOP_PID_DIR} so that the path points to a directory you specify. Hadoop will then save the PID files there, out of reach of Linux's automatic /tmp cleanup. For example:

export HADOOP_PID_DIR=/usr/local/hadoop/pids/
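
The directory must exist and be writable by the user that starts the daemons; the new setting takes effect the next time Hadoop is started. For example (assuming the daemons run as a user named hadoop):

mkdir -p /usr/local/hadoop/pids
chown hadoop:hadoop /usr/local/hadoop/pids   # adjust to whichever user actually starts the daemons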

Immediate fix when the problem has already occurred:

At this point the scripts can no longer stop the processes, but you can stop them manually. Find all the Hadoop process IDs with ps -ef | grep java | grep hadoop, force-kill them with kill -9 <process ID>, and then run start-dfs.sh, start-yarn.sh and the other start commands to bring Hadoop back up. After that, stop-dfs.sh and the other stop commands will take effect again.
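
Put together, the recovery sequence looks like this (the PIDs below are placeholders; use the ones reported by ps):

ps -ef | grep java | grep hadoop   # list the Hadoop Java processes and their PIDs
kill -9 12345 12346 12347          # force-kill them (example PIDs)
start-dfs.sh                       # restart the HDFS daemons
start-yarn.sh                      # restart the YARN daemons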