[Solved] Hadoop start-dfs.sh error: Attempting to operate on HDFS as root

Error message:

Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.

Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.

Starting secondary namenodes
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.

Starting journal nodes
ERROR: Attempting to operate on hdfs journalnode as root
ERROR: but there is no HDFS_JOURNALNODE_USER defined. Aborting operation.

Starting ZK Failover Controllers on NN hosts
ERROR: Attempting to operate on hdfs zkfc as root
ERROR: but there is no HDFS_ZKFC_USER defined. Aborting operation.

Reason:

Hadoop 3.x refuses to start the HDFS daemons as root unless a dedicated user variable (HDFS_NAMENODE_USER, HDFS_DATANODE_USER, and so on) is defined for each daemon. Here the services were started with the root account, but none of these variables had been predefined.
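Because all of the start/stop scripts read etc/hadoop/hadoop-env.sh, an alternative to patching each script as in the solution below is to export the daemon users once in that file. This is a minimal sketch, assuming the install lives under /home/hadoop as in the steps below:

vim /home/hadoop/etc/hadoop/hadoop-env.sh

# append at the end of hadoop-env.sh:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root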

Solution:

* This step must be performed on every machine; alternatively, modify the files on one machine first and then synchronize them to the other machines with scp (see step 3).

1. Modify start-dfs.sh and stop-dfs.sh

cd /home/hadoop/sbin
vim start-dfs.sh
vim stop-dfs.sh

Add the following near the top of both files:

# Run each HDFS daemon as root
HDFS_ZKFC_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_DATANODE_USER=root
# Only needed when running a Kerberos-secured DataNode
HDFS_DATANODE_SECURE_USER=root
# HADOOP_SECURE_DN_USER is the deprecated Hadoop 2.x name for the same setting
#HADOOP_SECURE_DN_USER=root
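If you prefer not to open an editor, a sketch using GNU sed (the \n escapes in the a command are a GNU extension) inserts the uncommented variables after the shebang line of both scripts:

cd /home/hadoop/sbin
for f in start-dfs.sh stop-dfs.sh; do
  # insert the daemon-user variables after line 1 (the shebang)
  sed -i '1a HDFS_ZKFC_USER=root\nHDFS_JOURNALNODE_USER=root\nHDFS_NAMENODE_USER=root\nHDFS_SECONDARYNAMENODE_USER=root\nHDFS_DATANODE_USER=root\nHDFS_DATANODE_SECURE_USER=root' "$f"
done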

2. Modify start-yarn.sh and stop-yarn.sh

cd /home/hadoop/sbin
vim start-yarn.sh
vim stop-yarn.sh

Add the following near the top of both files:

# Carried over from step 1; only relevant for a Kerberos-secured DataNode
#HADOOP_SECURE_DN_USER=root
HDFS_DATANODE_SECURE_USER=root
# Run the YARN daemons as root
YARN_NODEMANAGER_USER=root
YARN_RESOURCEMANAGER_USER=root
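A quick way to confirm the variables landed in all four scripts (a plain grep check, nothing Hadoop-specific):

cd /home/hadoop/sbin
# each file should list the *_USER=root lines you just added
grep -n "_USER=root" start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh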

3. Synchronize to the other machines (c2, c3, and c4 here are the remaining cluster nodes)

cd /home/hadoop/sbin
scp * c2:/home/hadoop/sbin
scp * c3:/home/hadoop/sbin
scp * c4:/home/hadoop/sbin
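The commands above copy everything in sbin; an equivalent loop that pushes just the four edited scripts, followed by starting the daemons again and checking with jps (which ships with the JDK), looks like this:

cd /home/hadoop/sbin
# push the edited scripts to the other nodes
for host in c2 c3 c4; do
  scp start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh "$host":/home/hadoop/sbin/
done

# start the daemons again and verify on the master
./start-dfs.sh
./start-yarn.sh
jps    # should now list NameNode, DataNode, ResourceManager, NodeManager, etc.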
