Error message:
Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Starting journal nodes
ERROR: Attempting to operate on hdfs journalnode as root
ERROR: but there is no HDFS_JOURNALNODE_USER defined. Aborting operation.
Starting ZK Failover Controllers on NN hosts
ERROR: Attempting to operate on hdfs zkfc as root
ERROR: but there is no HDFS_ZKFC_USER defined. Aborting operation.
Reason:
The cluster is being started as root, but Hadoop 3.x refuses to launch a daemon as root unless the account that should run it is predefined via the corresponding environment variable (HDFS_NAMENODE_USER, HDFS_DATANODE_USER, and so on).
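The guard works roughly as follows: when the start/stop scripts detect they are running as root (EUID 0), they require a matching <SERVICE>_USER variable naming the daemon's owner and abort otherwise. Conceptually it behaves like this sketch (illustrative only, not the actual Hadoop shell source):
# illustrative sketch of the guard, not Hadoop's real code
if [[ ${EUID} -eq 0 && -z "${HDFS_NAMENODE_USER}" ]]; then
  echo "ERROR: Attempting to operate on hdfs namenode as root"
  echo "ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation."
  exit 1
fi
Defining the variables, as shown below, satisfies the guard.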
Solution:
*Note: these changes must be made on every machine. Alternatively, edit the files on one machine first and then synchronize them to the others with scp (step 3 below).
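An alternative to patching the scripts is to export the same variables once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh, which the start/stop scripts source on every run (a sketch, assuming the install root is /home/hadoop as in the paths below):
# in /home/hadoop/etc/hadoop/hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
The steps below patch the scripts directly instead.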
1. Modify start-dfs.sh and stop-dfs.sh
cd /home/hadoop/sbin
vim start-dfs.sh
vim stop-dfs.sh
Add the following at the top of both files (after the shebang line):
HDFS_ZKFC_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=root
#HADOOP_SECURE_DN_USER=root
(HADOOP_SECURE_DN_USER is the deprecated Hadoop 2.x name; HDFS_DATANODE_SECURE_USER replaces it in Hadoop 3.x, so the old name is left commented out.)
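If you prefer not to edit interactively, the same lines can be inserted right after the shebang with sed (a convenience sketch; it assumes GNU sed and makes a backup first):
cd /home/hadoop/sbin
for f in start-dfs.sh stop-dfs.sh; do
  cp "$f" "$f.bak"   # keep a backup of the original script
  sed -i '1a HDFS_ZKFC_USER=root\nHDFS_JOURNALNODE_USER=root\nHDFS_NAMENODE_USER=root\nHDFS_SECONDARYNAMENODE_USER=root\nHDFS_DATANODE_USER=root\nHDFS_DATANODE_SECURE_USER=root' "$f"
done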
2. Modify start-yarn.sh and stop-yarn.sh
cd /home/hadoop/sbin
vim start-yarn.sh
vim stop-yarn.sh
Add the following at the top of both files (after the shebang line):
#HADOOP_SECURE_DN_USER=root
HDFS_DATANODE_SECURE_USER=root
YARN_NODEMANAGER_USER=root
YARN_RESOURCEMANAGER_USER=root
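A quick grep confirms that all four scripts now define the users:
cd /home/hadoop/sbin
grep -n "_USER=root" start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh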
3. Synchronize to other machines
cd /home/hadoop/sbin
scp * c2:/home/hadoop/sbin
scp * c3:/home/hadoop/sbin
scp * c4:/home/hadoop/sbin
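After the copies finish, restart the cluster and the errors should be gone. Running jps on each node lists the live daemons (which ones appear depends on the node's role, e.g. NameNode, DataNode, JournalNode, DFSZKFailoverController, ResourceManager, NodeManager):
/home/hadoop/sbin/start-dfs.sh
/home/hadoop/sbin/start-yarn.sh
jps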