When running Hadoop's start-yarn.sh as root, it fails with the error "ERROR: Attempting to operate on yarn resourcemanager as root".
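On Hadoop 3.x the full console output typically looks like this (matching nodemanager lines appear as well):

Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.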
Method 1
sudo vim ~/.bashrc
Append the following environment variables at the end of the file:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
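The variables in ~/.bashrc only take effect in new shells; to apply them to the current session and confirm they are set (a quick sanity check, nothing Hadoop-specific):

source ~/.bashrc
echo "$YARN_RESOURCEMANAGER_USER"   # should print: root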
Method 2
Add the following variables at the top of the start-dfs.sh and stop-dfs.sh files:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Add the following variables at the top of the start-yarn.sh and stop-yarn.sh files (see the scripted version of both edits below):
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
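Both edits can also be scripted. A minimal sketch using GNU sed, assuming $HADOOP_HOME points at your Hadoop installation root; it inserts each set of variables right after the shebang line (line 1) of the corresponding scripts:

cd "$HADOOP_HOME/sbin"

# start/stop-dfs.sh: insert the HDFS user variables after the shebang
sed -i '2i HDFS_DATANODE_USER=root\nHADOOP_SECURE_DN_USER=hdfs\nHDFS_NAMENODE_USER=root\nHDFS_SECONDARYNAMENODE_USER=root' start-dfs.sh stop-dfs.sh

# start/stop-yarn.sh: insert the YARN user variables the same way
sed -i '2i YARN_RESOURCEMANAGER_USER=root\nHADOOP_SECURE_DN_USER=yarn\nYARN_NODEMANAGER_USER=root' start-yarn.sh stop-yarn.sh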
*The above files are located in the sbin directory under the Hadoop installation root.
*The official recommendation is to create a dedicated, non-root account for starting YARN rather than running it as root.
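If you follow that recommendation instead of running everything as root, a minimal sketch (the hdfs/yarn account names and the hadoop group are illustrative, not mandated by Hadoop):

# create dedicated system accounts (names are illustrative)
sudo groupadd hadoop
sudo useradd -r -g hadoop hdfs
sudo useradd -r -g hadoop yarn
sudo chown -R hdfs:hadoop "$HADOOP_HOME"   # assumes $HADOOP_HOME is set

# then point the *_USER variables at those accounts instead of root, e.g.:
export HDFS_NAMENODE_USER=hdfs
export HDFS_DATANODE_USER=hdfs
export HDFS_SECONDARYNAMENODE_USER=hdfs
export YARN_RESOURCEMANAGER_USER=yarn
export YARN_NODEMANAGER_USER=yarn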