Running a MapReduce program reports the following error:

```
Container exited with a non-zero exit code 127. Error file: prelaunch.err
/bin/bash: /us/rbin/jdk1.8.0/bin/java: No such file or directory
```
Cause: YARN's Java path is not configured, or the wrong path is configured.
Solution: change JAVA_HOME to the correct path, for example:
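A minimal sketch of the fix in `yarn-env.sh`; the JDK path below is a placeholder, not taken from the original post — substitute the actual location of your JDK (`readlink -f "$(which java)"` can help you find it):

```shell
# In hadoop/etc/hadoop/yarn-env.sh
# Point JAVA_HOME at the real JDK install directory.
# /usr/lib/jvm/jdk1.8.0 is an assumed example path -- adjust to your system.
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0
```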
The `yarn-env.sh` on every machine must be modified; you can use the `scp` command to copy the updated `yarn-env.sh` to the other machines.
Suppose there are four machines c1, c2, c3, and c4, and you are currently on c1:

```
cd hadoop/etc/hadoop
scp yarn-env.sh c2:/hadoop/etc/hadoop
scp yarn-env.sh c3:/hadoop/etc/hadoop
scp yarn-env.sh c4:/hadoop/etc/hadoop
```
Note: the change does not take effect until Hadoop is restarted.
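For instance, assuming Hadoop's `sbin` directory is on your PATH, YARN can be restarted with its bundled scripts so the new `JAVA_HOME` is picked up:

```shell
# Restart YARN so NodeManagers re-read yarn-env.sh.
stop-yarn.sh
start-yarn.sh

# Optionally verify that the NodeManagers are back up:
yarn node -list
```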