This is just my personal practice; if anything here is wrong, corrections and discussion are welcome.
I. Cause
Executing either of the following two basic HDFS commands produces an error:
hdfs dfs -get /home/mr/data/* ./
hdfs dfs -ls /home/mr/data/*
These are two ordinary HDFS commands, so why do they fail? To find out, open the hdfs command script and take a look.
II. Analysis
1) Use the following command to find the path of the hdfs command:
which hdfs
Open the hdfs script with vim and you will find that when hdfs dfs is executed, the HADOOP_CLIENT_OPTS configuration item is applied. Searching further shows that this configuration item is usually set in /etc/hadoop/conf/hadoop-env.sh.
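For reference, the relevant part of the bin/hdfs script looks roughly like this in Hadoop 2.x (the exact contents vary by version): the dfs subcommand runs org.apache.hadoop.fs.FsShell and appends HADOOP_CLIENT_OPTS to the JVM options that are finally passed to java.

# excerpt from bin/hdfs (Hadoop 2.x, approximate)
elif [ "$COMMAND" = "dfs" ] ; then
  CLASS=org.apache.hadoop.fs.FsShell
  HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
...
# near the end of the script, the client JVM is started with these options
exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"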
Opening the hadoop-env.sh script shows that this configuration item is left at the default, namely 256 MB.
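In other words, the line in hadoop-env.sh presumably looks something like the following (the exact wording differs between distributions; 256 MB is the value observed above):

# /etc/hadoop/conf/hadoop-env.sh -- client-side JVM heap limit (observed default)
export HADOOP_CLIENT_OPTS="-Xmx256m $HADOOP_CLIENT_OPTS"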
2) A check shows that there are more than 10,000 small files under the /home/mr/data directory, although their total size is only about 100 MB. My guess is that with so many files the client has to load a large amount of file metadata, so the client-side JVM runs out of memory (this guess may not be correct; a better explanation is welcome).
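If you want to verify the file count and total size yourself, the standard HDFS shell commands below will do it for the same path:

hdfs dfs -count /home/mr/data      # columns: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
hdfs dfs -du -s -h /home/mr/data   # total size of the directory, human readable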
III. Solution
Increasing the HADOOP_CLIENT_OPTS setting solves the problem. Either of the following two forms works:
export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"
hdfs dfs -get /home/mr/data/* ./

HADOOP_CLIENT_OPTS="-Xmx1024m" hdfs dfs -get /home/mr/data/* ./
In addition, you can make the change permanent by modifying the value in hadoop-env.sh.
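For example, the permanent change could look roughly like this (1024 MB is just the value used above; pick a size that fits your data):

# /etc/hadoop/conf/hadoop-env.sh -- raise the client JVM heap permanently
export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"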