Error message:
Failure Info: Job initialization failed:
java.io.IOException: Split metadata size exceeded 10000000. Aborting job job_201205162059_1073852
    at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:48)
    at org.apache.hadoop.mapred.JobInProgress.createSplits(JobInProgress.java:817)
    at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:711)
    at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4028)
    at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Error reason: the job.splitmetainfo file of this job exceeds the configured size limit.
1. job.splitmetainfo records the metadata of the job's input splits, i.e. the mapping from each split to its HDFS blocks and the slave nodes that host them.
The file is stored under ${hadoop.tmp.dir}/mapred/staging/${user.name}/.staging/jobid/
2. The parameter mapreduce.jobtracker.split.metainfo.maxsize controls the maximum allowed size of this file. The default is 10000000 bytes (about 10 MB).
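One way to resolve the error is to raise this limit (or disable the check by setting it to -1) in mapred-site.xml on the JobTracker node and restart the JobTracker. A minimal sketch; the value 100000000 (~100 MB) is an arbitrary example, not a recommendation:

```xml
<!-- mapred-site.xml on the JobTracker node -->
<property>
  <name>mapreduce.jobtracker.split.metainfo.maxsize</name>
  <!-- raise the limit to ~100 MB; -1 disables the size check entirely -->
  <value>100000000</value>
</property>
```

Note that an oversized split metadata file usually means the job has a very large number of input splits (for example, many small input files), so consolidating the input or switching to CombineFileInputFormat may be a better long-term fix than simply raising the limit.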
Similar Posts:
- [Solved] Hadoop running jar package error: java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 1
- [Solved] Tez Compression codec com.hadoop.compression.lzo.LzoCodec not found.
- Hive Error: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
- [Solved] Invalid character found in method name. HTTP method names must be tokens
- [Solved] Hadoop Error: Input path does not exist: hdfs://Master:9000/user/hadoop/input
- [Solved] Flume startup error: org.apache.flume.FlumeException: Failed to set up server socket
- Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
- [Solved] MapReduce Output PATH error: Exception in thread “main” org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/D:/output already exists
- [Solved] Kylin error: java.lang.ArrayIndexOutOfBoundsException: -1
- [Solved] Hadoop running jar package error: Exception in thread “main” java.lang.ClassNotFoundException: Filter