Error message:
Failure Info: Job initialization failed: java.io.IOException: Split metadata size exceeded 10000000. Aborting job job_201205162059_1073852
    at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:48)
    at org.apache.hadoop.mapred.JobInProgress.createSplits(JobInProgress.java:817)
    at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:711)
    at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4028)
    at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Error reason: the job.splitmetainfo file of this job exceeds the configured size limit.
1. The job.splitmetainfo file records the metadata of the job's input splits: the mapping from each split to its HDFS blocks and to the slave nodes that host them. A job over a very large number of files (or blocks) produces a correspondingly large split metadata file.
The file is stored under ${hadoop.tmp.dir}/mapred/staging/${user.name}/.staging/<jobid>/
2. The parameter mapreduce.jobtracker.split.metainfo.maxsize controls the maximum allowed size of this file. The default is 10000000 bytes (roughly 10 MB); setting it to -1 disables the check.
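One way to raise the limit is to set the parameter in mapred-site.xml on the JobTracker. This is a sketch: the value 100000000 is an arbitrary example, and since the JobTracker (MRv1) reads this property from its own configuration, a JobTracker restart is typically needed for the change to take effect.

```xml
<!-- mapred-site.xml on the JobTracker (MRv1) -->
<property>
  <name>mapreduce.jobtracker.split.metainfo.maxsize</name>
  <!-- Example value: 100000000 bytes (~100 MB); default is 10000000.
       Set to -1 to disable the size check entirely. -->
  <value>100000000</value>
</property>
```

Raising the limit treats the symptom; if the job reads a huge number of small files, reducing the split count (for example with CombineFileInputFormat) is often the better fix, since a huge split count also means a huge number of map tasks.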