How mapred.map.tasks affects the number of map tasks


How many input splits are produced is directly tied to how many map tasks run: one map task is launched per split. (The analysis below follows the new API, which ultimately calls back into the old JobClient code to do the concrete work.)

You might say the system computes this value from the file size and the split size, but how exactly is it computed? Let's walk through the source code step by step.

Open the Job class.

First, Job.submit():

public void submit() throws IOException, InterruptedException,
                              ClassNotFoundException {
    ensureState(JobState.DEFINE);
    setUseNewAPI();
   
    // Connect to the JobTracker and submit the job
    connect();
    info = jobClient.submitJobInternal(conf); // calls JobClient.submitJobInternal(); see Source 2 below

    super.setJobID(info.getID());
    state = JobState.RUNNING;
   }
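
For context, here is a minimal driver sketch showing how user code reaches Job.submit(). The class name, the paths, and the value 10 are made-up examples; the API calls are the standard Hadoop 1.x ones:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SplitDemoDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Only a hint: as we will see, the real map count is recomputed from the splits.
    conf.setInt("mapred.map.tasks", 10);

    Job job = new Job(conf, "split-demo");    // Hadoop 1.x-style constructor
    job.setJarByClass(SplitDemoDriver.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.submit();                             // enters the method shown above
  }
}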

 

Source 2: JobClient.submitJobInternal()

Look at the following fragment of that method:

    // Create the splits for the job
    FileSystem fs = submitJobDir.getFileSystem(jobCopy);
    LOG.debug("Creating splits at " + fs.makeQualified(submitJobDir));
    int maps = writeSplits(context, submitJobDir); // the actual map count is computed here; see Source 3 below
    jobCopy.setNumMapTasks(maps);
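
Note the last line: setNumMapTasks() overwrites mapred.map.tasks with the number of splits that was actually computed, so whatever the user configured is only a hint, not a guarantee.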

Source 3: writeSplits()

private int writeSplits(org.apache.hadoop.mapreduce.JobContext job,
      Path jobSubmitDir) throws IOException,
      InterruptedException, ClassNotFoundException {
    JobConf jConf = (JobConf)job.getConfiguration();
    int maps;
    if (jConf.getUseNewMapper()) { // new API
      maps = writeNewSplits(job, jobSubmitDir); // see Source 4 below
    } else { // old API
      maps = writeOldSplits(jConf, jobSubmitDir);
    }
    return maps;
  }
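
The old-API branch is where mapred.map.tasks actually enters the arithmetic: writeOldSplits() calls the old org.apache.hadoop.mapred.FileInputFormat.getSplits(JobConf, int numSplits) with numSplits taken from JobConf.getNumMapTasks(), i.e. from mapred.map.tasks. Its split-size logic looks roughly like this (a simplified sketch of the old-API source, not a verbatim copy):

// Sketch of the old org.apache.hadoop.mapred.FileInputFormat.getSplits(job, numSplits):
long totalSize = 0;                        // sum of all input file lengths
for (FileStatus file : files) {
  totalSize += file.getLen();
}
long goalSize = totalSize / (numSplits == 0 ? 1 : numSplits); // numSplits = mapred.map.tasks
long minSize = Math.max(job.getLong("mapred.min.split.size", 1), minSplitSize);

// ... and then, per file:
long splitSize = Math.max(minSize, Math.min(goalSize, blockSize));

So on the old path, a larger mapred.map.tasks shrinks goalSize and can therefore produce more, smaller splits (bounded by the block size and mapred.min.split.size); on the new path, examined next, it plays no role in the split size at all.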

Source 4: writeNewSplits()

  @SuppressWarnings("unchecked")
  private <T extends InputSplit>
  int writeNewSplits(JobContext job, Path jobSubmitDir) throws IOException,
      InterruptedException, ClassNotFoundException {
    Configuration conf = job.getConfiguration();
    InputFormat<?, ?> input =
      ReflectionUtils.newInstance(job.getInputFormatClass(), conf);

    // The key line: delegate to InputFormat.getSplits(). We use
    // FileInputFormat's implementation as the example; see Source 5 below.
    List<InputSplit> splits = input.getSplits(job);
    T[] array = (T[]) splits.toArray(new InputSplit[splits.size()]);

    // sort the splits into order based on size, so that the biggest
    // go first
    Arrays.sort(array, new SplitComparator());
    JobSplitWriter.createSplitFiles(jobSubmitDir, conf,
        jobSubmitDir.getFileSystem(conf), array);
    return array.length;
  }
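
Which getSplits() implementation runs here is decided by the input format configured on the job; when none is set, the new API defaults to TextInputFormat, which inherits getSplits() from FileInputFormat, the implementation we examine next. Setting it explicitly in the driver would look like:

import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Selecting the (default) input format explicitly; its getSplits()
// comes from FileInputFormat, shown in Source 5 below.
job.setInputFormatClass(TextInputFormat.class);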

 

Source 5: FileInputFormat.getSplits()

public List<InputSplit> getSplits(JobContext job
                                    ) throws IOException {
    long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
    long maxSize = getMaxSplitSize(job);

    // generate splits
    List<InputSplit> splits = new ArrayList<InputSplit>();
    List<FileStatus>files = listStatus(job);
    for (FileStatus file: files) {
      Path path = file.getPath();
      FileSystem fs = path.getFileSystem(job.getConfiguration());
      long length = file.getLen();
      BlockLocation[] blkLocations = fs.getFileBlockLocations(file, 0, length);
      if ((length != 0) && isSplitable(job, path)) {
        long blockSize = file.getBlockSize();
        long splitSize = computeSplitSize(blockSize, minSize, maxSize); // this is the key computation; see Source 6 below

        long bytesRemaining = length;
        while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
          int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
          splits.add(new FileSplit(path, length-bytesRemaining, splitSize,
                                   blkLocations[blkIndex].getHosts()));
          bytesRemaining -= splitSize;
        }
       
        if (bytesRemaining != 0) {
          splits.add(new FileSplit(path, length-bytesRemaining, bytesRemaining,
                     blkLocations[blkLocations.length-1].getHosts()));
        }
      } else if (length != 0) {
        splits.add(new FileSplit(path, 0, length, blkLocations[0].getHosts()));
      } else {
        //Create empty hosts array for zero length files
        splits.add(new FileSplit(path, 0, length, new String[0]));
      }
    }
   
    // Save the number of input files in the job-conf
    job.getConfiguration().setLong(NUM_INPUT_FILES, files.size());

    LOG.debug("Total # of splits: " + splits.size());
    return splits;
  }
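
To make the loop above concrete, here is a small self-contained sketch that replays its arithmetic. SPLIT_SLOP is 1.1 in this version of the source; the file and block sizes below are made-up example values:

public class SplitMath {
  private static final double SPLIT_SLOP = 1.1;  // same constant as in FileInputFormat

  // Mirrors computeSplitSize(), shown next as Source 6.
  static long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }

  // Counts the splits the loop in getSplits() would create for one file.
  static int countSplits(long length, long blockSize, long minSize, long maxSize) {
    long splitSize = computeSplitSize(blockSize, minSize, maxSize);
    int splits = 0;
    long bytesRemaining = length;
    while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
      splits++;
      bytesRemaining -= splitSize;
    }
    if (bytesRemaining != 0) {
      splits++;                                  // the (possibly small) tail split
    }
    return splits;
  }

  public static void main(String[] args) {
    long mb = 1024L * 1024L;
    // 65 MB file on a 64 MB block: 65/64 = 1.015 < 1.1, so only ONE split,
    // thanks to the SPLIT_SLOP tolerance.
    System.out.println(countSplits(65 * mb, 64 * mb, 1, Long.MAX_VALUE));  // prints 1
    // 200 MB file on a 64 MB block: 3 full splits plus an 8 MB tail = 4 splits.
    System.out.println(countSplits(200 * mb, 64 * mb, 1, Long.MAX_VALUE)); // prints 4
  }
}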

 

Source 6: computeSplitSize()

  protected long computeSplitSize(long blockSize, long minSize,
                                  long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }

Reading the source above, we can see exactly how the mapred.map.tasks setting bears on the way MapReduce splits its input. The number of splits, and therefore the number of map tasks, is governed by three parameters: dfs.block.size, mapred.map.tasks, and mapred.min.split.size. Note, though, that on the new-API path traced above, computeSplitSize() never consults mapred.map.tasks at all; that parameter only enters the split-size arithmetic on the old-API path (writeOldSplits), through goalSize, as sketched after Source 3. If mapred.map.tasks is not set, the other two parameters decide; and since mapred.min.split.size is usually left unset as well, dfs.block.size becomes the default split size. If mapred.min.split.size is set larger than dfs.block.size, the splits grow beyond the filesystem block size, and the number of map tasks shrinks accordingly.
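
In practice, then, the dependable way to steer the number of maps under the new API is to adjust the split-size bounds rather than mapred.map.tasks. A minimal fragment for the driver shown earlier (the helper methods belong to the new-API FileInputFormat; the byte values are arbitrary examples):

import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Inside the driver, before job.submit():
// force splits of at least 128 MB -> fewer, larger splits, hence fewer maps
FileInputFormat.setMinInputSplitSize(job, 128L * 1024 * 1024);
// or cap splits at 32 MB -> more, smaller splits, hence more maps
FileInputFormat.setMaxInputSplitSize(job, 32L * 1024 * 1024);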