Spark Source Code Analysis: Allocating and Launching the Driver and the Executors


Continuing from the previous post: when the SparkContext object is created, a TaskScheduler object is created as well, and the ClientEndpoint registers the application with the Master by sending a RegisterApplication message. On receiving this message, the Master does the following:
1. Build an ApplicationInfo object
2. Call registerApplication() to add the ApplicationInfo to the waiting queue
3. Call persistenceEngine.addApplication(app) to persist the ApplicationInfo object
4. Send a RegisteredApplication message back to the ClientEndpoint
5. Call schedule() to launch the Driver and the executors
The first three steps were covered in earlier posts.

Now let's start with the schedule() method.

private def schedule(): Unit = {
  if (state != RecoveryState.ALIVE) { return }
  // Drivers take strict precedence over executors
  // Shuffle the worker list so that drivers end up spread evenly across the cluster
  val shuffledWorkers = Random.shuffle(workers) // Randomization helps balance drivers
  // Allocate resources for each waiting driver and launch it on a worker
  for (worker <- shuffledWorkers if worker.state == WorkerState.ALIVE) {
    for (driver <- waitingDrivers) {
      if (worker.memoryFree >= driver.desc.mem && worker.coresFree >= driver.desc.cores) {
        launchDriver(worker, driver)
        waitingDrivers -= driver
      }
    }
  }
  startExecutorsOnWorkers()
}

In schedule(), the Master first finds a worker able to host each waiting Driver process and launches the Driver there, then calls startExecutorsOnWorkers() to launch executor processes on the workers. The logic of launchDriver is the same as that of launchExecutor on a worker, which we cover later, so we go straight to startExecutorsOnWorkers().

private def startExecutorsOnWorkers(): Unit = {
  // Right now this is a very simple FIFO scheduler. We keep trying to fit in the first app
  // in the queue, then the second app, etc.
  // Assign executors to every app that still has unallocated cores
  for (app <- waitingApps if app.coresLeft > 0) {
    // Number of cores each executor of this app needs
    val coresPerExecutor: Option[Int] = app.desc.coresPerExecutor
    // Filter out workers that don't have enough resources to launch an executor
    val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
      .filter(worker => worker.memoryFree >= app.desc.memoryPerExecutorMB &&
        worker.coresFree >= coresPerExecutor.getOrElse(1))
      .sortBy(_.coresFree).reverse
    val assignedCores = scheduleExecutorsOnWorkers(app, usableWorkers, spreadOutApps)

    // Now that we've decided how many cores to allocate on each worker, let's allocate them
    for (pos <- 0 until usableWorkers.length if assignedCores(pos) > 0) {
      // Hand the cores assigned on this worker over to executors
      allocateWorkerResourceToExecutors(
        app, assignedCores(pos), coresPerExecutor, usableWorkers(pos))
    }
  }
}

In this method executors are launched for each waiting application. First, the workers that are able to launch an executor for the app are filtered out; a worker qualifies if (a short sketch follows the list):
1. The worker's free memory is at least the per-executor memory the application requested
2. The worker's free cores are at least the per-executor core count the application requested (if coresPerExecutor was not specified at submission time, this value is taken to be 1)
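To make the filter concrete, here is a minimal runnable sketch of the same predicate. WorkerInfoLite and all the sample numbers are made up for illustration; they are not Spark's own types:

// A minimal sketch of the worker filter in startExecutorsOnWorkers,
// using a hypothetical WorkerInfoLite instead of Spark's WorkerInfo.
case class WorkerInfoLite(name: String, alive: Boolean, coresFree: Int, memoryFree: Int)

object WorkerFilterDemo extends App {
  val workers = Seq(
    WorkerInfoLite("w1", alive = true,  coresFree = 8,  memoryFree = 4096),
    WorkerInfoLite("w2", alive = true,  coresFree = 2,  memoryFree = 512),  // too little memory
    WorkerInfoLite("w3", alive = false, coresFree = 16, memoryFree = 8192), // not ALIVE
    WorkerInfoLite("w4", alive = true,  coresFree = 4,  memoryFree = 2048))

  val memoryPerExecutorMB = 1024               // stands in for app.desc.memoryPerExecutorMB
  val coresPerExecutor: Option[Int] = Some(2)  // stands in for app.desc.coresPerExecutor

  // Same conditions as in startExecutorsOnWorkers, then sort by free cores, descending
  val usableWorkers = workers.filter(_.alive)
    .filter(w => w.memoryFree >= memoryPerExecutorMB &&
      w.coresFree >= coresPerExecutor.getOrElse(1))
    .sortBy(_.coresFree).reverse

  println(usableWorkers.map(_.name).mkString(", ")) // prints: w1, w4
}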

With the qualifying workers in hand, scheduleExecutorsOnWorkers(app, usableWorkers, spreadOutApps) is called to decide how many executors to launch on which workers.

private def scheduleExecutorsOnWorkers(
    app: ApplicationInfo,
    usableWorkers: Array[WorkerInfo],
    spreadOutApps: Boolean): Array[Int] = {
  // Number of cores requested per executor
  val coresPerExecutor = app.desc.coresPerExecutor
  // The minimum number of cores an executor can be given; the actual number of cores
  // on an executor is not necessarily coresPerExecutor, it may be a multiple of it
  val minCoresPerExecutor = coresPerExecutor.getOrElse(1)
  // If coresPerExecutor is not set, at most one executor of this app runs per worker
  val oneExecutorPerWorker = coresPerExecutor.isEmpty
  val memoryPerExecutor = app.desc.memoryPerExecutorMB
  val numUsable = usableWorkers.length
  // Number of cores assigned on each worker
  val assignedCores = new Array[Int](numUsable)
  // Number of executors assigned on each worker
  val assignedExecutors = new Array[Int](numUsable)
  // Cores this app will actually get: the minimum of the app's remaining unallocated
  // cores and the total free cores of all usable workers
  var coresToAssign = math.min(app.coresLeft, usableWorkers.map(_.coresFree).sum)

  /**
   * Decide whether the worker at `pos` can launch an executor for this app:
   * 1. the app must still have cores left to allocate,
   * 2. the worker must have enough free cores,
   * 3. if a new executor would be launched, the worker must also have enough free
   *    memory and the app's executor limit must not be reached,
   * 4. if cores are merely added to an existing executor, only 1 and 2 apply.
   */
  def canLaunchExecutor(pos: Int): Boolean = {
    // The app must still have at least minCoresPerExecutor unallocated cores.
    // If only 1 core is left but coresPerExecutor is 2, there is no way to
    // launch half an executor, so scheduling must stop.
    val keepScheduling = coresToAssign >= minCoresPerExecutor
    // The worker's free cores minus those already assigned here must suffice
    val enoughCores = usableWorkers(pos).coresFree - assignedCores(pos) >= minCoresPerExecutor

    // If we allow multiple executors per worker, then we can always launch new executors.
    // Otherwise, if there is already an executor on this worker, just give it more cores.
    val launchingNewExecutor = !oneExecutorPerWorker || assignedExecutors(pos) == 0
    // The case where a new executor is launched on the worker
    if (launchingNewExecutor) {
      // Memory already assigned on this worker
      val assignedMemory = assignedExecutors(pos) * memoryPerExecutor
      // The worker's remaining free memory must cover one more executor
      val enoughMemory = usableWorkers(pos).memoryFree - assignedMemory >= memoryPerExecutor
      val underLimit = assignedExecutors.sum + app.executors.size < app.executorLimit
      keepScheduling && enoughCores && enoughMemory && underLimit
    } else {
      // We're adding cores to an existing executor, so no need
      // to check memory and executor limits
      keepScheduling && enoughCores
    }
  }

  // canLaunchExecutor boils down to checking the worker's cores and memory
  var freeWorkers = (0 until numUsable).filter(canLaunchExecutor)
  // Start assigning executors on the free workers
  while (freeWorkers.nonEmpty) {
    freeWorkers.foreach { pos =>
      var keepScheduling = true
      // Check once more that an executor can be launched here
      while (keepScheduling && canLaunchExecutor(pos)) {
        // Update the app's remaining cores and the cores assigned on this worker
        coresToAssign -= minCoresPerExecutor
        assignedCores(pos) += minCoresPerExecutor

        // If only one executor per worker is allowed, the executor count on this
        // worker stays at 1; otherwise one more executor is added
        if (oneExecutorPerWorker) {
          assignedExecutors(pos) = 1
        } else {
          assignedExecutors(pos) += 1
        }

        // With spreadOutApps we want to spread executors over as many workers
        // as possible, so keepScheduling is set to false to end the allocation
        // on this worker and move on to the next one.
        // Without spreadOutApps we keep looping here, allocating on this worker
        // until its resources no longer allow another executor.
        if (spreadOutApps) {
          keepScheduling = false
        }
      }
    }
    // Re-filter: once nothing more can be allocated, canLaunchExecutor returns
    // false everywhere, freeWorkers becomes empty, and the outer loop ends
    freeWorkers = freeWorkers.filter(canLaunchExecutor)
  }
  // Return the number of cores assigned on each worker
  assignedCores
}

This method uses canLaunchExecutor to pick out the workers that can launch an executor for the current app. canLaunchExecutor checks the following:
1. The app must still have cores left to allocate, and the number left must be at least the minimum number of cores an executor can take (see the code comment for the reason)
2. The worker must still have enough free cores for one executor
It then checks whether the app allows more than one of its executors on a single worker, i.e. whether a new executor is being launched or cores are being added to an existing one.

If an executor of this app has already been launched on this worker and we are merely giving that executor another coresPerExecutor cores, then keepScheduling && enoughCores is all that needs to hold and memory is not checked; otherwise the worker must additionally have enough free memory to launch a new executor (and the app's executor limit must not be exceeded).

After the workers usable for launching executors have been filtered out, resources on them are assigned. The assignment is recorded in the assignedCores array, whose length equals that of usableWorkers: entry i is the number of cores assigned on worker i. If coresPerExecutor is set, the worker launches assignedCores(i) / coresPerExecutor executors; if it is not set (oneExecutorPerWorker), a single executor grabs all the cores assigned on that worker. For example, with coresPerExecutor = 1, assignedCores = {1, 2, 3} means 1 core and 1 executor on the first worker, 2 cores and 2 executors on the second worker, and so on.

Note the spreadOutApps parameter at the end. When it is true, keepScheduling is set to false, which exits the inner while loop after a single round of assignment, so the foreach moves on to the next worker; each worker's inner loop likewise runs only once before allocation proceeds to the next. In other words, with spreadOutApps the executors are spread over as many workers as possible; without it, executors are packed onto as few workers as possible.

The method finally returns the assignment information in assignedCores (how many cores, and hence how many executors, go on each worker).
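To see how spreadOutApps changes the result, here is a self-contained, simplified re-implementation of the core-assignment loop. It keeps only the core-counting logic (the memory and executor-limit checks are dropped for brevity), and the Worker type and sample numbers are made up:

// A simplified sketch of scheduleExecutorsOnWorkers: core counting only,
// no memory or executor-limit checks.
object ScheduleDemo extends App {
  case class Worker(coresFree: Int)

  def assign(coresLeft: Int, workers: Array[Worker],
             coresPerExecutor: Option[Int], spreadOut: Boolean): Array[Int] = {
    val minCores = coresPerExecutor.getOrElse(1)
    val assigned = new Array[Int](workers.length)
    var toAssign = math.min(coresLeft, workers.map(_.coresFree).sum)

    // Same core conditions as canLaunchExecutor
    def canLaunch(pos: Int): Boolean =
      toAssign >= minCores && workers(pos).coresFree - assigned(pos) >= minCores

    var free = workers.indices.filter(canLaunch)
    while (free.nonEmpty) {
      free.foreach { pos =>
        var keepScheduling = true
        while (keepScheduling && canLaunch(pos)) {
          toAssign -= minCores
          assigned(pos) += minCores
          if (spreadOut) keepScheduling = false // move on to the next worker
        }
      }
      free = free.filter(canLaunch)
    }
    assigned
  }

  val workers = Array(Worker(4), Worker(4), Worker(4))
  // 6 cores to place, 1 core per executor:
  println(assign(6, workers, Some(1), spreadOut = true).mkString(","))  // 2,2,2
  println(assign(6, workers, Some(1), spreadOut = false).mkString(",")) // 4,2,0
}

With spreadOut = true the 6 cores are dealt out round-robin across the three workers; with spreadOut = false the first worker is filled up before any core lands on the second.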

Once scheduleExecutorsOnWorkers returns, control goes back to startExecutorsOnWorkers, which launches executors on the corresponding workers according to the computed assignment, calling allocateWorkerResourceToExecutors() for every worker that was assigned cores.

private def allocateWorkerResourceToExecutors(
    app: ApplicationInfo,
    // Cores assigned on this worker
    assignedCores: Int,
    // Cores each executor needs
    coresPerExecutor: Option[Int],
    worker: WorkerInfo): Unit = {
  // If the number of cores per executor is specified, we divide the cores assigned
  // to this worker evenly among the executors with no remainder.
  // Otherwise, we launch a single executor that grabs all the assignedCores on this worker.
  // Number of executors to launch on this worker
  val numExecutors = coresPerExecutor.map { assignedCores / _ }.getOrElse(1)
  // Cores given to each executor launched on this worker
  val coresToAssign = coresPerExecutor.getOrElse(assignedCores)
  for (i <- 1 to numExecutors) {
    // Register a new executor with the app
    val exec = app.addExecutor(worker, coresToAssign)
    // Launch the executor
    launchExecutor(worker, exec)
    // Mark the application as RUNNING
    app.state = ApplicationState.RUNNING
  }
}

First the number of executors to launch on this worker is computed as assignedCores / coresPerExecutor. If coresPerExecutor was not specified, then by the convention established in scheduleExecutorsOnWorkers only one executor of this app runs per worker, so numExecutors is 1.

Next the number of cores per executor is computed, which again depends on whether coresPerExecutor was set: if it was, each executor gets coresPerExecutor cores; if not, a single executor is launched on this worker and it gets all the cores that were assigned to the worker.
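These two computations are easy to check in isolation; here is a minimal sketch with made-up numbers:

// A minimal sketch of the numExecutors / coresToAssign arithmetic in
// allocateWorkerResourceToExecutors; the sample values are made up.
object AllocateDemo extends App {
  def plan(assignedCores: Int, coresPerExecutor: Option[Int]): (Int, Int) = {
    val numExecutors = coresPerExecutor.map(assignedCores / _).getOrElse(1)
    val coresToAssign = coresPerExecutor.getOrElse(assignedCores)
    (numExecutors, coresToAssign)
  }

  println(plan(8, Some(2))) // (4,2): four executors with 2 cores each
  println(plan(8, None))    // (1,8): one executor grabbing all 8 cores
}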

Finally, launchExecutor(worker, exec) is called to launch the executor on the worker. launchExecutor sends a LaunchExecutor message to the worker; the Worker handles that message with the following code:

case LaunchExecutor(masterUrl, appId, execId, appDesc, cores_, memory_) =>
  if (masterUrl != activeMasterUrl) {
    logWarning("Invalid Master (" + masterUrl + ") attempted to launch executor.")
  } else {
    try {
      logInfo("Asked to launch executor %s/%d for %s".format(appId, execId, appDesc.name))

      // Create the executor's working directory
      val executorDir = new File(workDir, appId + "/" + execId)
      if (!executorDir.mkdirs()) {
        throw new IOException("Failed to create directory " + executorDir)
      }

      // Create local dirs for the executor. These are passed to the executor via the
      // SPARK_EXECUTOR_DIRS environment variable, and deleted by the Worker when the
      // application finishes.
      val appLocalDirs = appDirectories.get(appId).getOrElse {
        Utils.getOrCreateLocalRootDirs(conf).map { dir =>
          val appDir = Utils.createDirectory(dir, namePrefix = "executor")
          Utils.chmod700(appDir)
          appDir.getAbsolutePath()
        }.toSeq
      }
      appDirectories(appId) = appLocalDirs
      val manager = new ExecutorRunner(
        appId,
        execId,
        appDesc.copy(command = Worker.maybeUpdateSSLSettings(appDesc.command, conf)),
        cores_,
        memory_,
        self,
        workerId,
        host,
        webUi.boundPort,
        publicAddress,
        sparkHome,
        executorDir,
        workerUri,
        conf,
        appLocalDirs, ExecutorState.LOADING)
      // executors is a HashMap[String, ExecutorRunner]; since one worker may run
      // several executors of the same app, the key is appId + "/" + execId
      executors(appId + "/" + execId) = manager
      manager.start()
      // Update the cores and memory already in use on this worker
      coresUsed += cores_
      memoryUsed += memory_
      // Tell the master that the executor's state has changed
      sendToMaster(ExecutorStateChanged(appId, execId, manager.state, None, None))
    } catch {
      case e: Exception => {
        logError(s"Failed to launch executor $appId/$execId for ${appDesc.name}.", e)
        if (executors.contains(appId + "/" + execId)) {
          executors(appId + "/" + execId).kill()
          executors -= appId + "/" + execId
        }
        sendToMaster(ExecutorStateChanged(appId, execId, ExecutorState.FAILED,
          Some(e.toString), None))
      }
    }
  }

The Worker first checks that the message comes from the active master. If so, it creates the executor's working directory and local directories, then creates an ExecutorRunner object and calls its start method, whose source is as follows:

private[worker] def start() {
  workerThread = new Thread("ExecutorRunner for " + fullId) {
    override def run() { fetchAndRunExecutor() }
  }
  workerThread.start()
  // Shutdown hook that kills actors on shutdown.
  shutdownHook = ShutdownHookManager.addShutdownHook { () =>
    killProcess(Some("Worker shutting down")) }
}

This method creates and starts a thread; when the thread starts, it runs fetchAndRunExecutor().

private def fetchAndRunExecutor() {
  try {
    // Launch the process
    // Build a process builder from the application's command and environment
    val builder = CommandUtils.buildProcessBuilder(appDesc.command, new SecurityManager(conf),
      memory, sparkHome.getAbsolutePath, substituteVariables)
    val command = builder.command()
    logInfo("Launch command: " + command.mkString("\"", "\" \"", "\""))

    // Set the working directory and executor-local dirs on the builder
    builder.directory(executorDir)
    builder.environment.put("SPARK_EXECUTOR_DIRS", appLocalDirs.mkString(File.pathSeparator))
    // In case we are running this from within the Spark Shell, avoid creating a "scala"
    // parent process for the executor command
    builder.environment.put("SPARK_LAUNCH_WITH_SCALA", "0")

    // Add webUI log urls
    val baseUrl =
      s"http://$publicAddress:$webUiPort/logPage/?appId=$appId&executorId=$execId&logType="
    builder.environment.put("SPARK_LOG_URL_STDERR", s"${baseUrl}stderr")
    builder.environment.put("SPARK_LOG_URL_STDOUT", s"${baseUrl}stdout")

    // Start the builder, creating the CoarseGrainedExecutorBackend process
    process = builder.start()
    val header = "Spark Executor Command: %s\n%s\n\n".format(
      command.mkString("\"", "\" \"", "\""), "=" * 40)

    // Redirect its stdout and stderr to files
    val stdout = new File(executorDir, "stdout")
    stdoutAppender = FileAppender(process.getInputStream, stdout, conf)

    val stderr = new File(executorDir, "stderr")
    Files.write(header, stderr, UTF_8)
    stderrAppender = FileAppender(process.getErrorStream, stderr, conf)

    // Wait for it to exit; executor may exit with code 0 (when driver instructs it to shutdown)
    // or with nonzero exit code. Either way, report the exit status back to the worker.
    val exitCode = process.waitFor()
    state = ExecutorState.EXITED
    val message = "Command exited with code " + exitCode
    // Send an ExecutorStateChanged message to the worker
    worker.send(ExecutorStateChanged(appId, execId, state, Some(message), Some(exitCode)))
  } catch {
    case interrupted: InterruptedException => {
      logInfo("Runner thread for executor " + fullId + " interrupted")
      state = ExecutorState.KILLED
      killProcess(None)
    }
    case e: Exception => {
      logError("Error running executor", e)
      state = ExecutorState.FAILED
      killProcess(Some(e.toString))
    }
  }
}

This method first builds a process builder from the application's command and the environment variables, then calls builder.start(), which launches the CoarseGrainedExecutorBackend process. The process's stdout and stderr are redirected to files in the executor directory. process.waitFor() then blocks the runner thread until the CoarseGrainedExecutorBackend process exits, after which the remaining code in the thread runs: it sends an ExecutorStateChanged message to the Worker, telling it that this executor process has exited. The Worker handles that message by calling handleExecutorStateChanged.
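The launch-and-wait pattern here is plain java.lang.ProcessBuilder usage. This minimal sketch reproduces it with a trivial /bin/echo command in place of the real CoarseGrainedExecutorBackend launch command, and skips the FileAppender redirection:

// A minimal sketch of ExecutorRunner's launch-and-wait pattern; the command
// and environment value are stand-ins, not the real executor launch command.
import java.io.File

object LaunchAndWaitDemo extends App {
  val runner = new Thread("demo-runner") {
    override def run(): Unit = {
      val builder = new ProcessBuilder("/bin/echo", "pretend executor")
      builder.directory(new File("."))                        // cf. builder.directory(executorDir)
      builder.environment.put("SPARK_EXECUTOR_DIRS", "/tmp")  // cf. the real env setup
      val process = builder.start()    // the child process is launched here
      val exitCode = process.waitFor() // blocks this thread until the process exits
      // At this point the real code sends ExecutorStateChanged(EXITED) to the Worker
      println(s"Command exited with code $exitCode")
    }
  }
  runner.start() // like ExecutorRunner.start(): the wait happens off the main thread
  runner.join()
}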

// The executorStateChanged parameter of handleExecutorStateChanged is simply a case
// class wrapping the five fields carried by the message
case executorStateChanged @ ExecutorStateChanged(appId, execId, state, message, exitStatus) =>
  handleExecutorStateChanged(executorStateChanged)

private[worker] def handleExecutorStateChanged(executorStateChanged: ExecutorStateChanged):
    Unit = {
  // First forward the ExecutorStateChanged message to the Master
  sendToMaster(executorStateChanged)
  val state = executorStateChanged.state
  // isFinished(state) == Seq(KILLED, FAILED, LOST, EXITED).contains(state)
  if (ExecutorState.isFinished(state)) {
    val appId = executorStateChanged.appId
    // fullId has the form appId + "/" + execId
    val fullId = appId + "/" + executorStateChanged.execId
    val message = executorStateChanged.message
    val exitStatus = executorStateChanged.exitStatus
    executors.get(fullId) match {
      case Some(executor) =>
        logInfo("Executor " + fullId + " finished with state " + state +
          message.map(" message " + _).getOrElse("") +
          exitStatus.map(" exitStatus " + _).getOrElse(""))
        // Remove the ExecutorRunner for this fullId from the map
        executors -= fullId
        // Record the finished executor in finishedExecutors
        finishedExecutors(fullId) = executor
        trimFinishedExecutorsIfNecessary()
        // Reclaim the worker's cores and memory
        coresUsed -= executor.cores
        memoryUsed -= executor.memory
      case None =>
        logInfo("Unknown Executor " + fullId + " finished with state " + state +
          message.map(" message " + _).getOrElse("") +
          exitStatus.map(" exitStatus " + _).getOrElse(""))
    }
    // An exiting executor may mean the whole application has finished,
    // in which case the application must be cleaned up
    maybeCleanupApplication(appId)
  }
}

Here, the Worker first forwards the ExecutorStateChanged message to the Master (we will look at how the Master handles it shortly). Then, if the executor is in a finished state, a fullId is constructed and the corresponding ExecutorRunner is looked up in executors (a HashMap[String, ExecutorRunner]), removed from that map, and recorded in the finishedExecutors map; the worker's cores and memory are then reclaimed. Since a finished executor may well mean the application has ended, maybeCleanupApplication(appId) is called to clean it up if so.

private def maybeCleanupApplication(id: String): Unit = {
  val shouldCleanup = finishedApps.contains(id) && !executors.values.exists(_.appId == id)
  if (shouldCleanup) {
    finishedApps -= id
    appDirectories.remove(id).foreach { dirList =>
      logInfo(s"Cleaning up local directories for application $id")
      dirList.foreach { dir =>
        Utils.deleteRecursively(new File(dir))
      }
    }
    shuffleService.applicationRemoved(id)
  }
}

As long as finishedApps contains this appId and no ExecutorRunner for it remains in the executors map, the app is considered finished, and its local directories and shuffle state are cleaned up.

Now let's see how the Master handles the ExecutorStateChanged message that handleExecutorStateChanged forwarded to it.

case ExecutorStateChanged(appId, execId, state, message, exitStatus) => {
  // Look up the application by appId, then the executor within it by execId
  val execOption = idToApp.get(appId).flatMap(app => app.executors.get(execId))
  execOption match {
    case Some(exec) => {
      val appInfo = idToApp(appId)
      exec.state = state
      if (state == ExecutorState.RUNNING) { appInfo.resetRetryCount() }
      // Forward an ExecutorUpdated message to the driver
      exec.application.driver.send(ExecutorUpdated(execId, state, message, exitStatus))
      // If the executor has finished, remove it from the appInfo and from the worker
      if (ExecutorState.isFinished(state)) {
        if (!appInfo.isFinished) {
          appInfo.removeExecutor(exec)
        }
        exec.worker.removeExecutor(exec)

        val normalExit = exitStatus == Some(0)
        // Only retry certain number of times so we don't go into an infinite loop.
        // On abnormal exit, decide between rescheduling and removing the application
        if (!normalExit) {
          // If the app has not yet hit its retry limit, reschedule to allocate new executors
          if (appInfo.incrementRetryCount() < ApplicationState.MAX_NUM_RETRY) {
            schedule()
          } else {
            val execs = appInfo.executors.values
            // If none of the app's executors is still running, remove the
            // application and declare it failed
            if (!execs.exists(_.state == ExecutorState.RUNNING)) {
              logError(s"Application ${appInfo.desc.name} with ID ${appInfo.id} failed " +
                s"${appInfo.retryCount} times; removing it")
              removeApplication(appInfo, ApplicationState.FAILED)
            }
          }
        }
      }
    }
    case None =>
      logWarning(s"Got status update for unknown executor $appId/$execId")
  }
}
1. Look up the ApplicationInfo by appId, then the corresponding ExecutorDesc within it by execId
2. In this flow the state sent over is a finished one (EXITED), so the executor is removed from the appInfo and from the worker
3. For a finished state, the Master checks whether the exit was normal; if not, it calls schedule() again to try to allocate new executors for the app
4. But if the retry count has already reached the upper bound, the Master checks whether the appInfo still has any running executor; if none is left, the application is declared failed and removed (a condensed sketch of this decision follows the list)
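The decision in that handler boils down to a small function. The following sketch is a condensed restatement of it; Decision and onExecutorFinished are made-up names for illustration, and in the real code the retry bound is ApplicationState.MAX_NUM_RETRY:

// A condensed sketch of the retry decision in the Master's
// ExecutorStateChanged handler.
object RetryDemo extends App {
  sealed trait Decision
  case object DoNothing extends Decision
  case object Reschedule extends Decision
  case object RemoveApplication extends Decision

  def onExecutorFinished(exitStatus: Option[Int], retryCount: Int,
                         maxRetries: Int, anyRunning: Boolean): Decision = {
    val normalExit = exitStatus == Some(0)
    if (normalExit) DoNothing                    // normal exit: no retry needed
    else if (retryCount < maxRetries) Reschedule // abnormal, under the limit: schedule()
    else if (!anyRunning) RemoveApplication      // limit hit, nothing running: app failed
    else DoNothing                               // limit hit but executors still running
  }

  println(onExecutorFinished(Some(1), retryCount = 3,  maxRetries = 10, anyRunning = false)) // Reschedule
  println(onExecutorFinished(Some(1), retryCount = 10, maxRetries = 10, anyRunning = false)) // RemoveApplication
}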

This completes the analysis of the whole process of launching the Driver and the executors on a Spark cluster running in standalone mode.

To summarize:
case RegisterApplication:
1. Build an ApplicationInfo object
2. registerApplication() (add the application to the waiting queue)
3. Persist the application
4. Send a RegisteredApplication message to the ClientEndpoint; on receiving it, the ClientEndpoint sets registered = true
5. schedule()

In schedule():
1. Shuffle the usable workers, and check whether a worker's free memory and cores can accommodate a waiting driver; once a suitable worker is found, launchDriver on it
2. Call startExecutorsOnWorkers

startExecutorsOnWorkers
1. Filter out the usable workers, based on free memory and cores
2. Call scheduleExecutorsOnWorkers to compute the assignment:
   - determine how many cores the app still has unallocated
   - filter the workers that can launch an executor for it
   - record the per-worker assignment in assignedCores
3. With that assignment, call allocateWorkerResourceToExecutors() to allocate resources and launch executors on each worker
allocateWorkerResourceToExecutors()
1. Compute how many executors to launch on this worker
2. Compute how many cores each executor on this worker gets
3. Call launchExecutor for each executor to be launched

[Master]
launchExecutor
launchExecutor does two things. First, it sends a LaunchExecutor message to the Worker, which handles it as follows:
1. Create an ExecutorRunner
2. Call the ExecutorRunner's start method; start spawns a thread that launches the executor process and blocks waiting for it to exit, while the calling thread, right after start returns, sends the Master an ExecutorStateChanged message with state LOADING
3. Inside that thread, fetchAndRunExecutor launches a CoarseGrainedExecutorBackend process
4. Because process.waitFor() is called, the runner thread blocks until that process exits
5. When it exits, an ExecutorStateChanged message is sent to the Worker
6. On receiving this message the Worker calls handleExecutorStateChanged, which forwards the message to the Master; since the state sent in step 5 is EXITED, a finished state, the Worker also checks whether the application has ended and, if so, cleans up its working directories via maybeCleanupApplication
7. When the Master receives the message with a finished state, it checks whether the executor exited abnormally: if so, it reschedules new executors for the app (up to the retry limit); otherwise it simply removes the executor and reclaims its resources

Second, launchExecutor sends an ExecutorAdded message to the driver-side ClientEndpoint. The ClientEndpoint handles it with sendToMaster(ExecutorStateChanged(appId, id, ExecutorState.RUNNING, None, None)), i.e. it reports back to the Master that the executor is now RUNNING.