Spark Startup Script Analysis


The usual way to start a cluster is to run the start-all.sh script under SPARK_HOME/sbin, so let's take a look at what is actually going on inside it (it is strongly recommended to brush up on basic shell programming before reading on).
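For the impatient, that boils down to something like the following; the install location /opt/spark is only an illustration, substitute your own SPARK_HOME:

export SPARK_HOME=/opt/spark      # illustrative install location
cd "$SPARK_HOME"
./sbin/start-all.sh               # starts the Master here and a Worker on every host in conf/slaves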

start-all.sh:

This file is the top-level control script: it calls the other scripts in turn to bring the whole cluster up.

#!/usr/bin/env bash
# Licensed to ...                          (license header omitted)
# Start all spark daemons.
# Starts the master on this node.
# Starts a worker on each node specified in conf/slaves

if [ -z "${SPARK_HOME}" ]; then            # if SPARK_HOME is not set, derive it from this script's location
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi

TACHYON_STR=""

while (( "$#" )); do                       # walk through the arguments passed on the command line
case $1 in
    --with-tachyon)                        # should Tachyon be started as well?
      TACHYON_STR="--with-tachyon"
      ;;
  esac
shift
done

# Load the Spark configuration (sources sbin/spark-config.sh)
. "${SPARK_HOME}/sbin/spark-config.sh"

# Start Master (runs start-master.sh to launch the Master process on this node)
"${SPARK_HOME}/sbin"/start-master.sh $TACHYON_STR

# Start Workers (starts a Worker on every slave node)
"${SPARK_HOME}/sbin"/start-slaves.sh $TACHYON_STR
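As the header comment notes, a Worker is started on every host listed in conf/slaves. That file is simply one hostname per line; the names below are made-up examples:

# conf/slaves -- one Worker host per line (example hostnames)
worker-node-01
worker-node-02
worker-node-03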

We can see that it calls spark-config.sh, start-master.sh and start-slaves.sh, and that the command line lets us choose whether to start Tachyon as well.
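Given the argument parsing above, the two ways of invoking it look like this (the --with-tachyon variant assumes a Tachyon distribution is present under SPARK_HOME/tachyon, which start-master.sh checks for later):

./sbin/start-all.sh                   # Spark only (the default)
./sbin/start-all.sh --with-tachyon    # also bootstrap, format and start Tachyon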

Next, let's look at spark-config.sh.

spark-config.sh

# Licensed to ...                          (license header omitted)
# included in all the spark scripts with source command
# should not be executable directly
# also should not be passed any arguments, since we need original $*
#   (note: $* joins all of the arguments passed to a script into a single string)

# symlink and absolute path should rely on SPARK_HOME to resolve
if [ -z "${SPARK_HOME}" ]; then            # same as the previous script: set SPARK_HOME if it is undefined
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi

export SPARK_CONF_DIR="${SPARK_CONF_DIR:-"${SPARK_HOME}/conf"}"   # set SPARK_CONF_DIR (defaults to ${SPARK_HOME}/conf)

# Add the PySpark classes to the PYTHONPATH:
export PYTHONPATH="${SPARK_HOME}/python:${PYTHONPATH}"
export PYTHONPATH="${SPARK_HOME}/python/lib/py4j-0.9-src.zip:${PYTHONPATH}"
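The remark about needing the original $* is plain bash behaviour rather than anything Spark-specific: "$*" collapses all positional parameters into one string, while "$@" keeps them separate. A tiny stand-alone sketch (not part of Spark) illustrating the difference:

#!/usr/bin/env bash
# demo: "$*" versus "$@"
print_args() {
  echo "as one string (\$*): [$*]"    # all arguments joined into a single string
  for a in "$@"; do                   # "$@" preserves each argument as its own word
    echo "separate argument: [$a]"
  done
}
print_args "first arg" second third
# prints:
#   as one string ($*): [first arg second third]
#   separate argument: [first arg]
#   separate argument: [second]
#   separate argument: [third]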

As the script shows, this file mainly sets a couple of important environment variables (a quick way to verify them is sketched after the directory listing below):
SPARK_CONF_DIR: the directory holding the Spark configuration files
PYTHONPATH: the Python search path, extended with the PySpark sources and the bundled py4j zip

Directory listing: SPARK_HOME contains a python directory, which in turn holds the pyspark package.
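As a quick sanity check (just a sketch, not something the Spark scripts themselves do), you can source spark-config.sh in a shell and echo the variables it exports; /opt/spark below is an illustrative path:

export SPARK_HOME=/opt/spark                  # illustrative install location
. "${SPARK_HOME}/sbin/spark-config.sh"
echo "$SPARK_CONF_DIR"                        # /opt/spark/conf unless you overrode it
echo "$PYTHONPATH"                            # begins with the py4j zip and ${SPARK_HOME}/python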

start-master.sh

#!/usr/bin/env bash
# Starts the master on the machine this script is executed on.

if [ -z "${SPARK_HOME}" ]; then
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi

# NOTE: This exact class name is matched downstream by SparkSubmit.
# Any changes need to be reflected there.
CLASS="org.apache.spark.deploy.master.Master"

if [[ "$@" = *--help ]] || [[ "$@" = *-h ]]; then     # print usage information and exit
  echo "Usage: ./sbin/start-master.sh [options]"
  pattern="Usage:"
  pattern+="\|Using Spark's default log4j profile:"
  pattern+="\|Registered signal handlers for"

  "${SPARK_HOME}"/bin/spark-class $CLASS --help 2>&1 | grep -v "$pattern" 1>&2
  exit 1
fi

ORIGINAL_ARGS="$@"         # the original command-line arguments, i.e. the TACHYON_STR value passed in by start-all.sh

START_TACHYON=false

while (( "$#" )); do
case $1 in
    --with-tachyon)
      if [ ! -e "${SPARK_HOME}"/tachyon/bin/tachyon ]; then
        echo "Error: --with-tachyon specified, but tachyon not found."
        exit -1
      fi
      START_TACHYON=true
      ;;
  esac
shift
done

. "${SPARK_HOME}/sbin/spark-config.sh"       # load and run spark-config.sh
. "${SPARK_HOME}/bin/load-spark-env.sh"      # load and run spark-env.sh (via load-spark-env.sh)

if [ "$SPARK_MASTER_PORT" = "" ]; then
  SPARK_MASTER_PORT=7077                     # default Master RPC port: 7077
fi

if [ "$SPARK_MASTER_IP" = "" ]; then
  SPARK_MASTER_IP=`hostname`                 # default Master address: the machine's hostname
fi

if [ "$SPARK_MASTER_WEBUI_PORT" = "" ]; then
  SPARK_MASTER_WEBUI_PORT=8080               # default web UI port: 8080; change it here (or in spark-env.sh) if 8080 is taken
fi

"${SPARK_HOME}/sbin"/spark-daemon.sh start $CLASS 1 \
  --ip $SPARK_MASTER_IP --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT \
  $ORIGINAL_ARGS

if [ "$START_TACHYON" == "true" ]; then      # if --with-tachyon was given
  "${SPARK_HOME}"/tachyon/bin/tachyon bootstrap-conf $SPARK_MASTER_IP
  "${SPARK_HOME}"/tachyon/bin/tachyon format -s             # format Tachyon
  "${SPARK_HOME}"/tachyon/bin/tachyon-start.sh master       # call tachyon-start.sh
fi
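Since the script only fills in SPARK_MASTER_PORT, SPARK_MASTER_IP and SPARK_MASTER_WEBUI_PORT when they are empty, the cleaner way to change them is to export them from conf/spark-env.sh (which load-spark-env.sh pulls in) rather than editing start-master.sh. A sketch, with illustrative values:

# conf/spark-env.sh
export SPARK_MASTER_IP=10.101.211.128     # bind the Master to a fixed address instead of `hostname`
export SPARK_MASTER_PORT=7077             # Master RPC port (same as the script's default)
export SPARK_MASTER_WEBUI_PORT=8081       # move the web UI off 8080 if that port is taken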

As you can see, everything ultimately goes through the spark-daemon.sh script. When the script above runs in a real environment, the command it issues ends up looking something like this:

xxx/spark-daemon.sh start "org.apache.spark.deploy.master.Master" --ip 10.101.211.128 --port 7077 --webui-port 8081 --with-tachyon 

So let's take a look at what this spark-daemon.sh script actually does.

spark-daemon.sh

#!/usr/bin/env bash
#
# Licensed to ...                          (license header omitted)
# Runs a Spark command as a daemon.
#
# Environment Variables
#
#   SPARK_CONF_DIR  Alternate conf dir. Default is ${SPARK_HOME}/conf.
#   SPARK_LOG_DIR   Where log files are stored. ${SPARK_HOME}/logs by default.
#   SPARK_MASTER    host:path where spark code should be rsync'd from
#   SPARK_PID_DIR   The pid files are stored. /tmp by default.
#   SPARK_IDENT_STRING   A string representing this instance of spark. $USER by default
#                        (i.e. the login name of the user running the script)
#   SPARK_NICENESS  The scheduling priority for daemons. Defaults to 0.
##

usage="Usage: spark-daemon.sh [--config <conf-dir>] (start|stop|submit|status) <spark-command> <spark-instance-number> <args...>"

# if no args specified, show usage
if [ $# -le 1 ]; then
  echo $usage
  exit 1
fi

if [ -z "${SPARK_HOME}" ]; then
  export SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi

. "${SPARK_HOME}/sbin/spark-config.sh"

# get arguments

# Check if --config is passed as an argument. It is an optional parameter.
# Exit if the argument is not a directory.
if [ "$1" == "--config" ]
then
  shift
  conf_dir="$1"
  if [ ! -d "$conf_dir" ]             # make sure the value after --config really is a directory
  then
    echo "ERROR : $conf_dir is not a directory"
    echo $usage
    exit 1
  else
    export SPARK_CONF_DIR="$conf_dir"
  fi
  shift
fi

option=$1
shift
command=$1
shift
instance=$1
shift

spark_rotate_log ()
{
    log=$1;
    num=5;
    if [ -n "$2" ]; then
    num=$2
    fi
    if [ -f "$log" ]; then # rotate logs
    while [ $num -gt 1 ]; do
        prev=`expr $num - 1`
        [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
        num=$prev
    done
    mv "$log" "$log.$num";
    fi
}

. "${SPARK_HOME}/bin/load-spark-env.sh"

if [ "$SPARK_IDENT_STRING" = "" ]; then
  export SPARK_IDENT_STRING="$USER"
fi

export SPARK_PRINT_LAUNCH_COMMAND="1"

# get log directory
if [ "$SPARK_LOG_DIR" = "" ]; then          # default the log directory if it is not set
  export SPARK_LOG_DIR="${SPARK_HOME}/logs"
fi
mkdir -p "$SPARK_LOG_DIR"                   # create the log directory
touch "$SPARK_LOG_DIR"/.spark_test > /dev/null 2>&1
TEST_LOG_DIR=$?
if [ "${TEST_LOG_DIR}" = "0" ]; then
  rm -f "$SPARK_LOG_DIR"/.spark_test
else
  chown "$SPARK_IDENT_STRING" "$SPARK_LOG_DIR"
fi

if [ "$SPARK_PID_DIR" = "" ]; then          # default the pid directory if it is not set
  SPARK_PID_DIR=/tmp
fi

# some variables
log="$SPARK_LOG_DIR/spark-$SPARK_IDENT_STRING-$command-$instance-$HOSTNAME.out"
pid="$SPARK_PID_DIR/spark-$SPARK_IDENT_STRING-$command-$instance.pid"

# Set default scheduling priority
if [ "$SPARK_NICENESS" = "" ]; then
    export SPARK_NICENESS=0
fi

run_command() {
  mode="$1"
  shift

  mkdir -p "$SPARK_PID_DIR"

  if [ -f "$pid" ]; then
    TARGET_ID="$(cat "$pid")"
    if [[ $(ps -p "$TARGET_ID" -o comm=) =~ "java" ]]; then
      echo "$command running as process $TARGET_ID.  Stop it first."
      exit 1
    fi
  fi

  if [ "$SPARK_MASTER" != "" ]; then
    echo rsync from "$SPARK_MASTER"
    rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' "$SPARK_MASTER/" "${SPARK_HOME}"
  fi

  spark_rotate_log "$log"
  echo "starting $command, logging to $log"

  case "$mode" in

    (class)
      nohup nice -n "$SPARK_NICENESS" "${SPARK_HOME}"/bin/spark-class $command "$@" >> "$log" 2>&1 < /dev/null &
      newpid="$!"
      ;;

    (submit)
      nohup nice -n "$SPARK_NICENESS" "${SPARK_HOME}"/bin/spark-submit --class $command "$@" >> "$log" 2>&1 < /dev/null &
      newpid="$!"
      ;;

    (*)
      echo "unknown mode: $mode"
      exit 1
      ;;

  esac

  echo "$newpid" > "$pid"

  sleep 2
  # Check if the process has died; in that case we'll tail the log so the user can see
  if [[ ! $(ps -p "$newpid" -o comm=) =~ "java" ]]; then
    echo "failed to launch $command:"
    tail -2 "$log" | sed 's/^/  /'
    echo "full log in $log"
  fi
}

case $option in      # dispatch on the requested action: start, stop, submit or status

  (submit)
    run_command submit "$@"
    ;;

  (start)
    run_command class "$@"
    ;;

  (stop)

    if [ -f $pid ]; then
      TARGET_ID="$(cat "$pid")"
      if [[ $(ps -p "$TARGET_ID" -o comm=) =~ "java" ]]; then
        echo "stopping $command"
        kill "$TARGET_ID" && rm -f "$pid"
      else
        echo "no $command to stop"
      fi
    else
      echo "no $command to stop"
    fi
    ;;

  (status)

    if [ -f $pid ]; then
      TARGET_ID="$(cat "$pid")"
      if [[ $(ps -p "$TARGET_ID" -o comm=) =~ "java" ]]; then
        echo $command is running.
        exit 0
      else
        echo $pid file is present but $command not running
        exit 1
      fi
    else
      echo $command not running.
      exit 2
    fi
    ;;

  (*)
    echo $usage
    exit 1
    ;;

esac
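Putting the dispatch at the bottom together with the usage string at the top, the same script can be run by hand to query or stop a daemon; it locates the process through the pid file under SPARK_PID_DIR. For example, for the Master started above (instance number 1, matching what start-master.sh passes):

./sbin/spark-daemon.sh status org.apache.spark.deploy.master.Master 1   # exit code 0/1/2 as in the status branch
./sbin/spark-daemon.sh stop   org.apache.spark.deploy.master.Master 1   # kills the recorded pid and removes the pid file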