Spark Official Documentation (3): Standalone Mode


Spark version: 1.6.2

In addition to running on the Mesos and YARN cluster managers, Spark also provides a simple standalone deploy mode. You can build a standalone cluster either by starting a master and several workers by hand, or by using the launch scripts that Spark ships with (described in detail below). Standalone mode can also run on a single machine.

Installing Spark Standalone on a Cluster

Install the same version of Spark on every node of the cluster; you can either download an official Spark release or build it yourself.

Starting a Cluster Manually

You can start a standalone master with the following script:

    ./sbin/start-master.sh

Once started, the master prints out a spark://HOST:PORT URL, which workers use to connect to it. You can also find this URL on the master's web UI, which is http://localhost:8080 by default.
Similarly, start one or more workers and connect them to the master as follows:

    ./sbin/start-slave.sh <master-spark-URL>

Once a worker has started, you can see it on the web UI together with its CPU and memory information. Finally, the following configuration arguments can be passed to the master and worker:

  • -h HOST, --host HOST: Hostname to listen on
  • -p PORT, --port PORT: Port for the service to listen on (default: 7077 for the master)
  • --webui-port PORT: Port for the web UI (default: 8080 for the master, 8081 for the worker)
  • -c CORES, --cores CORES: Total CPU cores to allow Spark applications to use (worker only)
  • -m MEM, --memory MEM: Total amount of memory to allow Spark applications to use (default: the machine's RAM minus 1 GB; worker only)
  • -d DIR, --work-dir DIR: Directory to use for scratch space and job output logs (default: SPARK_HOME/work; worker only)
  • --properties-file FILE: Path to a custom Spark properties file to load (default: conf/spark-defaults.conf)
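
For instance, a worker might be started by hand with explicit resource limits roughly like this; the master hostname and the values are placeholders:

    # start a worker, capping what Spark applications may use on this machine
    ./sbin/start-slave.sh spark://master-host:7077 --cores 4 --memory 8g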

Cluster Launch Scripts

Before launching a standalone cluster with the scripts, create a file called slaves in the conf directory and list the hostname of every worker in the cluster, one per line; a minimal example is sketched below. If this file does not exist, the launch scripts default to a single machine. The launch scripts start the workers from the master machine over ssh, so password-less ssh access needs to be configured. Alternatively, you can set the SPARK_SSH_FOREGROUND variable (yes or no); if it is set to yes, you will be prompted for each worker's password serially in the foreground.
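
A minimal conf/slaves file might look like the following; the hostnames are placeholders:

    # conf/slaves: one worker hostname per line
    worker-node-1
    worker-node-2
    worker-node-3
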
Once you have set up the slaves file, you can launch or stop your cluster with the following scripts in SPARK_HOME/sbin:

  • sbin/start-master.sh - Starts a master instance on the machine the script is executed on.
  • sbin/start-slaves.sh - Starts a slave instance on each machine specified in the conf/slaves file.
  • sbin/start-slave.sh - Starts a slave instance on the machine the script is executed on.
  • sbin/start-all.sh - Starts both a master and a number of slaves as described above.
  • sbin/stop-master.sh - Stops the master that was started via the sbin/start-master.sh script.
  • sbin/stop-slaves.sh - Stops all slave instances on the machines specified in the conf/slaves file.
  • sbin/stop-all.sh - Stops both the master and the slaves as described above.

Note that these scripts must be executed on the master node. You can further configure the cluster by setting environment variables in conf/spark-env.sh and copying that file to all worker nodes. The following settings are available:

  • SPARK_MASTER_IP: Bind the master to a specific IP address
  • SPARK_MASTER_PORT: Start the master on a specific port (default: 7077)
  • SPARK_MASTER_WEBUI_PORT: Port for the master web UI (default: 8080)
  • SPARK_MASTER_OPTS: Configuration properties that apply only to the master, in the form "-Dkey=value"
  • SPARK_LOCAL_DIRS: Directories to use for Spark scratch space, including output files and RDD data stored on disk; several directories can be listed, separated by commas
  • SPARK_WORKER_CORES: Total number of cores to allow Spark applications to use on the machine (default: all available cores)
  • SPARK_WORKER_MEMORY: Total amount of memory to allow Spark applications to use on the machine, e.g. 1000m, 2g (default: total memory minus 1 GB); note that each application's individual memory is configured using its spark.executor.memory property
  • SPARK_WORKER_PORT: Start the Spark worker on a specific port (default: random)
  • SPARK_WORKER_WEBUI_PORT: Port for the worker web UI (default: 8081)
  • SPARK_WORKER_INSTANCES: Number of worker instances to run on each machine (default: 1). If you set this higher than 1, also set SPARK_WORKER_CORES explicitly, otherwise each worker will try to use all available cores
  • SPARK_WORKER_DIR: Directory to run applications in, which will include both logs and scratch space (default: SPARK_HOME/work)
  • SPARK_WORKER_OPTS: Configuration properties that apply only to the worker, in the form "-Dx=y" (default: none); see below for a list of possible options
  • SPARK_DAEMON_MEMORY: Memory to allocate to the Spark master and worker daemons themselves (default: 1g)
  • SPARK_DAEMON_JAVA_OPTS: JVM options for the Spark master and worker daemons themselves, in the form "-Dx=y" (default: none)
  • SPARK_PUBLIC_DNS: The public DNS name of the Spark master and workers (default: none)
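
As an illustration, a conf/spark-env.sh that pins down resources on each node might look like this; all concrete addresses, paths and sizes are placeholders:

    # conf/spark-env.sh (copied to the master and every worker)
    SPARK_MASTER_IP=192.168.1.100                  # address the master binds to
    SPARK_WORKER_CORES=4                           # cores Spark applications may use per worker
    SPARK_WORKER_MEMORY=8g                         # memory Spark applications may use per worker
    SPARK_LOCAL_DIRS=/data1/spark,/data2/spark     # comma-separated scratch directories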

SPARK_MASTER_OPTS supports the following system properties:

  • spark.deploy.retainedApplications (default: 200): Maximum number of completed applications to display
  • spark.deploy.retainedDrivers (default: 200): Maximum number of completed drivers to display
  • spark.deploy.spreadOut (default: true): Whether the standalone cluster manager should spread applications out across nodes or try to consolidate them onto as few nodes as possible. Spreading out is usually better for data locality in HDFS, but consolidating is more efficient for compute-intensive workloads
  • spark.deploy.defaultCores (default: infinite): Number of cores an application may use if it does not set spark.cores.max itself
  • spark.worker.timeout (default: 60): Number of seconds after which the master considers a worker lost if it receives no heartbeats
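
For example, to cap applications that do not set spark.cores.max and keep fewer finished applications in the UI, SPARK_MASTER_OPTS could be set in conf/spark-env.sh roughly like this (the values are only an illustration):

    # conf/spark-env.sh on the master
    SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4 -Dspark.deploy.retainedApplications=100"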

SPARK_WORKER_OPTS supports the following system properties:

  • spark.worker.cleanup.enabled (default: false): Whether to periodically clean up application directories on the worker in standalone mode
  • spark.worker.cleanup.interval (default: 1800): Interval, in seconds, between cleanups (the default is 30 minutes)
  • spark.worker.cleanup.appDataTtl (default: 7*24*3600): How long, in seconds, each worker retains an application's work directory
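
To switch that cleanup on, the worker entries in conf/spark-env.sh might look like this (the TTL of 604800 seconds equals seven days and is just an example):

    # conf/spark-env.sh on each worker
    SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=604800"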

Connecting an Application to the Cluster

You can connect an application to the cluster either by setting the master in a SparkContext or by passing it to the launch scripts. To connect with an interactive shell, run:

    ./bin/spark-shell --master spark://IP:PORT

You can also pass --total-executor-cores <numCores> to control the number of cores the shell uses on the cluster.
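
Put together, a shell session that limits its footprint on the cluster might be started like this; the hostname and core count are placeholders:

    ./bin/spark-shell --master spark://master-host:7077 --total-executor-cores 8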

Launching Spark Applications

The spark-submit script is used to submit applications to the Spark cluster. For standalone clusters, two deploy modes are supported. In client mode, the driver runs in the same process as the client that submits the application; in cluster mode, the driver runs on one of the workers. If you submit in cluster mode via spark-submit, the driver is launched on a worker, and any additional jars your application depends on should be specified with the --jars flag. If you pass --supervise, the cluster automatically restarts the application when it exits with a non-zero exit code. You can kill an application with the following command:

    ./bin/spark-class org.apache.spark.deploy.Client kill <master url> <driver ID>
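
The driver ID needed by the kill command can be found on the master's web UI (http://<master url>:8080). For reference, a cluster-mode submission with supervision and an extra dependency jar might look roughly like the following; the master URL, class name, and jar paths are placeholders:

    ./bin/spark-submit \
      --master spark://master-host:7077 \
      --deploy-mode cluster \
      --supervise \
      --class com.example.MyApp \
      --jars /path/to/dependency.jar \
      /path/to/my-app.jar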

Monitoring and Logging

Spark provides a web UI to monitor the applications running on the cluster, including cluster and job statistics. By default it listens on port 8080, which can be changed via configuration. By default, the log output of each job is written to SPARK_HOME/work on each slave node.

High Availability

Because the master is a potential single point of failure, the whole cluster becomes unusable if the master goes down. Two high availability schemes are available:

Standby Masters with ZooKeeper

You can start multiple masters connected to the same ZooKeeper instance; ZooKeeper elects one of them as the leader. When ZooKeeper detects that the leader has failed, it automatically fails over to another master. Because the cluster state, including the information about workers, drivers and applications, has already been persisted to ZooKeeper, the switchover only affects the scheduling of newly submitted jobs; jobs that are already running are not affected. The relevant configuration is as follows:

  • spark.deploy.recoveryMode: Set to ZOOKEEPER to enable standby Master recovery mode (default: NONE)
  • spark.deploy.zookeeper.url: The ZooKeeper cluster url (e.g., 192.168.1.100:2181,192.168.1.101:2181)
  • spark.deploy.zookeeper.dir: The directory in ZooKeeper to store recovery state (default: /spark)
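
These properties are passed to the daemons through SPARK_DAEMON_JAVA_OPTS in conf/spark-env.sh on every machine that may run a master; a sketch with placeholder ZooKeeper addresses:

    # conf/spark-env.sh on each master candidate
    SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
      -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
      -Dspark.deploy.zookeeper.dir=/spark"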

Single-Node Recovery with the Local File System

This mode is mainly intended for development or testing environments. If you give Spark a directory in which to store the registration information of applications and workers, and have their recovery state written there, then whenever the master fails you can recover it simply by restarting the master process (sbin/start-master.sh), which restores the registration information of the previously running applications and workers.
File-system based single-node recovery is configured mainly through SPARK_DAEMON_JAVA_OPTS in spark-env.sh:

  • spark.deploy.recoveryMode: Set to FILESYSTEM to enable single-node recovery mode (default: NONE)
  • spark.deploy.recoveryDirectory: The directory in which Spark will store recovery state, accessible from the Master's perspective
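
A minimal sketch of that setting, with a placeholder recovery directory:

    # conf/spark-env.sh on the master
    SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM \
      -Dspark.deploy.recoveryDirectory=/var/spark/recovery"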