Spark Configuration in Detail

spark-submit options

Usage: spark-submit [options] <app jar | python file> [app options]

Options:
  --master MASTER_URL           spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE     Where the driver runs: "client" runs it on the submitting machine, "cluster" runs it inside the cluster.
  --class CLASS_NAME            Main class of the application to run.
  --name NAME                   Application name.
  --jars JARS                   Comma-separated list of local jars to include on the driver and executor classpaths.
  --conf PROP=VALUE             Arbitrary Spark configuration property.
  --py-files PY_FILES           Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python applications.
  --files FILES                 Comma-separated list of files to be placed in the working directory of each executor.
  --properties-file FILE        File from which to load application properties; defaults to conf/spark-defaults.conf.
  --driver-memory MEM           Memory for the driver, default 512M.
  --driver-java-options         Extra Java options for the driver.
  --driver-library-path         Extra library path entries for the driver, colon-separated, e.g. **/lib:##/lib.
  --driver-class-path           Extra classpath entries for the driver, colon-separated, e.g. **.jar:##.jar. Jars added with --jars are included automatically.
  --executor-memory MEM         Memory per executor; default 512M in standalone mode.
  --version                     Print the version of the current Spark.

Spark standalone with cluster deploy mode only:
  --driver-cores NUM            Number of cores used by the driver, default 1.

Spark standalone and Mesos with cluster deploy mode only:
  --supervise                   If given, restarts the driver on failure.
  --kill SUBMISSION_ID          If given, kills the driver specified.
  --status SUBMISSION_ID        If given, requests the status of the driver specified.

Spark standalone and Mesos only:
  --total-executor-cores NUM    Total number of cores used by all executors. Can also be set in spark-env.sh; equivalent to spark.deploy.defaultCores / spark.cores.max.

YARN only:
  --executor-cores NUM          Number of cores per executor, default 1. Can also be set via SPARK_EXECUTOR_CORES in spark-env.sh.
  --queue QUEUE_NAME            YARN queue to submit the application to; default is the "default" queue.
  --num-executors NUM           Number of executors to launch, default 2. Can also be set via SPARK_EXECUTOR_INSTANCES in spark-env.sh.
  --archives ARCHIVES           Comma-separated list of archives to be extracted into the working directory of each executor.

Master URLs:

| Master URL | Meaning |
|---|---|
| local | Run the Spark application locally with one worker thread. |
| local[K] | Run the Spark application locally with K worker threads. |
| local[*] | Run the Spark application locally with as many worker threads as available. |
| spark://HOST:PORT | Connect to a Spark standalone cluster and run the application there. |
| mesos://HOST:PORT | Connect to a Mesos cluster and run the application there. |
| yarn-client | Connect to a YARN cluster in client mode; the cluster location is taken from HADOOP_CONF_DIR, and the driver runs on the client. |
| yarn-cluster | Connect to a YARN cluster in cluster mode; the cluster location is taken from HADOOP_CONF_DIR, and the driver also runs inside the cluster. |
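To see how these flags combine, here is a minimal, hypothetical submission; the application jar, main class, paths, and resource sizes below are placeholders rather than values taken from this article:

./bin/spark-submit \
  --master yarn-cluster \
  --name word-count-demo \
  --class com.example.WordCount \
  --driver-memory 1g \
  --executor-memory 2g \
  --executor-cores 2 \
  --num-executors 4 \
  --queue default \
  --conf spark.speculation=true \
  /path/to/word-count.jar hdfs:///input hdfs:///output

The --conf flag can be repeated for any of the properties listed in the tables that follow.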
All of the properties below can be set in spark-defaults.conf, and many of them can also be set from the application via SparkConf().set().

Application properties

| Property | Default | Meaning |
|---|---|---|
| spark.app.name | (none) | Name of your application; it appears in the UI and in log data. |
| spark.driver.cores | 1 | Number of CPU cores used by the driver process. |
| spark.driver.maxResultSize | 1g | Limit on the total size of serialized results of all partitions for each Spark action (e.g. collect). Should be at least 1m; 0 means unlimited. The job is aborted if the total size exceeds this limit. Setting it too high can cause out-of-memory errors in the driver (depending on spark.driver.memory and the memory overhead of objects in the JVM). |
| spark.driver.memory | 512m | Amount of memory used by the driver process. |
| spark.executor.memory | 512m | Amount of memory used by each executor process, in the same format as JVM memory strings (e.g. 512m, 2g). |
| spark.extraListeners | (none) | Listeners to register; each must implement SparkListener. |
| spark.local.dir | /tmp | Directory used by Spark for scratch space. In Spark 1.0 and later this is overridden by the SPARK_LOCAL_DIRS (standalone, Mesos) or LOCAL_DIRS (YARN) environment variables. |
| spark.logConf | false | Log the effective SparkConf at INFO level when the SparkContext starts. |
| spark.master | (none) | The cluster manager to connect to. |

Runtime environment

| Property | Default | Meaning |
|---|---|---|
| spark.driver.extraClassPath | (none) | Extra classpath entries to append to the driver's classpath. |
| spark.driver.extraJavaOptions | (none) | JVM options string passed to the driver, e.g. GC or logging settings. Note that it is illegal to set Spark properties or heap size here; Spark properties should be set with a SparkConf object or in spark-defaults.conf, and the heap size via spark.driver.memory. |
| spark.driver.extraLibraryPath | (none) | Library path used when launching the driver JVM. |
| spark.driver.userClassPathFirst | false | (Experimental) Whether user-added jars take precedence over Spark's own jars when loading classes in the driver. Can mitigate conflicts between Spark's dependencies and user dependencies. Still an experimental feature. |
| spark.executor.extraClassPath | (none) | Extra classpath entries to append to the executors' classpath. Exists mainly for backwards compatibility with older versions of Spark; users generally should not need to set it. |
| spark.executor.extraJavaOptions | (none) | JVM options string passed to executors, e.g. GC or logging settings. Note that it is illegal to set Spark properties or heap size here; Spark properties should be set with a SparkConf object or in the spark-defaults.conf file used by spark-submit, and the heap size via spark.executor.memory. |
| spark.executor.extraLibraryPath | (none) | Library path used when launching executor JVMs. |
| spark.executor.logs.rolling.maxRetainedFiles | (none) | Number of the most recent rolled-over log files the system retains; older files are deleted. Disabled by default. |
| spark.executor.logs.rolling.size.maxBytes | (none) | Maximum size, in bytes, at which executor logs roll over. Disabled by default. |
| spark.executor.logs.rolling.strategy | (none) | Rolling strategy for executor logs; disabled by default. Can be set to "time" or "size". For "time", use spark.executor.logs.rolling.time.interval to set the rolling interval; for "size", use spark.executor.logs.rolling.size.maxBytes to set the maximum size. |
| spark.executor.logs.rolling.time.interval | daily | Time interval at which executor logs roll over; disabled by default. Valid values are daily, hourly, minutely, or any interval in seconds. |
| spark.files.userClassPathFirst | false | (Experimental) Whether user-added jars take precedence over Spark's own jars when loading classes in executors. Can mitigate conflicts between Spark's dependencies and user dependencies. Still an experimental feature. |
| spark.python.worker.memory | 512m | Amount of memory used per Python worker process during aggregation. If memory used during aggregation exceeds this limit, the data is spilled to disk. |
| spark.python.profile | false | Enable profiling in Python workers. Profile results are shown via sc.show_profiles(), or before the driver exits, and can be dumped to disk via sc.dump_profiles(path). If some results have already been displayed manually, they will not be shown automatically before the driver exits. |
| spark.python.profile.dump | (none) | Directory in which profile results are dumped before the driver exits. One file is dumped per RDD; the files can be loaded with pstats.Stats(). If this property is set, results are not shown automatically. |
| spark.python.worker.reuse | true | Whether to reuse Python workers. If so, a fixed number of Python workers is used instead of fork()ing a new Python process per task. Very useful when there is a large broadcast, because the broadcast then does not need to be transferred from the JVM to a Python worker for every task. |
| spark.executorEnv.[EnvironmentVariableName] | (none) | Adds the environment variable EnvironmentVariableName to executor processes. Multiple such properties can be specified to set several environment variables. |
| spark.mesos.executor.home | driver-side SPARK_HOME | Directory in which Spark is installed on Mesos executors. By default executors use the driver's Spark home directory, which may not be visible to them. This setting only matters when a Spark binary package is not specified via spark.executor.uri. |
| spark.mesos.executor.memoryOverhead | executor memory * 0.07, minimum 384m | Amount added to spark.executor.memory when computing the total memory of a Mesos task. There is a hard-coded 7%: the final value is the larger of spark.mesos.executor.memoryOverhead and 7% of spark.executor.memory. |

Shuffle behavior

| Property | Default | Meaning |
|---|---|---|
| spark.reducer.maxMbInFlight | 48 | Maximum size (in MB) of map output fetched simultaneously by each reduce task. Since each fetch needs a receive buffer, this represents a fixed memory overhead per reduce task, so keep it small unless you have plenty of memory. |
| spark.shuffle.blockTransferService | netty | Implementation used to transfer shuffle and cached blocks between executors. Two implementations are available: netty and nio. Netty-based block transfer is simpler while being equally efficient. |
| spark.shuffle.compress | true | Whether to compress map output files. Generally a good idea. |
| spark.shuffle.consolidateFiles | false | If "true", consolidated intermediate files are created during shuffles. Creating fewer files can improve filesystem performance for shuffles with large numbers of reduce tasks. Setting it to "true" is recommended on ext4 and XFS filesystems; on ext3, filesystem limitations may cause this option to degrade performance on machines with more than 8 cores. |
| spark.shuffle.file.buffer.kb | 32 | Size, in KB, of the in-memory buffer for each shuffle file output stream. The buffer reduces the number of disk seeks and system calls made while creating intermediate shuffle files. |
| spark.shuffle.io.maxRetries | 3 | Netty only: number of automatic retries. |
| spark.shuffle.io.numConnectionsPerPeer | 1 | Netty only. |
| spark.shuffle.io.preferDirectBufs | true | Netty only. |
| spark.shuffle.io.retryWait | 5 | Netty only. |
| spark.shuffle.manager | sort | Implementation used for shuffling data. Two implementations are available: sort and hash. Sort-based shuffle is more memory-efficient. |
| spark.shuffle.memoryFraction | 0.2 | When spark.shuffle.spill is true, the fraction of the Java heap used for aggregation and cogroup during shuffles. At any time the collective size of all in-memory maps used for shuffles is bounded by this limit; beyond it, data spills to disk. If spills are too frequent, consider increasing this value. |
| spark.shuffle.sort.bypassMergeThreshold | 200 | (Advanced) In the sort-based shuffle manager, avoid merge-sorting data if there is no map-side aggregation and there are at most this many reduce partitions. |
| spark.shuffle.spill | true | If "true", limits the memory used during reduces by spilling excess data to disk. The spilling threshold is specified by spark.shuffle.memoryFraction. |
| spark.shuffle.spill.compress | true | Whether to compress data spilled during shuffles. The codec is specified by spark.io.compression.codec. |

Spark UI

| Property | Default | Meaning |
|---|---|---|
| spark.eventLog.compress | false | Whether to compress the event log. Requires spark.eventLog.enabled to be true. |
| spark.eventLog.dir | file:///tmp/spark-events | Base directory for Spark event logging. Spark creates a sub-directory per application under this directory, and each application logs its events into its own sub-directory. Users may want to point this at a shared location such as HDFS so that history files can be read by the history server. |
| spark.eventLog.enabled | false | Whether to log Spark events. Useful for reconstructing the web UI after the application has finished. |
| spark.ui.killEnabled | true | Allows stages and their corresponding jobs to be killed from the web UI. |
| spark.ui.port | 4040 | Port of your application's dashboard, which shows memory and workload data. |
| spark.ui.retainedJobs | 1000 | How many jobs the Spark UI and status APIs remember before garbage collecting them. |
| spark.ui.retainedStages | 1000 | How many stages the Spark UI and status APIs remember before garbage collecting them. |

Compression and serialization

| Property | Default | Meaning |
|---|---|---|
| spark.broadcast.compress | true | Whether to compress broadcast variables before sending them. |
| spark.closure.serializer | org.apache.spark.serializer.JavaSerializer | Serializer class used for closures. Currently only the Java serializer is supported. |
| spark.io.compression.codec | snappy | Codec used to compress internal data such as RDD partitions, broadcast variables, and shuffle output. Spark provides three choices by default: lz4, lzf, and snappy; a fully qualified class name can also be used. |
| spark.io.compression.lz4.block.size | 32768 | Block size used for LZ4 compression. Lowering it also lowers shuffle memory usage. |
| spark.io.compression.snappy.block.size | 32768 | Block size used for Snappy compression. Lowering it also lowers shuffle memory usage. |
| spark.kryo.classesToRegister | (none) | If you use Kryo serialization, a comma-separated list of custom class names to register with Kryo. |
| spark.kryo.referenceTracking | true | Whether to track references to the same object when serializing with Kryo. Required if your object graphs contain cycles, and useful for efficiency if they contain multiple copies of the same object. Can be disabled for better performance if you know neither case applies. |
| spark.kryo.registrationRequired | false | Whether registration with Kryo is required. If true, Kryo throws an exception when an unregistered class is serialized. If false, Kryo writes the class name alongside each unregistered object, which can cause significant performance overhead. |
| spark.kryo.registrator | (none) | If you use Kryo serialization, the class used to register your custom classes. Useful when you need to register classes in a custom way; otherwise spark.kryo.classesToRegister is simpler. It should be set to a class that extends KryoRegistrator. |
| spark.kryoserializer.buffer.max.mb | 64 | Maximum allowed size of the Kryo serialization buffer. Must be larger than any object you attempt to serialize. |
| spark.kryoserializer.buffer.mb | 0.064 | Initial size of Kryo's serialization buffer. There is one buffer per core on each worker, and it grows up to spark.kryoserializer.buffer.max.mb if needed. |
| spark.rdd.compress | false | Whether to compress serialized RDD partitions. Saves substantial space at the cost of some extra CPU time. |
| spark.serializer | org.apache.spark.serializer.JavaSerializer | Class used to serialize objects. The default Java serializer can handle any serializable Java object but is quite slow, so org.apache.spark.serializer.KryoSerializer is recommended. |
| spark.serializer.objectStreamReset | 100 | When serializing with org.apache.spark.serializer.JavaSerializer, the serializer caches objects to avoid writing redundant data, which prevents those objects from being garbage collected. Calling 'reset' flushes that information from the serializer so old objects can be collected. Set to -1 to disable the periodic reset; by default the serializer is reset every 100 objects. |

Runtime behavior

| Property | Default | Meaning |
|---|---|---|
| spark.broadcast.blockSize | 4096 | Block size used by TorrentBroadcastFactory. Too large a value reduces parallelism during the broadcast; too small a value can create a performance bottleneck. |
| spark.broadcast.factory | org.apache.spark.broadcast.TorrentBroadcastFactory | Broadcast implementation class. |
| spark.cleaner.ttl | (infinite) | How long Spark remembers any metadata (stages generated, tasks generated, and so on). Periodic cleaning ensures that stale metadata is discarded, which is useful for long-running jobs such as 24/7 Spark Streaming applications. RDDs persisted in memory longer than this are also cleaned. |
| spark.default.parallelism | local mode: number of cores; Mesos: 8; otherwise: max(total executor cores, 2) | Default number of tasks used for shuffle operations (groupByKey, reduceByKey, etc.) when the user does not set it. |
| spark.executor.heartbeatInterval | 10000 | Interval, in milliseconds, between heartbeats sent from each executor to the driver. |
| spark.files.fetchTimeout | 60 | Timeout, in seconds, for fetching files added through SparkContext.addFile() from the driver. |
| spark.files.useFetchCache | true | Whether to use a local cache when fetching files. |
| spark.files.overwrite | false | Whether to overwrite files added through SparkContext.addFile(). |
| spark.hadoop.cloneConf | false | Whether each task clones its own copy of the Hadoop configuration. |
| spark.hadoop.validateOutputSpecs | true | Whether to validate output specifications. |
| spark.storage.memoryFraction | 0.6 | Fraction of the heap used for Spark's memory cache. It should not be larger than the old generation; the default is 0.6, but you can increase it if you configure a larger old generation yourself. |
| spark.storage.memoryMapThreshold | 2097152 | Block size, in bytes, above which Spark memory-maps blocks when reading them from disk. |
| spark.storage.unrollFraction | 0.2 | Fraction of spark.storage.memoryFraction to use for unrolling blocks in memory. |
| spark.tachyonStore.baseDir | System.getProperty("java.io.tmpdir") | Temporary directory for the Tachyon File System store. |
| spark.tachyonStore.url | tachyon://localhost:19998 | Tachyon File System URL. |

Networking

| Property | Default | Meaning |
|---|---|---|
| spark.driver.host | (local hostname) | Hostname or IP address the driver listens on, used for communicating with the executors and the standalone master. |
| spark.driver.port | (random) | Port the driver listens on, used for communicating with the executors and the standalone master. |
| spark.fileserver.port | (random) | Port the driver's file server listens on. |
| spark.broadcast.port | (random) | Port the driver's HTTP broadcast server listens on. |
| spark.replClassServer.port | (random) | Port the driver's HTTP class server listens on. |
| spark.blockManager.port | (random) | Port the block managers listen on; block managers exist on both the driver and the executors. |
| spark.executor.port | (random) | Port the executor listens on, used for communicating with the driver. |
| spark.port.maxRetries | 16 | Maximum number of retries when binding to a port before giving up. |
| spark.akka.frameSize | 10 | Maximum message size allowed in "control plane" communication. Increase it if your tasks need to send large results to the driver. |
| spark.akka.threads | 4 | Number of actor threads used for communication. Useful to increase when the driver has many CPU cores. |
| spark.akka.timeout | 100 | Communication timeout between Spark nodes, in seconds. |
| spark.akka.heartbeat.pauses | 6000 | Set to a large value to disable the failure detector built into Akka. It can be enabled again if you plan to use this feature (not recommended). Acceptable heartbeat pause, in seconds, for Akka; can be used to control sensitivity to GC pauses. Tune it together with spark.akka.heartbeat.interval and spark.akka.failure-detector.threshold if you need to. |
| spark.akka.failure-detector.threshold | 300.0 | Set to a large value to disable the failure detector built into Akka. It can be enabled again if you plan to use this feature (not recommended). Maps to Akka's akka.remote.transport-failure-detector.threshold. Tune it together with spark.akka.heartbeat.pauses and spark.akka.heartbeat.interval if you need to. |
| spark.akka.heartbeat.interval | 1000 | Set to a large value to disable the failure detector built into Akka. It can be enabled again if you plan to use this feature (not recommended). A larger interval (in seconds) reduces network overhead; a smaller value (~1 s) may be more informative for Akka's failure detector. Tune it together with spark.akka.heartbeat.pauses and spark.akka.failure-detector.threshold if you need to. The only positive use case for the failure detector is that a sensitive detector can help evict rogue executors quickly; however, GC pauses and network lags are expected in a real Spark cluster, and enabling it also floods the network with heartbeat exchanges between nodes. |

Scheduling

| Property | Default | Meaning |
|---|---|---|
| spark.task.cpus | 1 | Number of cores allocated to each task. |
| spark.task.maxFailures | 4 | Maximum number of task retries. |
| spark.scheduler.mode | FIFO | Spark's task scheduling mode; FAIR is also available. |
| spark.cores.max | (not set) | When the application runs on a standalone cluster or a coarse-grained Mesos cluster, the maximum total number of CPU cores the application requests from the cluster (across the whole cluster, not per machine). If unset, a standalone cluster uses spark.deploy.defaultCores and Mesos uses all available cores. Setting it allows multiple applications to run at the same time; otherwise scheduling is effectively FIFO. |
| spark.mesos.coarse | false | If true, run on Mesos clusters in coarse-grained sharing mode. |
| spark.speculation | false | This and the next three properties control Spark's speculative execution. If set to true, Spark speculatively re-launches straggler tasks of a stage on other nodes and uses the result of whichever copy finishes first. |
| spark.speculation.interval | 100 | How often, in milliseconds, Spark checks task status for speculation. |
| spark.speculation.quantile | (not set) | Fraction of tasks in a stage that must be complete before speculation is enabled. |
| spark.speculation.multiplier | 1.5 | How many times slower than the median completed-task run time a task must be before speculation is triggered for it. |
| spark.locality.wait | 3000 | This and the next three properties control data locality. It is the wait time, in milliseconds, before launching a task at a lower locality level; if exceeded, the next locality level is tried (process-local -> node-local -> rack-local -> any). The same value applies between each pair of levels, and per-level values can be set with properties such as spark.locality.wait.node. |
| spark.locality.wait.process | spark.locality.wait | Locality wait time at the process-local level. |
| spark.locality.wait.node | spark.locality.wait | Locality wait time at the node-local level. |
| spark.locality.wait.rack | spark.locality.wait | Locality wait time at the rack-local level. |
| spark.scheduler.revive.interval | 1000 | Maximum interval, in milliseconds, at which tasks waiting for resources are revived. This applies to tasks that entered a wait state because local resources were insufficient and were given to other tasks; if enough resources are reacquired within this interval, the task continues. |

Dynamic Allocation

| Property | Default | Meaning |
|---|---|---|
| spark.dynamicAllocation.enabled | false | Whether to enable dynamic resource allocation. |
| spark.dynamicAllocation.executorIdleTimeout | 600 | |
| spark.dynamicAllocation.initialExecutors | spark.dynamicAllocation.minExecutors | |
| spark.dynamicAllocation.maxExecutors | Integer.MAX_VALUE | |
| spark.dynamicAllocation.minExecutors | 0 | |
| spark.dynamicAllocation.schedulerBacklogTimeout | 5 | |
| spark.dynamicAllocation.sustainedSchedulerBacklogTimeout | schedulerBacklogTimeout | |

Security

| Property | Default | Meaning |
|---|---|---|
| spark.authenticate | false | Whether Spark authenticates its internal connections. See spark.authenticate.secret if not running on YARN. |
| spark.authenticate.secret | None | Secret key used for authentication between Spark components. Must be set if Spark is not running on YARN but authentication is required. |
| spark.core.connection.auth.wait.timeout | 30 | How long, in seconds, a connection waits for authentication. |
| spark.core.connection.ack.wait.timeout | 60 | How long, in seconds, a connection waits for an acknowledgement. Set a larger value to avoid unwanted timeouts. |
| spark.ui.filters | None | Comma-separated list of filter class names to apply to the Spark web UI. The filters must be standard javax servlet Filters. Parameters for each filter can be specified via Java system properties of the form spark.<class name of filter>.params='param1=value1,param2=value2', for example -Dspark.ui.filters=com.test.filter1 -Dspark.com.test.filter1.params='param1=foo,param2=testing'. |
| spark.acls.enable | false | Whether to enable Spark ACLs. If enabled, Spark checks whether the user has permission to view or modify a job. The UI uses the servlet filters to authenticate and identify the user. |
| spark.ui.view.acls | empty | Comma-separated list of users allowed to view the Spark web UI. By default only the user who started the Spark job has view access. |
| spark.modify.acls | empty | Comma-separated list of users allowed to modify the Spark job. By default only the user who started the Spark job has modify access. |
| spark.admin.acls | empty | Comma-separated list of users or administrators who can view and modify all Spark jobs. Useful on a shared cluster where a group of administrators or developers helps with debugging. |

Encryption

| Property | Default | Meaning |
|---|---|---|
| spark.ssl.enabled | false | Whether to enable SSL. |
| spark.ssl.enabledAlgorithms | Empty | Comma-separated list of algorithms supported by the JVM. |
| spark.ssl.keyPassword | None | |
| spark.ssl.keyStore | None | |
| spark.ssl.keyStorePassword | None | |
| spark.ssl.protocol | None | |
| spark.ssl.trustStore | None | |
| spark.ssl.trustStorePassword | None | |

Spark Streaming

| Property | Default | Meaning |
|---|---|---|
| spark.streaming.blockInterval | 200 | Interval, in milliseconds, at which data received by Spark Streaming receivers is chunked into blocks before being stored in Spark. A minimum of 50 ms is recommended. |
| spark.streaming.receiver.maxRate | infinite | Maximum number of records per second that each receiver accepts; effectively each stream consumes at most this many records per second. Setting it to 0 or a negative value removes the limit. |
| spark.streaming.receiver.writeAheadLogs.enable | false | Enable write-ahead logs for receivers. All input data received through receivers is saved to write-ahead logs so that it can be recovered after driver failures. |
| spark.streaming.unpersist | true | Force RDDs generated and persisted by Spark Streaming to be automatically unpersisted from Spark's memory; the raw input data received by Spark Streaming is cleared as well. Setting this to false lets the streaming application access the raw data and the persisted RDDs, since they are not cleared automatically, at the cost of higher memory usage. |

Cluster management: Spark on YARN

| Property | Default | Meaning |
|---|---|---|
| spark.yarn.am.memory | 512m | Memory of the ApplicationMaster in client mode; in cluster mode, spark.driver.memory is used instead. |
| spark.driver.cores | 1 | In cluster mode, the number of CPU cores used by the driver; since the driver runs inside the AM, this is effectively the AM's core count. In client mode, spark.yarn.am.cores is used instead. |
| spark.yarn.am.cores | 1 | Number of CPU cores for the AM in client mode. |
| spark.yarn.am.waitTime | 100000 | Wait time at startup. |
| spark.yarn.submit.file.replication | 3 | HDFS replication factor for files uploaded for the application. |
| spark.yarn.preserve.staging.files | false | If true, staging files are preserved rather than deleted when the job finishes. |
| spark.yarn.scheduler.heartbeat.interval-ms | 5000 | Interval at which the Spark ApplicationMaster sends heartbeats to the YARN ResourceManager. |
| spark.yarn.max.executor.failures | 2 * number of executors, minimum 3 | Maximum number of executor failures before the application is declared failed. |
| spark.yarn.applicationMaster.waitTries | 10 | Number of times the RM waits for the Spark ApplicationMaster to start, i.e. the number of SparkContext initialization attempts; beyond this number the launch fails. |
| spark.yarn.historyServer.address | (none) | Address of the Spark history server (do not include http://). The address is handed to the YARN RM when the application finishes, so the RM UI can link to the history server UI. |
| spark.yarn.dist.archives | (none) | |
| spark.yarn.dist.files | (none) | |
| spark.executor.instances | 2 | Number of executor instances. |
| spark.yarn.executor.memoryOverhead | executorMemory * 0.07, with minimum of 384 | Memory overhead allocated per executor, in addition to the executor heap. |
| spark.yarn.driver.memoryOverhead | driverMemory * 0.07, with minimum of 384 | Memory overhead allocated for the driver. |
| spark.yarn.am.memoryOverhead | AM memory * 0.07, with minimum of 384 | Memory overhead allocated for the AM, used in client mode. |
| spark.yarn.queue | default | YARN queue to use. |
| spark.yarn.jar | (none) | |
| spark.yarn.access.namenodes | (none) | |
| spark.yarn.appMasterEnv.[EnvironmentVariableName] | (none) | Sets environment variables for the AM. |
| spark.yarn.containerLauncherMaxThreads | 25 | Maximum number of threads the AM uses to launch executor containers. |
| spark.yarn.am.extraJavaOptions | (none) | |
| spark.yarn.maxAppAttempts | yarn.resourcemanager.am.max-attempts in YARN | Number of AM retry attempts. |

Spark History Server properties

| Property | Default | Meaning |
|---|---|---|
| spark.history.provider | org.apache.spark.deploy.history.FsHistoryProvider | Class name of the application-history backend. Currently there is only one implementation, provided by Spark, which reads application logs stored in a filesystem. |
| spark.history.fs.logDirectory | file:/tmp/spark-events | |
| spark.history.updateInterval | 10 | How often, in seconds, the information shown by the Spark history server is refreshed. Each refresh checks the persisted event logs for changes. |
| spark.history.retainedApplications | 50 | Maximum number of applications shown on the Spark history server; when exceeded, the oldest application information is removed. |
| spark.history.ui.port | 18080 | Default access port of the Spark history server in the official distribution. |
| spark.history.kerberos.enabled | false | Whether the history server logs in via Kerberos; useful when the persistence layer is on HDFS in a secure cluster. If true, the following two properties must be configured. |
| spark.history.kerberos.principal | (empty) | Kerberos principal name for the Spark history server. |
| spark.history.kerberos.keytab | (empty) | Location of the Kerberos keytab file for the Spark history server. |
| spark.history.ui.acls.enable | false | Whether to check ACLs when authorizing users to view application information. If enabled, only the application owner and the users listed in spark.ui.view.acls can view the application; if disabled, no check is made. |
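In practice only a handful of these properties need to be set explicitly. As a rough, illustrative sketch (the host name, directory, and sizes below are placeholders, not recommendations), a spark-defaults.conf might look like this:

# conf/spark-defaults.conf -- illustrative values only
spark.master                      spark://master-host:7077
spark.eventLog.enabled            true
spark.eventLog.dir                hdfs:///spark-events
spark.serializer                  org.apache.spark.serializer.KryoSerializer
spark.executor.memory             2g
spark.default.parallelism         64

The same properties can also be set programmatically, e.g. new SparkConf().set("spark.executor.memory", "2g"), or on the command line with spark-submit --conf.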


spark-env.sh configuration options (from the template in the source tree):

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append


# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos


# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2) // needed in YARN mode: total number of worker (executor) processes launched for the application
# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1). // needed in YARN mode: number of cores per worker
# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G) // needed in YARN mode: amount of memory per worker
# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb) // driver memory; 1-2G is usually enough
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.


# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine // total number of cores on this node available to all workers running on it; defaults to all cores
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node // meaningless for YARN and Mesos, where a single worker per node can already run multiple executors. In standalone mode, however, a worker runs only one executor per application (different applications can each get their own executor inside the same worker), so this variable is how you run multiple executors of one application on a node; consequently, in standalone mode --num-executors (spark-submit) and SPARK_EXECUTOR_INSTANCES (spark-env.sh) have no effect.

# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers


# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR       Where log files are stored.  (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)
"spark-env.sh" 59L, 3565C                                                                                          2,0-1         Top
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)
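For a small standalone cluster, a working spark-env.sh usually sets only a few of the variables above. The following sketch is illustrative; the host name, directories, and sizes are placeholders:

# spark-env.sh -- illustrative standalone-mode settings
SPARK_MASTER_IP=master-host            # hostname the master binds to (placeholder)
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=8                   # total cores this node's worker may hand out
SPARK_WORKER_MEMORY=16g                # total memory this node's worker may give to executors
SPARK_WORKER_INSTANCES=2               # standalone only: run two worker processes on this node
SPARK_LOCAL_DIRS=/data/spark-tmp       # scratch space for shuffle and RDD data
SPARK_WORKER_DIR=/data/spark-work      # working directory of worker processes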


