Deploying Spark in Standalone Mode


Spark can run on cluster managers such as Mesos or YARN, or in Spark's own standalone deploy mode, which is mainly useful for local testing.

Deployment requires a pre-built Spark distribution, which you can download from the Spark website: http://spark.apache.org/downloads.html

Unpack the archive. (Before deploying, make sure a Java environment is installed and JAVA_HOME is set.)
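A minimal sketch of this preparation step. The tarball name and JDK path below are assumptions; substitute the release you actually downloaded and your own JDK location.

```shell
# Hypothetical pre-built release; replace with the file you downloaded.
SPARK_TGZ=spark-2.4.8-bin-hadoop2.7.tgz

# Unpack only if the archive is present in the current directory.
if [ -f "$SPARK_TGZ" ]; then
  tar -xzf "$SPARK_TGZ"
fi

# Point SPARK_HOME at the unpacked directory and make Java visible.
export SPARK_HOME="$PWD/${SPARK_TGZ%.tgz}"
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java-8-openjdk-amd64}"  # assumed JDK path
export PATH="$JAVA_HOME/bin:$PATH"
```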

Run the script to start the master:

./sbin/start-master.sh
Once the master is started, you can reach its web UI in a browser; the default port is 8080: http://localhost:8080
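A quick sketch of verifying the master and deriving the spark:// URL that workers will connect to. It assumes the master runs on the local host with the default ports (7077 for the service, 8080 for the web UI).

```shell
# Build the master URL workers will pass to start-slave.sh.
MASTER_HOST=$(hostname)
MASTER_URL="spark://${MASTER_HOST}:7077"
echo "workers connect to: $MASTER_URL"

# curl -sf is silent and fails on HTTP errors, so the first branch only
# fires when the web UI actually responds.
if curl -sf "http://localhost:8080" >/dev/null; then
  echo "master web UI is up"
else
  echo "master web UI not reachable"
fi
```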

Run the script to start a worker, passing it the master's URL:

./sbin/start-slave.sh <master-spark-URL>
Available options:

| Argument | Meaning |
| --- | --- |
| `-h HOST`, `--host HOST` | Hostname to listen on |
| `-i HOST`, `--ip HOST` | Hostname to listen on (deprecated, use `-h` or `--host`) |
| `-p PORT`, `--port PORT` | Port for service to listen on (default: 7077 for master, random for worker) |
| `--webui-port PORT` | Port for web UI (default: 8080 for master, 8081 for worker) |
| `-c CORES`, `--cores CORES` | Total CPU cores to allow Spark applications to use on the machine (default: all available); only on worker |
| `-m MEM`, `--memory MEM` | Total amount of memory to allow Spark applications to use on the machine, in a format like 1000M or 2G (default: your machine's total RAM minus 1 GB); only on worker |
| `-d DIR`, `--work-dir DIR` | Directory to use for scratch space and job output logs (default: SPARK_HOME/work); only on worker |
| `--properties-file FILE` | Path to a custom Spark properties file to load (default: conf/spark-defaults.conf) |

The startup scripts are all located under the ${SPARK_HOME}/sbin/ directory.
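A sketch that combines several of the options above: starting a worker that offers 4 cores and 8 GB to applications, with a custom scratch directory. The master URL `spark://master-host:7077` and the resource values are placeholders; adjust them to your cluster.

```shell
# Placeholder master URL and example resource limits.
MASTER_URL="spark://master-host:7077"
WORKER_OPTS="--cores 4 --memory 8G --work-dir /tmp/spark-work --webui-port 8081"

# Only invoke the script if a Spark installation is actually present;
# otherwise just show the command that would run.
if [ -x "$SPARK_HOME/sbin/start-slave.sh" ]; then
  "$SPARK_HOME/sbin/start-slave.sh" "$MASTER_URL" $WORKER_OPTS
else
  echo "would run: start-slave.sh $MASTER_URL $WORKER_OPTS"
fi
```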
