Spark in Production (9): Spark Cluster Installation on a 5-Node Distributed Cluster
Source: Internet · Editor: 程序博客网 · Time: 2024/04/29 02:05
1. Upload Spark to the master node and check
[root@master rhzf_spark_setupTools]# ls
hadoop-2.6.5.tar.gz  jdk-8u121-linux-x64.tar.gz  scala-2.11.8.zip  spark-2.1.0-bin-hadoop2.6.tgz
[root@master rhzf_spark_setupTools]#
2. Extract and install Spark
[root@master rhzf_spark_setupTools]# tar -zxvf spark-2.1.0-bin-hadoop2.6.tgz
[root@master rhzf_spark_setupTools]# ls
hadoop-2.6.5.tar.gz  jdk-8u121-linux-x64.tar.gz  scala-2.11.8.zip  spark-2.1.0-bin-hadoop2.6  spark-2.1.0-bin-hadoop2.6.tgz
[root@master rhzf_spark_setupTools]# mv spark-2.1.0-bin-hadoop2.6 /usr/local
[root@master rhzf_spark_setupTools]# cd /usr/local
[root@master local]# ls
bin  games  include  lib    libexec             rhzf_spark_setupTools  scala-2.11.8  spark-2.1.0-bin-hadoop2.6
etc  hadoop-2.6.5  jdk1.8.0_121  lib64  rhzf_setup_scripts  sbin  share  src
[root@master local]#
3. Edit the /etc/profile file
export JAVA_HOME=/usr/local/jdk1.8.0_121
export SCALA_HOME=/usr/local/scala-2.11.8
export HADOOP_HOME=/usr/local/hadoop-2.6.5
export SPARK_HOME=/usr/local/spark-2.1.0-bin-hadoop2.6
export PATH=.:$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin
Reload the file so the changes take effect:
[root@master spark-2.1.0-bin-hadoop2.6]# source /etc/profile
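A quick way to confirm the new variables took effect is to check that Spark's bin directory landed on the PATH. A minimal, self-contained sketch using the paths from this install:

```shell
# Re-create the relevant exports from /etc/profile and verify PATH picked them up.
export SPARK_HOME=/usr/local/spark-2.1.0-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin

case ":$PATH:" in
  *":$SPARK_HOME/bin:"*) result="SPARK_HOME/bin is on PATH" ;;
  *)                     result="SPARK_HOME/bin missing from PATH" ;;
esac
echo "$result"   # → SPARK_HOME/bin is on PATH
```

On the real machine, `which spark-submit` after `source /etc/profile` gives the same confirmation.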
4. Modify the Spark configuration files
[root@master spark-2.1.0-bin-hadoop2.6]# cd ..
[root@master local]# ls
bin  games  include  lib    libexec             rhzf_spark_setupTools  scala-2.11.8  spark-2.1.0-bin-hadoop2.6
etc  hadoop-2.6.5  jdk1.8.0_121  lib64  rhzf_setup_scripts  sbin  share  src
[root@master local]# cd spark-2.1.0-bin-hadoop2.6
[root@master spark-2.1.0-bin-hadoop2.6]# ls
bin  conf  data  examples  jars  LICENSE  licenses  NOTICE  python  R  README.md  RELEASE  sbin  yarn
[root@master spark-2.1.0-bin-hadoop2.6]# cd conf
[root@master conf]# ls
docker.properties.template  log4j.properties.template    slaves.template  spark-env.sh.template
fairscheduler.xml.template  metrics.properties.template  spark-defaults.conf.template
[root@master conf]# mv spark-env.sh.template spark-env.sh
[root@master conf]# ls
docker.properties.template  log4j.properties.template    slaves.template  spark-env.sh
fairscheduler.xml.template  metrics.properties.template  spark-defaults.conf.template
[root@master conf]# vi spark-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_121
export SCALA_HOME=/usr/local/scala-2.11.8
export SPARK_MASTER_IP=10. 0.237
export SPARK_WORKER_MEMORY=2g
export HADOOP_CONF_DIR=/usr/local/hadoop-2.6.5/etc/hadoop
"spark-env.sh" 82L, 4180C written
[root@master conf]#
(The middle octets of the master's IP address are masked in the original post.)
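A side note on spark-env.sh: since Spark 2.0 the standalone scripts read SPARK_MASTER_HOST, and SPARK_MASTER_IP is kept only for backward compatibility (the logs print a deprecation warning). An equivalent setting would be:

```shell
# Spark 2.x name for the same setting; the master's real IP is masked in the
# original, shown here only as a placeholder.
export SPARK_MASTER_HOST=<master-ip>
```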
[root@master conf]# ls
docker.properties.template  log4j.properties.template    slaves.template  spark-env.sh
fairscheduler.xml.template  metrics.properties.template  spark-defaults.conf.template
[root@master conf]# mv slaves.template slaves
[root@master conf]# ls
docker.properties.template  fairscheduler.xml.template  log4j.properties.template  metrics.properties.template  slaves  spark-defaults.conf.template  spark-env.sh
[root@master conf]# vi slaves
worker01
worker02
worker03
worker04
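Instead of typing the worker names into vi by hand, the slaves file can be generated from a list. A small sketch (writing to /tmp/slaves here so as not to touch the real conf directory):

```shell
# Generate a slaves file from the worker list; /tmp/slaves stands in for
# $SPARK_HOME/conf/slaves in this sketch.
printf '%s\n' worker01 worker02 worker03 worker04 > /tmp/slaves
cat /tmp/slaves
```

This matters once the cluster grows: the same list can drive both the slaves file and the distribution loop in step 5, so the two never drift apart.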
5. Distribute to the worker nodes via script
[root@master rhzf_setup_scripts]# ls
rhzf_hadoop.sh  rhzf_hosts_scp.sh  rhzf_jdk.sh  rhzf_scala.sh  rhzf_ssh.sh
[root@master rhzf_setup_scripts]# vi rhzf_spark.sh
#!/bin/sh
for i in 238 239 240 241
do
scp -rq /usr/local/spark-2.1.0-bin-hadoop2.6 root@10 .$i:/usr/local/spark-2.1.0-bin-hadoop2.6
scp -rq /etc/profile root@10 .$i:/etc/profile
ssh root@10. 0.$i source /etc/profile
done
[root@master rhzf_setup_scripts]# ls
rhzf_hadoop.sh  rhzf_hosts_scp.sh  rhzf_jdk.sh  rhzf_scala.sh  rhzf_spark.sh  rhzf_ssh.sh
[root@master rhzf_setup_scripts]# chmod u+x rhzf_spark.sh
[root@master rhzf_setup_scripts]# ./rhzf_spark.sh
[root@master rhzf_setup_scripts]#
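One note on the script above: `ssh root@... source /etc/profile` runs in a non-interactive remote shell, so the sourced variables disappear as soon as that shell exits; login shells read /etc/profile on their own, so the line is harmless but has no lasting effect. A dry-run sketch of the same loop that echoes the transfer commands instead of executing them (HOST_PREFIX is a placeholder for the masked IP prefix in the original):

```shell
# Dry run of the distribution loop: print each command instead of running it,
# so the target list can be reviewed before any scp happens.
HOST_PREFIX="10.x.x"   # placeholder; the real prefix is masked in the original
for i in 238 239 240 241
do
  echo "scp -rq /usr/local/spark-2.1.0-bin-hadoop2.6 root@$HOST_PREFIX.$i:/usr/local/"
  echo "scp -q /etc/profile root@$HOST_PREFIX.$i:/etc/profile"
done > /tmp/dist_cmds
wc -l < /tmp/dist_cmds
```

Four hosts times two transfers gives eight commands; once the list looks right, dropping the `echo` turns the dry run into the real distribution.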
6. Start the Spark cluster
[root@master bin]# pwd
/usr/local/spark-2.1.0-bin-hadoop2.6/bin
[root@master bin]# cd ..
[root@master spark-2.1.0-bin-hadoop2.6]# cd sbin
[root@master sbin]# ls
slaves.sh         start-all.sh               start-mesos-shuffle-service.sh  start-thriftserver.sh    stop-mesos-dispatcher.sh       stop-slaves.sh
spark-config.sh   start-history-server.sh    start-shuffle-service.sh        stop-all.sh              stop-mesos-shuffle-service.sh  stop-thriftserver.sh
spark-daemon.sh   start-master.sh            start-slave.sh                  stop-history-server.sh   stop-shuffle-service.sh
spark-daemons.sh  start-mesos-dispatcher.sh  start-slaves.sh                 stop-master.sh           stop-slave.sh
[root@master sbin]# start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.master.Master-1-master.out
worker03: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker03.out
worker04: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker04.out
worker01: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker01.out
worker02: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-worker02.out
The startup output above shows the Master and all four Workers launching; the original post's screenshot of the result is not preserved.
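A quick sanity check on the start-all.sh output is to count the Worker launch lines. A self-contained sketch over the captured text:

```shell
# Count Worker launch lines in the start-all.sh output (text copied from the
# run above, trimmed to the hostname and class name).
out='worker03: starting org.apache.spark.deploy.worker.Worker
worker04: starting org.apache.spark.deploy.worker.Worker
worker01: starting org.apache.spark.deploy.worker.Worker
worker02: starting org.apache.spark.deploy.worker.Worker'
n=$(printf '%s\n' "$out" | grep -c 'deploy.worker.Worker')
echo "$n workers started"   # → 4 workers started
```

On the nodes themselves, `jps` should show one Master process on the master and one Worker process on each worker node; the Master web UI (port 8080 by default) lists the registered workers.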