Installing Spark on CentOS 6.5 (64-bit)


1. Check the system environment

    cat /etc/redhat-release
    uname -r
    uname -m

Disable the firewall on all servers:

    /etc/init.d/iptables stop
    chkconfig iptables off
    chkconfig --list iptables

2. Spark cluster layout

Four machines are prepared in total:

    Host     IP              Hadoop   Spark
    node1    192.168.2.128   Master   Master
    node2    192.168.2.130   Slave    Slave
    node3    192.168.2.131   Slave    Slave
    node4    192.168.2.132   Slave    Slave

3. Download Hadoop 2.6.0, the matching Spark 1.6.0 build, and jdk-7u65-linux-x64.rpm

On node1, create a /soft directory and download the software into it:

    mkdir /soft/
    cd /soft/
    wget http://archive.apache.org/dist/hadoop/core/hadoop-2.6.0/hadoop-2.6.0.tar.gz
    wget http://archive.apache.org/dist/spark/spark-1.6.0/spark-1.6.0-bin-hadoop2.6.tgz

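The JDK RPM mentioned in the heading is not covered by the wget commands above; it has to be downloaded from Oracle separately and placed into /soft/ as well. As an optional sanity check before unpacking (a minimal sketch, not part of the original steps):

    # confirm the archives are present and not truncated
    ls -lh /soft/
    tar -tzf /soft/hadoop-2.6.0.tar.gz > /dev/null && echo "hadoop-2.6.0.tar.gz looks intact"
    tar -tzf /soft/spark-1.6.0-bin-hadoop2.6.tgz > /dev/null && echo "spark archive looks intact"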

4. Deploy the Spark distributed cluster

4.1 Map IPs to hostnames

4.1.1 Map each IP to its hostname in /etc/hosts so the hostnames can be used directly later on:

    vim /etc/hosts
    # add the following entries
    192.168.2.128 node1
    192.168.2.130 node2
    192.168.2.131 node3
    192.168.2.132 node4

4.1.2 Copy the /etc/hosts file from node1 to the other three hosts:

    scp /etc/hosts root@192.168.2.130:/etc/hosts
    scp /etc/hosts root@192.168.2.131:/etc/hosts
    scp /etc/hosts root@192.168.2.132:/etc/hosts

4.2 Create a spark user on all four servers

    # on node1
    useradd spark
    # on node2
    useradd spark
    # on node3
    useradd spark
    # on node4
    useradd spark

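Note that useradd creates the account without a usable password, while the scp commands in the next step authenticate as the spark user before any keys are in place. Setting a password first avoids that problem (a minimal sketch; the password below is a placeholder, replace it and run this on every node):

    # placeholder password for illustration only
    echo 'spark:ChangeMe_123' | chpasswd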

4.3 Configure passwordless SSH between the hosts

Set up passwordless SSH login among the four hosts so that Hadoop and Spark can later send commands to the slaves without prompting for a password.

4.3.1 Generate an SSH key pair on each node

    # run as the spark user; accept the default key location when prompted
    # node1
    ssh-keygen -t rsa -P ''
    # node2
    ssh-keygen -t rsa -P ''
    # node3
    ssh-keygen -t rsa -P ''
    # node4
    ssh-keygen -t rsa -P ''

4.3.2 Copy /home/spark/.ssh/id_rsa.pub from node2, node3, and node4 to node1, renaming each copy, then build and distribute authorized_keys:

    # on node2
    scp /home/spark/.ssh/id_rsa.pub spark@node1:/home/spark/.ssh/node2.id_rsa.pub
    # on node3
    scp /home/spark/.ssh/id_rsa.pub spark@node1:/home/spark/.ssh/node3.id_rsa.pub
    # on node4
    scp /home/spark/.ssh/id_rsa.pub spark@node1:/home/spark/.ssh/node4.id_rsa.pub
    # on node1: merge all public keys and push authorized_keys back out
    cat /home/spark/.ssh/id_rsa.pub >> /home/spark/.ssh/authorized_keys
    cat /home/spark/.ssh/node2.id_rsa.pub >> /home/spark/.ssh/authorized_keys
    cat /home/spark/.ssh/node3.id_rsa.pub >> /home/spark/.ssh/authorized_keys
    cat /home/spark/.ssh/node4.id_rsa.pub >> /home/spark/.ssh/authorized_keys
    chmod 600 /home/spark/.ssh/authorized_keys
    scp /home/spark/.ssh/authorized_keys spark@node2:/home/spark/.ssh/authorized_keys
    scp /home/spark/.ssh/authorized_keys spark@node3:/home/spark/.ssh/authorized_keys
    scp /home/spark/.ssh/authorized_keys spark@node4:/home/spark/.ssh/authorized_keys

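To verify the setup, log in from node1 to each node as the spark user; no password prompt should appear (an optional check, not part of the original steps):

    # each command should print the remote hostname without prompting
    for host in node2 node3 node4; do
        ssh spark@$host hostname
    done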

4.4 Install the JDK

Copy /soft/jdk-7u65-linux-x64.rpm to the other nodes with scp, then install the JDK on each node:

    # on node1
    rpm -ivh /soft/jdk-7u65-linux-x64.rpm

Configure the JDK environment variables:

    vim /etc/profile

with the following content:

    export JAVA_HOME=/usr/java/latest
    export JRE_HOME=$JAVA_HOME/jre
    export PATH=$JAVA_HOME/bin:$PATH
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Reload the profile to make it take effect:

    source /etc/profile

Install and configure the JDK on the other servers the same way as on node1 (a sketch of distributing and installing the RPM follows).

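A minimal sketch of pushing the RPM to the remaining nodes and installing it there over SSH (it assumes root SSH access to node2-node4, as used elsewhere in this guide; the /etc/profile changes are distributed later, in step 8 of section 4.5):

    # copy and install the JDK on the other nodes
    for host in node2 node3 node4; do
        ssh root@$host "mkdir -p /soft"
        scp /soft/jdk-7u65-linux-x64.rpm root@$host:/soft/
        ssh root@$host "rpm -ivh /soft/jdk-7u65-linux-x64.rpm"
    done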

4.5 Install Hadoop

On node1, extract the Hadoop archive into /usr/local/:

    tar xf /soft/hadoop-2.6.0.tar.gz -C /usr/local/
    ln -sv /usr/local/hadoop-2.6.0/ /usr/local/hadoop

1. Add Hadoop to the environment variables in /etc/profile:

    export HADOOP_HOME=/usr/local/hadoop-2.6.0
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

Next, edit the Hadoop configuration files.

1. The slaves file lists the hostnames that will act as DataNodes, one per line. It defaults to localhost, which is why a pseudo-distributed setup uses the same node as both NameNode and DataNode. In a fully distributed setup you can keep localhost or remove it; removing it makes the master node act only as the NameNode.

    vim /usr/local/hadoop-2.6.0/etc/hadoop/slaves

The file content is:

    node2
    node3
    node4

2. Edit /usr/local/hadoop/etc/hadoop/core-site.xml:

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://node1:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/hadoopdata/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>
    </configuration>

3. hdfs-site.xml: dfs.replication is normally set to 3.

    <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>node1:50090</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/hadoopdata/tmp/dfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/hadoopdata/tmp/dfs/data</value>
        </property>
    </configuration>

4. mapred-site.xml (it may need to be created first, since the distribution only ships mapred-site.xml.template; the rename command is shown after the configuration below). Configure it as follows:

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>node1:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>node1:19888</value>
        </property>
    </configuration>

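If mapred-site.xml does not exist yet, create it from the template that ships with the Hadoop 2.6.0 distribution:

    cp /usr/local/hadoop-2.6.0/etc/hadoop/mapred-site.xml.template /usr/local/hadoop-2.6.0/etc/hadoop/mapred-site.xml
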
5. Edit yarn-site.xml:

    <configuration>
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>node1</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    </configuration>

6. Create the directories Hadoop will use for its data:

    mkdir -pv /hadoopdata/tmp
    mkdir -p /hadoopdata/tmp/dfs/{name,data}
    chown -R spark:spark /hadoopdata/

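The DataNodes on node2-node4 need the same directory for dfs.datanode.data.dir; a minimal sketch of creating it remotely (an addition to the original steps, assuming root SSH access):

    # create the data directory on the slave nodes and hand it to the spark user
    for host in node2 node3 node4; do
        ssh root@$host "mkdir -p /hadoopdata/tmp/dfs/data && chown -R spark:spark /hadoopdata/"
    done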

7. Distribute Hadoop from node1 to the other nodes (deleting /usr/local/hadoop/share/doc/ first is recommended, since it only contains documentation and slows down the copy):

    scp -r /usr/local/hadoop-2.6.0/ root@node2:/usr/local/hadoop-2.6.0/
    # repeat for node3 and node4

8. Copy /etc/profile from node1 to the other servers and reload it there:

    scp /etc/profile root@node2:/etc/profile
    # repeat for node3 and node4, then on each node:
    source /etc/profile

9. Change the owner and group of the Hadoop installation directory (run this on every node, since the copy in step 7 was done as root):

    chown -R spark:spark /usr/local/hadoop-2.6.0/

10. Format HDFS:

    hdfs namenode -format

11. Start HDFS and YARN:

    start-dfs.sh
    start-yarn.sh

12. Check whether the Hadoop cluster came up correctly:

    hdfs dfsadmin -report

or open the NameNode web UI:

    http://node1:50070/dfshealth.html#tab-datanode

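Another quick check (not part of the original write-up) is listing the Java daemons on every node; with this layout, node1 should show NameNode, SecondaryNameNode, and ResourceManager, and node2-node4 should each show DataNode and NodeManager:

    # run from node1 as the spark user; relies on the passwordless SSH from 4.3
    # (jps is addressed by full path in case /etc/profile is not sourced over ssh)
    for host in node1 node2 node3 node4; do
        echo "== $host =="
        ssh spark@$host /usr/java/latest/bin/jps
    done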

4.6 Install Scala

Spark 1.6 is built against Scala 2.10.x, so scala-2.10.6.tgz is used here.

1. Extract Scala into /usr/local/:

    tar xf /soft/scala-2.10.6.tgz -C /usr/local/

2. Configure the environment variables:

    vim /etc/profile
    # add:
    export SCALA_HOME=/usr/local/scala-2.10.6
    export PATH=$SCALA_HOME/bin:$PATH
    # then reload
    source /etc/profile

Install Scala on the other servers the same way as on node1 (a sketch of copying the directory over follows).

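A minimal sketch of copying the already extracted Scala directory to the other nodes instead of repeating the extraction there (assumes root SSH access; /etc/profile is copied to the other nodes again in the next section):

    for host in node2 node3 node4; do
        scp -r /usr/local/scala-2.10.6/ root@$host:/usr/local/
    done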

4.7 Install Spark

1. On node1, extract Spark into /usr/local/:

    tar xf /soft/spark-1.6.0-bin-hadoop2.6.tgz -C /usr/local/

2. Create and edit the Spark configuration file spark-env.sh:

    cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-env.sh.template /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-env.sh

with the following content:

    export JAVA_HOME=/usr/java/latest
    export SCALA_HOME=/usr/local/scala-2.10.6
    export HADOOP_HOME=/usr/local/hadoop-2.6.0
    export HADOOP_CONF_DIR=/usr/local/hadoop-2.6.0/etc/hadoop
    export SPARK_MASTER_IP=node1
    export SPARK_WORKER_MEMORY=1g
    export SPARK_EXECUTOR_MEMORY=1g
    export SPARK_DRIVER_MEMORY=1g
    export SPARK_WORKER_CORES=1

3. Edit /usr/local/spark-1.6.0-bin-hadoop2.6/conf/slaves with the following content:

    node2
    node3
    node4

4. Create and edit /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf:

    cp /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf.template /usr/local/spark-1.6.0-bin-hadoop2.6/conf/spark-defaults.conf
    # configuration:
    spark.executor.extraJavaOptions     -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
    spark.eventLog.enabled              true
    spark.eventLog.dir                  hdfs://node1:9000/historyserverforSpark
    spark.yarn.historyServer.address    node1:18080
    spark.history.fs.logDirectory       hdfs://node1:9000/historyserverforSpark

Note: because Spark event logging is configured, the historyserverforSpark directory must exist in HDFS; if it does not, the history server process will not start. Create it first:

    hdfs dfs -mkdir -p /historyserverforSpark

5. Configure the system environment variables:

    vim /etc/profile
    # add:
    export SPARK_HOME=/usr/local/spark-1.6.0-bin-hadoop2.6
    export PATH=$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH
    # then reload
    source /etc/profile

6. Distribute Spark to the other nodes with scp:

    scp -r /usr/local/spark-1.6.0-bin-hadoop2.6/ root@node2:/usr/local/spark-1.6.0-bin-hadoop2.6/
    scp /etc/profile root@node2:/etc/profile
    # repeat for node3 and node4

7. Change the owner and group of the Spark directory (on every node):

    chown -R spark:spark /usr/local/spark-1.6.0-bin-hadoop2.6/

8. Start the Spark cluster:

    /usr/local/spark-1.6.0-bin-hadoop2.6/sbin/start-all.sh

Check that it started correctly (a verification sketch follows step 9).

9. Start the HistoryServer so that finished Spark applications are recorded:

    /usr/local/spark-1.6.0-bin-hadoop2.6/sbin/start-history-server.sh

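A quick way to confirm the cluster is up (an addition to the original steps): jps on node1 should now also show a Master and a HistoryServer process, and node2-node4 a Worker each; the standalone master and history server web UIs listen on their default ports, 8080 and 18080:

    # both UIs should respond once the daemons are up
    curl -s -o /dev/null http://node1:8080/  && echo "Spark master UI reachable"
    curl -s -o /dev/null http://node1:18080/ && echo "history server UI reachable"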

5. Run a test Spark job

    /usr/local/spark-1.6.0-bin-hadoop2.6/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://node1:7077 /usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-examples-1.6.0-hadoop2.6.0.jar 1000

The general form for submitting a Spark job is:

    ./bin/spark-submit \
      --class <main-class> \
      --master <master-url> \
      --deploy-mode <deploy-mode> \
      --conf <key>=<value> \
      ... # other options
      <application-jar> \
      [application-arguments]

1. --class: the entry point for your application (e.g. org.apache.spark.examples.SparkPi), i.e. the fully qualified main class (package plus class name).
2. --master: the master URL for the cluster (e.g. spark://23.195.26.187:7077).
3. --deploy-mode: whether to deploy the driver on a worker node (cluster) or locally as an external client (client); the default is client.
4. --conf: an arbitrary Spark configuration property in key=value form; wrap "key=value" in quotes if the value contains spaces.
5. application-jar: path to a bundled jar containing your application and all of its dependencies. The URL must be globally visible inside the cluster, for instance an hdfs:// path or a file:// path that is present on every node.
6. application-arguments: arguments passed to the main method of your main class, if any; this is how input parameters reach the program.

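A hedged example filling in those options for the SparkPi job used above (client deploy mode; the --conf value is only an illustration):

    /usr/local/spark-1.6.0-bin-hadoop2.6/bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master spark://node1:7077 \
      --deploy-mode client \
      --conf spark.executor.memory=1g \
      /usr/local/spark-1.6.0-bin-hadoop2.6/lib/spark-examples-1.6.0-hadoop2.6.0.jar 100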