Spark on Hadoop 2.0

Topic: installing Spark on YARN on Hadoop 2.0. Because the Hadoop, Spark, Scala, and JDK packages have interdependent version requirements, the following combination was chosen:
JDK package: jdk-8u151-linux-x64.rpm
Hadoop package: hadoop-2.6.0.tar.gz
Spark package: spark-1.6.2-bin-hadoop2.6.tgz
Scala package: scala-2.12.4.rpm
Python 3 package: Python-3.6.3.tgz
pip package: pip-9.0.1.tar.gz

Overview of the steps:
1. Install the operating system
2. Change the hostnames
3. Prepare the Python 3 environment
4. Adjust the script configuration
5. Upload the packages and scripts to one of the machines
6. Run the scripts


Detailed steps:
1. Operating system: CentOS 6.5 installed in VMware 9.0, accepting nearly all defaults: 1 GB of RAM, a 20 GB disk, and NAT networking with automatic address assignment. Three machines are needed in total.
2. Name the three machines master, slave1, and slave2, add a hadoop user on each, and upload all the scripts and packages to the /home/hadoop directory; a minimal sketch of the hostname setup follows.
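A minimal sketch of step 2 on the master node, assuming root access and the 192.168.220.153-155 addresses used by the scripts later on (repeat with slave1 and slave2 on the other two machines):

# hostname master                                                    # takes effect immediately
# sed -i 's/^HOSTNAME=.*/HOSTNAME=master/' /etc/sysconfig/network    # persists across reboots on CentOS 6
# useradd hadoop                                                     # creates the hadoop user and /home/hadoop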
3. Install Python 3 on all three machines

Python 3 can be installed either by running the installpython.sh script directly or by following the detailed steps below by hand;

installpython.sh

#!/bin/bash
#yum -y install vim wget ntp bind-utils net-tools nmap  python-setuptools make gcc gcc-c++ zlib-devel readline* openssl-devel
#cd /home/hadoop
#tar -zxvf pip-9.0.1.tar.gz
#cd /home/hadoop/pip-9.0.1
#python setup.py install
#cd /home/hadoop
#tar -zxvf Python-3.6.3.tgz
cd /home/hadoop/Python-3.6.3/
./configure
make
make install
mv /usr/bin/python /usr/bin/python.bak
ln -s /usr/local/bin/python3 /usr/bin/python
#sed -i 's%#!/usr/bin/python%#!/usr/bin/python2.6%g' /usr/bin/yum
pip install numpy scipy matplotlib scikit-learn pycrypto paramiko
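One way to run it on each node (a sketch; it assumes you are root, that the packages are already in /home/hadoop, and that the commented-out prerequisite lines — the yum installs, the pip build, and the extraction of the Python source — have already been executed, as they are in the manual steps below):

# cd /home/hadoop
# bash installpython.sh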


Details:
The passwordless-login setup can be written as a shell script or as a Python script; the Python script relies on the existing paramiko module. Since paramiko 2.x supports Python 2.7 and Python 3.4+ environments, the Python 2.6.6 that ships with CentOS 6.5 does not meet paramiko's requirements and has to be upgraded; here it is upgraded to the latest release, Python 3.6.3.
A procedure for upgrading Python is already documented online: http://blog.csdn.net/wwwdaan5com/article/details/78218277
The requirements of this experiment differ slightly from that write-up, so a few small adjustments are made.

All of the following commands are executed as the root user.
1. Install common packages
Because this was a minimal install, almost nothing is present and everything has to be installed by hand. (If your system is not a minimal install, the yum update can be skipped; on CentOS 6.5 it took about an hour to update all of these packages.)
# yum -y update    # optional; skip it if you do not need to update the system

# yum -y install vim wget ntp bind-utils net-tools nmap    # install some commonly used packages
2. Install the Python helper tool easy_install
# yum -y install python-setuptools
3. Install the Python helper tool pip
# easy_install pip    # this command failed to install pip; the latest package has to be downloaded from the pip site and installed by hand
pip's site: https://pypi.python.org/pypi/pip
# useradd hadoop    # add the hadoop user
Place pip-9.0.1.tar.gz in the /home/hadoop directory
# tar -zxvf pip-9.0.1.tar.gz
# cd pip-9.0.1
# python setup.py install
For more on how to use pip, see http://www.ttlsa.com/python/how-to-install-and-use-pip-ttlsa/
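A quick sanity check that pip is now on the PATH (a sketch; the exact "from" path in the output depends on where setuptools installed it):

# pip --version
pip 9.0.1 from ... (python 2.6)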


4. Update the build tools; without them, make may fail while compiling Python.
# yum -y install make gcc gcc-c++
5. Install zlib-devel; without it, errors occur during make install.
# yum -y install zlib-devel
6. Install the readline packages; without them, the arrow keys do not work in the interactive Python prompt.
# yum -y install readline*
7. Install openssl-devel; without it, errors occur when installing packages such as numpy and scipy.
# yum -y install openssl-devel
8. Download and install Python 3.6.3 from the official site https://www.python.org/downloads/release/python-363/
# wget https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tgz
# tar -zxvf Python-3.6.3.tgz
# cd Python-3.6.3/
# ./configure
# make
# make install
9. Move the original python binary aside and let python point at Python 3.6 instead of Python 2.6
# mv /usr/bin/python /usr/bin/python.bak
# ln -s /usr/local/bin/python3 /usr/bin/python
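A quick check that the symlink now points at the new interpreter (expected output shown as an assumption; build details will differ):

# python -V
Python 3.6.3
# /usr/bin/python2.6 -V
Python 2.6.6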
10. Adjust the following files, otherwise yum will stop working
# vim /usr/bin/yum
# vim /usr/libexec/urlgrabber-ext-down
Note: in both files, change #!/usr/bin/python to #!/usr/bin/python2.6, then save and exit.
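If you prefer not to edit them interactively, the same change can be scripted (a sketch along the lines of the commented-out sed in installpython.sh above; it only touches the interpreter path on line 1):

# sed -i '1s%/usr/bin/python$%/usr/bin/python2.6%' /usr/bin/yum
# sed -i '1s%/usr/bin/python$%/usr/bin/python2.6%' /usr/libexec/urlgrabber-ext-down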
11. Install commonly used packages
# pip install numpy scipy matplotlib scikit-learn
12. Install the paramiko module
pip install pycrypto paramiko 

Installation is complete; test it:
[root@master Python-3.6.3]# python3
Python 3.6.3 (default, Nov 20 2017, 00:16:35) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko
>>> 
If the import succeeds, the paramiko module has been installed.
For how to use paramiko, see the official documentation: http://docs.paramiko.org/en/2.4/
Basic usage is covered in this blog post: http://www.jb51.net/article/97655.htm
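For orientation, a minimal paramiko sketch in the same style as the scripts further down (the IP address, user, and password are the example values used throughout this article):

#!/usr/bin/python3
import paramiko

# connect to a node and run a single command over SSH
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys automatically
ssh.connect("192.168.220.153", 22, "root", "123456")
stdin, stdout, stderr = ssh.exec_command("hostname")
print(stdout.read().decode().strip())                      # e.g. "master"
ssh.close()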

4. Edit the IP addresses and hostnames in the script files so that they match your own hostnames and IP addresses.

5. Confirm that the packages and scripts have been uploaded to the /home/hadoop directory of one of the machines; master is a good choice.

The upload contains:
1. Script files
These are remotecontrol.py, which handles file transfer and remote command execution, and installsparkonyarn.py, which performs the installation locally on each node.
The idea is that the local machine first sends the packages and scripts to every node, and then the local installation script is executed on each node.
remotecontrol.py handles:
1. passwordless-login setup and the /etc/hosts file, and transfers all the files to each node
installsparkonyarn.py installs:
1. the JDK package jdk-8u151-linux-x64.rpm
2. the Hadoop package hadoop-2.6.0.tar.gz
3. the Spark package spark-1.6.2-bin-hadoop2.6.tgz
4. the Scala package scala-2.12.4.rpm


remotecontrol.py

#!/usr/bin/python3
import paramiko
import os

# Log in to each machine remotely and generate an SSH key
def sshkey():
    for i in range(153, 156):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ipstr = "192.168.220." + str(i)
        ssh.connect(ipstr, 22, "root", "123456")
        stdin, stdout, stderr = ssh.exec_command("ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa")
        stdin, stdout, stderr = ssh.exec_command("hostname")
        stdoutstr = str(stdout.readlines())
        hostnamestr = stdoutstr[2:len(stdoutstr)-4]
        etchostsstr = '\n' + ipstr + '\t' + hostnamestr
        fo1 = open('/etc/hosts', 'a+')
        fo1.write(etchostsstr)
        fo1.close()
        ssh.close()

# Fetch each machine's public key file over SFTP and save it locally
def getfile():
    for i in range(153, 156):
        ipstr = "192.168.220." + str(i)
        t = paramiko.Transport((ipstr, 22))
        t.connect(username="root", password="123456")
        sftp = paramiko.SFTPClient.from_transport(t)
        remotepath = '/root/.ssh/id_rsa.pub'
        localpath = "/root/.ssh/id_rsa.pub-" + str(i)
        sftp.get(remotepath, localpath)
        t.close()

# Append every public key to the local authorized_keys file
def catkey():
    for i in range(153, 156):
        filestr = "/root/.ssh/id_rsa.pub-" + str(i)
        fo1 = open(filestr, 'r+')
        context = fo1.read()
        fo1.close()
        fo2 = open('/root/.ssh/authorized_keys', 'a+')
        fo2.write(context)
        fo2.close()

# Note: do not transfer files the local machine already has back to itself,
# otherwise they would be replaced with empty documents.
# Send the local authorized_keys and hosts files, plus the script and packages, to the other machines
def putfile():
    for i in range(154, 156):
        ipstr = "192.168.220." + str(i)
        t = paramiko.Transport((ipstr, 22))
        t.connect(username="root", password="123456")
        sftp = paramiko.SFTPClient.from_transport(t)
        sftp.put('/root/.ssh/authorized_keys', '/root/.ssh/authorized_keys')
        sftp.put('/etc/hosts', '/etc/hosts')
        sftp.put('/home/hadoop/installsparkonyarn.py', '/home/hadoop/installsparkonyarn.py')
        sftp.put('/home/hadoop/jdk-8u151-linux-x64.rpm', '/home/hadoop/jdk-8u151-linux-x64.rpm')
        sftp.put('/home/hadoop/hadoop-2.6.0.tar.gz', '/home/hadoop/hadoop-2.6.0.tar.gz')
        sftp.put('/home/hadoop/spark-1.6.2-bin-hadoop2.6.tgz', '/home/hadoop/spark-1.6.2-bin-hadoop2.6.tgz')
        sftp.put('/home/hadoop/scala-2.12.4.rpm', '/home/hadoop/scala-2.12.4.rpm')
        t.close()

# Run the installation command on every node
def control():
    for i in range(153, 156):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ipstr = "192.168.220." + str(i)
        ssh.connect(ipstr, 22, "root", "123456")
        stdin, stdout, stderr = ssh.exec_command("python3 /home/hadoop/installsparkonyarn.py")
        stdoutstr = str(stdout.readlines())
        print(stdoutstr)
        os.chdir("/home/hadoop")
        fo1 = open('stdoutstr', 'a+')
        fo1.write(stdoutstr)
        fo1.close()
        ssh.close()

sshkey()
getfile()
catkey()
putfile()
control()
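A sketch of how it is meant to be launched, assuming it was saved as /home/hadoop/remotecontrol.py on master together with all the packages; it only needs to run on that one machine, since control() then executes installsparkonyarn.py on every node over SSH:

# cd /home/hadoop
# python3 remotecontrol.py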

installsparkonyarn.py

#!/usr/bin/python3
# -*- coding: UTF-8 -*-
import os

# install the JDK
def javainstall():
    os.chdir("/home/hadoop")
    os.system("rpm -ivh /home/hadoop/jdk-8u151-linux-x64.rpm >> /home/hadoop/rpmjavalog")
    javastr = '''export JAVA_HOME=/usr/java/jdk1.8.0_151
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin'''
    fo1 = open("/root/.bashrc", "a+")
    fo1.write(javastr)
    fo1.close()
    os.system("source /root/.bashrc")
    print('Jdk installed successfully!')

# install Hadoop
def hadoopinstall():
    os.system("tar -zxvf hadoop-2.6.0.tar.gz >> /home/hadoop/tarhadooplog")
    os.chdir("/home/hadoop/hadoop-2.6.0/etc/hadoop")
    os.system("sed -i 's#export JAVA_HOME=${JAVA_HOME}#export JAVA_HOME=/usr/java/jdk1.8.0_151#g' hadoop-env.sh")
    os.system("sed -i 's%# export JAVA_HOME=/home/y/libexec/jdk1.6.0/%export JAVA_HOME=/usr/java/jdk1.8.0_151%g' yarn-env.sh")
    os.system("mv core-site.xml core-site.xml_bakinit")
    coresitestr = '''<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-2.6.0/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>'''
    fo1 = open("core-site.xml", 'a+')
    fo1.write(coresitestr)
    fo1.close()
    os.system("mkdir -p /home/hadoop/hadoop-2.6.0/tmp")
    os.system("mv hdfs-site.xml hdfs-site.xml_bakinit")
    hdfssitestr = '''<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/hdfs/name</value>
        <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50070</value>
    </property>
</configuration>'''
    fo2 = open("hdfs-site.xml", 'a+')
    fo2.write(hdfssitestr)
    fo2.close()
    os.system("mkdir -p /home/hadoop/hdfs/name")
    os.system("mkdir -p /home/hadoop/hdfs/data")
    mapredsitestr = '''<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1536</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>1536</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2560M</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>200</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>100</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
        <description>for JMC: -XX:+UnlockCommercialFeatures -XX:+FlightRecorder</description>
    </property>
    <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>50</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>'''
    fo3 = open("mapred-site.xml", 'a+')
    fo3.write(mapredsitestr)
    fo3.close()
    os.system("mv yarn-site.xml yarn-site.xml_bakinit")
    yarnsitestr = '''<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>12288</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>12288</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4.2</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/home/hadoop/yarnnodemanager/localdir</value>
    </property>
</configuration>'''
    fo4 = open("yarn-site.xml", 'a+')
    fo4.write(yarnsitestr)
    fo4.close()
    os.system("mkdir -p /home/hadoop/yarnnodemanager/localdir")
    os.system('sed -i "s/localhost/slave1\\nslave2/g" slaves')
    bashrcstr = '''export HADOOP_HOME=/home/hadoop/hadoop-2.6.0
export HADOOP_PREFIX=/home/hadoop/hadoop-2.6.0
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
export HADOOP_YARN_HOME=/home/hadoop/hadoop-2.6.0'''
    fo5 = open("/root/.bashrc", 'a+')
    fo5.write(bashrcstr)
    fo5.close()
    os.system("source /root/.bashrc")
    print('Hadoop installed successfully!')

# install Spark and Scala
def sparkinstall():
    os.chdir("/home/hadoop/")
    os.system("tar -zxvf spark-1.6.2-bin-hadoop2.6.tgz >> tarsparklog")
    os.system("cd /home/hadoop/spark-1.6.2-bin-hadoop2.6/conf")
    os.system("rpm -ivh scala-2.12.4.rpm >> rpmscalalog ")
    sparkenvstr = '''export SCALA_HOME=/usr/share/scala
export JAVA_HOME=/usr/java/jdk1.8.0_151
export HADOOP_HOME=/home/hadoop/hadoop-2.6.0
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
SPARK_MASTER_IP=master
SPARK_LOCAL_DIRS=/home/hadoop/spark-1.6.2-bin-hadoop2.6
SPARK_DRIVER_MEMORY=1G'''
    os.chdir("/home/hadoop/spark-1.6.2-bin-hadoop2.6/conf")
    os.system("cp spark-env.sh.template spark-env.sh")
    fo6 = open("spark-env.sh", "a+")
    fo6.write(sparkenvstr)
    fo6.close()
    os.system("cp slaves.template slaves")
    os.system('sed -i "s/localhost/slave1\\nslave2\\n/g" slaves')
    os.system("useradd hadoop")
    os.system("chown -R hadoop /home/hadoop/")
    os.system("chgrp -R hadoop /home/hadoop/")
    os.system('service iptables stop')
    os.system('chkconfig iptables off')
    os.system('setenforce 0')
    os.system("sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config")
    os.system("sed -i 's/#   StrictHostKeyChecking ask/StrictHostKeyChecking no/g' /etc/ssh/ssh_config ")
    os.system("service sshd restart")
    print('Spark installed successfully!')
    print("installsparkonyarn finished successfully!!!")

javainstall()
hadoopinstall()
sparkinstall()


6. Running the scripts completes the passwordless-login setup, the Hadoop installation, the Spark installation, the Scala installation, and operating-system settings such as the firewall.
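The scripts stop at installation. A sketch of the usual next steps on master to bring the cluster up and check Spark on YARN (paths follow the layout configured above; the exact name of the examples jar inside the Spark 1.6.2 package may differ):

# cd /home/hadoop/hadoop-2.6.0
# bin/hdfs namenode -format                  # one-time format of HDFS on master
# sbin/start-dfs.sh
# sbin/start-yarn.sh
# bin/hdfs dfsadmin -report                  # slave1 and slave2 should show up as datanodes
# cd /home/hadoop/spark-1.6.2-bin-hadoop2.6
# bin/spark-submit --master yarn --deploy-mode cluster --class org.apache.spark.examples.SparkPi lib/spark-examples-1.6.2-hadoop2.6.0.jar 10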