Hadoop Distributed Cluster Setup

Hadoop version: hadoop-0.20.205.0-1.i386.rpm
JDK version: jdk-6u35-linux-i586-rpm.bin


Environment: 64-bit Linux
master: 10.10.30.64
slave1: 10.10.30.65
slave2: 10.10.30.68

Overall steps:
1. Edit the hostname/IP mappings in /etc/hosts (after cloning the VMs, re-edit and redistribute the file if the entries no longer match).
2. Create an ordinary user account (hadoop); Hadoop will run under this account.
3. Install the JDK as root.
4. Set the environment variables.
5. Install Hadoop and edit its configuration files.
6. Clone the virtual machine twice, to serve as slave1 and slave2.
7. Configure SSH so that every node can log in to every other node, and to itself, without a password.
8. Format the NameNode as the ordinary (hadoop) user.
9. Start the cluster and check that everything is running normally.

------------------------ 1. Map host IP addresses in /etc/hosts ------------------------
------------------------ 2. Add a hadoop user to run Hadoop under ----------------------
------------------------ 3. Install the JDK and set environment variables --------------

[test@master 桌面]$ su - root
Password:
[root@master ~]# useradd hadoop
[root@master ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is too short
BAD PASSWORD: it is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
10.10.30.64 master
10.10.30.65 slave1
10.10.30.68 slave2
[root@master ~]# cd /home/test/
[root@master test]# ls
hadoop-0.20.205.0-1.i386.rpm  公共的  视频  文档  音乐
jdk-6u35-linux-i586-rpm.bin   模板    图片  下载  桌面
[root@master test]# chmod 744 jdk-6u35-linux-i586-rpm.bin    # make the .bin installer executable
[root@master test]# ./jdk-6u35-linux-i586-rpm.bin
Unpacking...
Checksumming...
Extracting...
UnZipSFX 5.50 of 17 February 2002, by Info-ZIP (Zip-Bugs@lists.wku.edu).
  inflating: jdk-6u35-linux-i586.rpm
  inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
  inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
Installing JavaDB
Preparing...                ########################################### [100%]
   1:sun-javadb-common      ########################################### [ 17%]
   2:sun-javadb-core        ########################################### [ 33%]
   3:sun-javadb-client      ########################################### [ 50%]
   4:sun-javadb-demo        ########################################### [ 67%]
   5:sun-javadb-docs        ########################################### [ 83%]
   6:sun-javadb-javadoc     ########################################### [100%]
Java(TM) SE Development Kit 6 successfully installed.

Product Registration is FREE and includes many benefits:
* Notification of new versions, patches, and updates
* Special offers on Oracle products, services and training
* Access to early releases and documentation

Product and system data will be collected. If your configuration
supports a browser, the JDK Product Registration form will
be presented. If you do not register, none of this information
will be saved. You may also register your JDK later by
opening the register.html file (located in the JDK installation
directory) in a browser.

For more information on what data Registration collects and
how it is managed and used, see:
http://java.sun.com/javase/registration/JDKRegistrationPrivacy.html

Press Enter to continue.....

Done.
[root@master test]# vim /etc/profile
[root@master test]# ls /usr/java/jdk1.6.0_35/
bin        lib          register.html        THIRDPARTYLICENSEREADME.txt
COPYRIGHT  LICENSE      register_ja.html
include    man          register_zh_CN.html
jre        README.html  src.zip
[root@master test]# tail -3 /etc/profile    # the environment variables appended to /etc/profile
export JAVA_HOME=/usr/java/jdk1.6.0_35
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
[root@master test]#
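Before moving on it is worth confirming that the JDK is actually picked up from /etc/profile. A minimal sanity check, assuming the paths set above:

# reload the profile in the current shell and confirm the JDK is visible
source /etc/profile
echo $JAVA_HOME     # should print /usr/java/jdk1.6.0_35
java -version       # should report Java version 1.6.0_35
which java          # should resolve to a path under $JAVA_HOME/bin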
----------------------- Install Hadoop and edit the configuration files -----------------------
# After this step is finished, clone the virtual machine twice, to serve as slave1 and slave2.
# If the IP addresses and /etc/hosts entries on the two clones no longer match after boot, change them to the actual IPs.

[root@master test]# ls
hadoop-0.20.205.0-1.i386.rpm            公共的
jdk-6u35-linux-i586.rpm                 模板
jdk-6u35-linux-i586-rpm.bin             视频
sun-javadb-client-10.6.2-1.1.i386.rpm   图片
sun-javadb-common-10.6.2-1.1.i386.rpm   文档
sun-javadb-core-10.6.2-1.1.i386.rpm     下载
sun-javadb-demo-10.6.2-1.1.i386.rpm     音乐
sun-javadb-docs-10.6.2-1.1.i386.rpm     桌面
sun-javadb-javadoc-10.6.2-1.1.i386.rpm
[root@master test]# rpm -ivh hadoop-0.20.205.0-1.i386.rpm
Preparing...                ########################################### [100%]
   1:hadoop                 ########################################### [100%]
[root@master test]# cd /etc/hadoop/
[root@master hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml      slaves
configuration.xsl           hdfs-site.xml          ssl-client.xml.example
core-site.xml               log4j.properties       ssl-server.xml.example
fair-scheduler.xml          mapred-queue-acls.xml  taskcontroller.cfg
hadoop-env.sh               mapred-site.xml
hadoop-metrics2.properties  masters
[root@master hadoop]# vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_35
[root@master hadoop]# vim core-site.xml
[root@master hadoop]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/opt/hadoop/tmp</value>            (the tmp directory must be created in advance)
        </property>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://10.10.30.64:9000</value>    (note: use 10.10.30.64 here, or the hadoop-eclipse plugin will fail to connect later)
        </property>
        <property>
                <name>dfs.name.dir</name>
                <value>/opt/hadoop/name</value>
        </property>
</configuration>
(Note: if hadoop.tmp.dir is not configured, the default temporary directory is /tmp/hadoop-hadoop. That directory is wiped on every reboot, so the NameNode would have to be formatted again each time; otherwise errors follow.)
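The directories named in core-site.xml do not exist yet. A sketch of the preparation step, run as root on the master before cloning (the paths simply mirror the values configured above):

# create the directories referenced by hadoop.tmp.dir and dfs.name.dir
# and hand them over to the hadoop user
mkdir -p /opt/hadoop/tmp /opt/hadoop/name
chown -R hadoop:hadoop /opt/hadoop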
[root@master hadoop]# vim hdfs-site.xml
[root@master hadoop]# cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/opt/hadoop/hdfs_data</value>
        </property>
</configuration>
(Note: the directory must already exist; create it in advance!)
[root@master hadoop]# vim mapred-site.xml     (in later Hadoop versions this file is superseded by yarn-site.xml)
[root@master hadoop]# cat mapred-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
        <property>
                <name>mapred.job.tracker</name>
                <value>hdfs://10.10.30.64:9001</value>
        </property>
        <property>
                <name>mapred.system.dir</name>
                <value>/opt/hadoop/mapred_system</value>
        </property>
        <property>
                <name>mapred.local.dir</name>
                <value>/opt/hadoop/mapred_local</value>
        </property>
</configuration>
(Note: use the IP here, not localhost, otherwise Eclipse will not be able to connect!)
[root@master hadoop]# cat masters     (edit the masters and slaves files in advance; they define the master/slave roles)
master
[root@master hadoop]# cat slaves
slave1
slave2
[root@master hadoop]#
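Likewise, dfs.data.dir, mapred.system.dir and mapred.local.dir must exist on every node before the daemons start. One way to prepare them (a sketch; doing this on the master before cloning means the clones inherit the directories):

# create the DataNode and MapReduce working directories and give them to the hadoop user
mkdir -p /opt/hadoop/hdfs_data /opt/hadoop/mapred_system /opt/hadoop/mapred_local
chown -R hadoop:hadoop /opt/hadoop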
------------------------ Switch to the hadoop user and set up passwordless SSH login ------------------------
[hadoop@master ~]$ ssh-keygen -t dsa    # this step must also be repeated on both slave nodes
Generating public/private dsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_dsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
6f:88:68:8a:d6:c7:b0:c7:e2:8b:b7:fa:7b:b4:a1:56 hadoop@master
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|                 |
|                 |
|        S        |
|   . E . o       |
|  . @ + . o      |
| o.X B   .       |
|ooB*O            |
+-----------------+
[hadoop@master ~]$ cd .ssh/
[hadoop@master .ssh]$ ls
id_dsa  id_dsa.pub
[hadoop@master .ssh]$ cp id_dsa.pub authorized_keys
# the public key file must be renamed authorized_keys
# edit authorized_keys and append the contents of the id_dsa.pub generated on each of the two slave nodes
[hadoop@master .ssh]$ vim authorized_keys
[hadoop@master .ssh]$ exit
logout
[test@master .ssh]$ su - root
Password:
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ls
[root@master hadoop]# cd .ssh/
[root@master .ssh]# ls
authorized_keys  id_dsa  id_dsa.pub
# as root, copy authorized_keys into /home/hadoop/.ssh on both slave nodes
# root is not prompted for a password here because passwordless login had already been set up for root as well
[root@master .ssh]# scp authorized_keys slave1:/home/hadoop/.ssh/
authorized_keys                                                100% 1602     1.6KB/s   00:00    
[root@master .ssh]# scp authorized_keys slave2:/home/hadoop/.ssh/
authorized_keys                                                100% 1602     1.6KB/s   00:00    
[root@master .ssh]#
# Once the key is in place, the three machines can ssh to one another as the hadoop user without a password.
# Note: the very first login still prompts (to confirm the host key); after that it no longer does.
# Be sure to log in between every pair of machines, and from each machine to itself, at least once.
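Before formatting anything, it helps to confirm that every pair of machines really does log in without a prompt (and to accept the host keys on the first connection). A minimal check, run as the hadoop user on master, slave1 and slave2 in turn:

# each ssh should print the remote hostname without asking for a password
for host in master slave1 slave2; do
    ssh "$host" hostname
done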
------------------------------- Format the NameNode and start Hadoop ---------------------------
# Note: turn off the firewall on all three machines for testing.
[hadoop@master ~]$ /usr/bin/hadoop namenode -format
12/09/01 16:52:24 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/10.10.30.64
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.205.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-205 -r 1179940; compiled by 'hortonfo' on Fri Oct  7 06:19:16 UTC 2011
************************************************************/
12/09/01 16:52:24 INFO util.GSet: VM type       = 32-bit
12/09/01 16:52:24 INFO util.GSet: 2% max memory = 2.475 MB
12/09/01 16:52:24 INFO util.GSet: capacity      = 2^19 = 524288 entries
12/09/01 16:52:24 INFO util.GSet: recommended=524288, actual=524288
12/09/01 16:52:24 INFO namenode.FSNamesystem: fsOwner=hadoop
12/09/01 16:52:24 INFO namenode.FSNamesystem: supergroup=supergroup
12/09/01 16:52:24 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/09/01 16:52:24 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/09/01 16:52:24 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/09/01 16:52:24 INFO namenode.NameNode: Caching file names occuring more than 10 times 
12/09/01 16:52:24 INFO common.Storage: Image file of size 112 saved in 0 seconds.
12/09/01 16:52:25 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
12/09/01 16:52:25 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/10.10.30.64
************************************************************/
[hadoop@master ~]$ /usr/bin/start-all.sh
# If the startup messages look like the following, the daemons launched normally
starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave1.out
master: starting secondarynamenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-jobtracker-master.out
slave2: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave2.out
slave1: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave1.out
# However, jps shows no Hadoop processes. There are two likely causes:
# 1. The firewall was not turned off.
# 2. If the firewall is already off and it still behaves like this, reboot.
[hadoop@master ~]$ /usr/java/jdk1.6.0_35/bin/jps
28499 Jps
[root@master ~]# iptables -F
[root@master ~]# exit
logout
[hadoop@master ~]$ /usr/bin/start-all.sh 
starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave1.out
master: starting secondarynamenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-jobtracker-master.out
slave2: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave2.out
slave1: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave1.out
[hadoop@master ~]$ /usr/java/jdk1.6.0_35/bin/jps 
30630 Jps
--------------------------- Normal after a reboot ---------------------------
--------------------------- master node ---------------------------
[hadoop@master ~]$ /usr/bin/start-all.sh 
starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave1.out
master: starting secondarynamenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-jobtracker-master.out
slave2: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave2.out
slave1: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave1.out
[hadoop@master ~]$ /usr/java/jdk1.6.0_35/bin/jps 
3388 JobTracker
3312 SecondaryNameNode
3159 NameNode
3533 Jps
------------------------ slave1 ---------------------------------
[hadoop@master ~]$ ssh slave1
Last login: Sat Sep  1 16:51:48 2012 from slave2
[hadoop@slave1 ~]$ su - root
Password:
[root@slave1 ~]# iptables -F
[root@slave1 ~]# setenforce 0
[root@slave1 ~]# exit
logout
[hadoop@slave1 ~]$ /usr/java/jdk1.6.0_35/bin/jps 
3181 TaskTracker
3107 DataNode
3227 Jps
-------------------------- slave2 ------------------------------
[hadoop@master ~]$ ssh slave2
Last login: Sat Sep  1 16:52:02 2012 from slave2
[hadoop@slave2 ~]$ su - root
Password:
[root@slave2 ~]# iptables -F
[root@slave2 ~]# setenforce 0
[root@slave2 ~]# exit
logout
[hadoop@slave2 ~]$ /usr/java/jdk1.6.0_35/bin/jps 
3165 DataNode
3297 Jps
3241 TaskTracker
[hadoop@slave2 ~]$
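With the expected processes visible in jps on all three nodes, a quick functional check of HDFS (a sketch using standard Hadoop 0.20 commands, run as the hadoop user on the master) confirms that the DataNodes have registered with the NameNode:

hadoop dfsadmin -report     # should report 2 live datanodes (slave1 and slave2)
hadoop fs -mkdir /test      # create a directory in HDFS
hadoop fs -ls /             # list the HDFS root to confirm it is readable and writable

The NameNode web UI at http://master:50070 and the JobTracker UI at http://master:50030 show the same information in a browser.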
--------------------------------------------------------------
Problem: the NodeManager on the slave nodes starts, but its log file shows that the slave cannot reach the master and keeps retrying the connection. (This issue was hit on a newer, YARN-based Hadoop release, where mapred-site.xml has been superseded by yarn-site.xml as noted above; the ResourceManager listens on port 8031 for NodeManager registration.)

2016-08-04 09:40:03,188 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:04,189 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:05,189 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:06,190 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:07,191 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:08,192 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:09,192 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:10,193 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:11,194 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:12,194 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

Solution: add the following configuration to yarn-site.xml:

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>

Restart Hadoop and check the NodeManager log file again.
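To apply the fix, restart the YARN daemons and watch the NodeManager log again. A sketch assuming a Hadoop 2.x layout with the sbin scripts on the PATH and logs under $HADOOP_HOME/logs (adjust paths to your installation):

# on the master: restart YARN so the new ResourceManager addresses take effect
stop-yarn.sh
start-yarn.sh

# on a slave: the retry messages against 0.0.0.0:8031 should stop
tail -f $HADOOP_HOME/logs/yarn-*-nodemanager-*.log

# on the master: list the NodeManagers that have registered with the ResourceManager
yarn node -list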


Reference: http://blog.csdn.net/chen_jp/article/details/7933103
