Detailed Guide to Installing Hadoop 1.2.1


I. Environment Preparation
1. Install VMware, then create three Linux virtual machines inside VMware (I used Red Hat).
2. Configure the virtual machines:

Hostname    IP                Memory    Disk
master      192.168.10.200    500 MB    8 GB
slave1      192.168.10.201    500 MB    8 GB
slave2      192.168.10.202    500 MB    8 GB

3. Download the Hadoop 1.2.1 installation package, hadoop-1.2.1-bin.tar.gz.

II. Installation
Note: steps marked "all three hosts" must be performed on every machine; "master node" means the operation is performed only on the mdw host.
1. Disable the firewall (all three hosts). For a learning setup it is fine to turn it off entirely; in a production environment you would open the required ports instead.
service iptables stop    # stops the firewall service now; it will start again after a reboot
chkconfig iptables off   # disables the service at boot; takes effect after a reboot
Running both commands together means the firewall is off immediately and stays off, with no reboot required.
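To confirm the firewall is actually down and disabled at boot (a quick sanity check, assuming the RHEL/CentOS-style init scripts used above):

service iptables status      # should report that iptables is not running
chkconfig --list iptables    # every runlevel should show off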

2. Disable SELinux (all three hosts).
In /etc/selinux/config, change the SELINUX= line to SELINUX=disabled.
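A minimal sketch of making the change from the shell, assuming the stock /etc/selinux/config layout; setenforce 0 only drops to permissive mode for the current session, while the config edit takes effect on the next reboot:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0    # permissive immediately, no reboot needed
getenforce      # verify the current mode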

3. Edit the hosts file (all three hosts).
Add or modify the following entries in /etc/hosts:
192.168.10.200 master
192.168.10.201 slave1
192.168.10.202 slave2

Since I had already modified the hosts file when installing Greenplum, I simply appended the new names to the existing entries:
192.168.10.200 mdw master
192.168.10.201 sdw1 slave1
192.168.10.202 sdw2 slave2

Greenplum installation guide: http://blog.csdn.net/gnail_oug/article/details/46945283

After adding the entries, you can verify them with ping, e.g. ping slave1 to test whether the slave1 node is reachable.
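A minimal sketch of appending the entries and checking all three names in one pass (run as root; the hostnames are the ones used throughout this guide):

cat >> /etc/hosts <<'EOF'
192.168.10.200 master
192.168.10.201 slave1
192.168.10.202 slave2
EOF

for h in master slave1 slave2; do
    ping -c 1 "$h"
done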

4. Set up the Java environment (all three hosts).
Upload the JDK installer, make it executable, and run it:
[root@mdw temp]# ./jdk-6u45-linux-i586-rpm.bin


Verify the JDK installation:
[root@mdw temp]# java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) Client VM (build 20.45-b01, mixed mode, sharing)

5. Create the hadoop user and set its password (all three hosts).
[root@mdw temp]# useradd hadoop
[root@mdw temp]# passwd hadoop


6. Set up passwordless SSH login (all three hosts).
      (1) Log in as the hadoop user on each of the three hosts and generate an SSH key pair:
[hadoop@mdw ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): (press Enter)
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): (press Enter)
Enter same passphrase again: (press Enter)
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
02:68:05:12:5b:b2:8e:53:e8:d0:0d:85:b8:77:ad:49 hadoop@mdw
The key's randomart image is:
+--[ RSA 2048]----+
|+o++o            |
|.O.=             |
|=.= o.           |
|*o. E..          |
|o+ o o. S        |
| .  o  .         |
|                 |
|                 |
|                 |
+-----------------+


This creates a hidden .ssh directory under the hadoop user's home directory.

        (2) Configure passwordless SSH between the hosts.
On each host, append the contents of its public key id_rsa.pub to the authorized_keys file, then send the authorized_keys file to the other two hosts:
[hadoop@mdw ~]$ cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
[hadoop@mdw ~]$ scp ~/.ssh/authorized_keys slave1:~/.ssh/
After this is done, the authorized_keys file on every host must contain the public keys of all three hosts.
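The single scp above only covers one direction. A minimal sketch of collecting and redistributing all three keys from master (run as the hadoop user on master; you will be prompted for the hadoop password on the slaves until the exchange is complete):

# pull the slaves' public keys into master's authorized_keys
ssh slave1 'cat ~/.ssh/id_rsa.pub' >> ~/.ssh/authorized_keys
ssh slave2 'cat ~/.ssh/id_rsa.pub' >> ~/.ssh/authorized_keys

# push the merged file back out so all three hosts hold the same three keys
scp ~/.ssh/authorized_keys slave1:~/.ssh/
scp ~/.ssh/authorized_keys slave2:~/.ssh/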

       (3) On each of the three hosts, change the permissions of the authorized_keys file to 600:
[hadoop@mdw .ssh]$ chmod 600 authorized_keys 
       (4) Verify passwordless SSH login:
[hadoop@mdw ~]$ ssh slave1
[hadoop@mdw ~]$ ssh slave2
From any one of the hosts you should now be able to log in to the other two without entering a password.


7. Configure the JDK environment variables for the hadoop user (master host).
Add the following to the .bashrc file in the hadoop user's home directory:

export JAVA_HOME=/usr/java/jdk1.6.0_45
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib

Send the .bashrc file to the other two hosts:
[hadoop@mdw ~]$ scp .bashrc slave1:~
[hadoop@mdw ~]$ scp .bashrc slave2:~
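To apply the variables in the current session and confirm they resolve (a quick check; the path matches the JDK installed in step 4):

source ~/.bashrc
echo $JAVA_HOME    # should print /usr/java/jdk1.6.0_45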


8. Upload and extract the Hadoop installation package (master host).
[hadoop@mdw temp]$ cp /mnt/cdrom/hadoop-1.2.1-bin.tar.gz .
[hadoop@mdw temp]$ tar -xzvf hadoop-1.2.1-bin.tar.gz 
If the file was uploaded and extracted as root, change the owner of the extracted directory to hadoop:
# chown -R hadoop:hadoop hadoop-1.2.1

9. Edit hadoop-env.sh to configure the Hadoop runtime environment (master host).
In /home/hadoop/hadoop-1.2.1/conf/hadoop-env.sh, find the line
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun 
uncomment it, and change it to
export JAVA_HOME=/usr/java/jdk1.6.0_45

Likewise, find the line
# export HADOOP_HEAPSIZE=2000
uncomment it, and change it to
export HADOOP_HEAPSIZE=100
If your machine has enough memory you can leave this setting alone, or adjust the value to suit the memory available on your machine.

10. Edit the core-site.xml file.
Edit /home/hadoop/hadoop-1.2.1/conf/core-site.xml and add the following properties (inside the existing <configuration> element):
<property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
</property>
<property>
         <name>hadoop.tmp.dir</name>
         <value>/home/hadoop/tmp</value>
</property>
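For reference, a minimal sketch of what the finished file looks like, written here via a shell heredoc (the shipped file already contains an empty <configuration> element; this simply shows the end result):

cat > /home/hadoop/hadoop-1.2.1/conf/core-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/tmp</value>
        </property>
</configuration>
EOF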


11. Edit the hdfs-site.xml file.
Edit /home/hadoop/hadoop-1.2.1/conf/hdfs-site.xml and add the following properties (inside the existing <configuration> element):

<property>
        <name>dfs.data.dir</name>
        <value>/data/hadoop</value>
</property>
<property>
        <name>dfs.replication</name>
        <value>2</value>
</property>


Note: on every DataNode, the specified directory /data/hadoop must be created in advance and its owner changed to hadoop:
[root@sdw1 data]# chown hadoop:hadoop hadoop/
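A minimal sketch that creates the directory and sets its owner in one step (run as root on each DataNode):

mkdir -p /data/hadoop
chown -R hadoop:hadoop /data/hadoop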

12. Edit the mapred-site.xml file.
Edit /home/hadoop/hadoop-1.2.1/conf/mapred-site.xml and add the following properties (inside the existing <configuration> element):
<property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
</property>


13. Edit the masters and slaves files.
Change the contents of /home/hadoop/hadoop-1.2.1/conf/masters to:
master
Change the contents of /home/hadoop/hadoop-1.2.1/conf/slaves to:
slave1
slave2
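A minimal sketch of writing both files from the shell (run as the hadoop user on master; the paths match the layout used above):

echo master > /home/hadoop/hadoop-1.2.1/conf/masters
printf 'slave1\nslave2\n' > /home/hadoop/hadoop-1.2.1/conf/slaves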



14. Copy the configured Hadoop directory from the master node to the slave1 and slave2 nodes:
[hadoop@mdw ~]$ scp -r hadoop-1.2.1/ slave1:~
[hadoop@mdw ~]$ scp -r hadoop-1.2.1/ slave2:~

15. Format the HDFS distributed file system:
[hadoop@mdw ~]$ ./hadoop-1.2.1/bin/hadoop namenode -format
15/05/26 06:58:43 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = mdw/192.168.1.200
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf ... branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
15/05/26 06:58:43 INFO util.GSet: Computing capacity for map BlocksMap
15/05/26 06:58:43 INFO util.GSet: VM type       = 32-bit
15/05/26 06:58:43 INFO util.GSet: 2.0% max memory = 101384192
15/05/26 06:58:43 INFO util.GSet: capacity      = 2^19 = 524288 entries
15/05/26 06:58:43 INFO util.GSet: recommended=524288, actual=524288
15/05/26 06:58:43 INFO namenode.FSNamesystem: fsOwner=hadoop
15/05/26 06:58:43 INFO namenode.FSNamesystem: supergroup=supergroup
15/05/26 06:58:43 INFO namenode.FSNamesystem: isPermissionEnabled=true
15/05/26 06:58:43 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
15/05/26 06:58:43 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
15/05/26 06:58:43 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
15/05/26 06:58:43 INFO namenode.NameNode: Caching file names occuring more than 10 times 
15/05/26 06:58:43 INFO common.Storage: Image file /home/hadoop/tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
15/05/26 06:58:43 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/hadoop/tmp/dfs/name/current/edits
15/05/26 06:58:43 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/hadoop/tmp/dfs/name/current/edits
15/05/26 06:58:43 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
15/05/26 06:58:43 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mdw/192.168.1.200
************************************************************/


16. Start the cluster and check the daemons:
[hadoop@mdw ~]$ ./hadoop-1.2.1/bin/start-all.sh 
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-mdw.out
slave2: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-sdw2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-sdw1.out
master: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-mdw.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-mdw.out
slave2: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-sdw2.out
slave1: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-sdw1.out


On the master node:
[hadoop@mdw ~]$ jps
16841 Jps
16739 JobTracker
16658 SecondaryNameNode
16488 NameNode


On a slave node:
[hadoop@sdw1 logs]$ jps
31014 Jps
30835 DataNode
30929 TaskTracker
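Beyond jps, you can ask the NameNode for a cluster report to confirm that both DataNodes have registered (a quick check using the standard Hadoop 1.x admin command, run as the hadoop user on master):

./hadoop-1.2.1/bin/hadoop dfsadmin -report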


17. View the cluster status in a web browser:
http://master:50070
http://master:50030
If your local machine cannot resolve the master hostname, replace master with the IP address:
http://192.168.1.200:50070
http://192.168.1.200:50030
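If the pages do not load, a quick way to confirm the web interfaces are listening on master (50070 is the NameNode UI, 50030 the JobTracker UI; netstat may need root to show process names):

netstat -tlnp | grep -E '50070|50030'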

18. Stop the Hadoop cluster:
[hadoop@mdw ~]$ ./hadoop-1.2.1/bin/stop-all.sh  
stopping jobtracker
slave2: stopping tasktracker
slave1: stopping tasktracker
stopping namenode
slave2: stopping datanode
slave1: stopping datanode
master: stopping secondarynamenode








