CentOS 7 Hadoop-2.5.1 Distributed Environment Setup



System environment: CentOS 7, three machines

IP addresses:
master 192.168.192.11
slave1 192.168.192.12
slave2 192.168.192.13

I. CentOS 7 Environment Setup

Set the hostnames

master$ hostnamectl set-hostname master
slave1$ hostnamectl set-hostname slave1
slave2$ hostnamectl set-hostname slave2
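Each machine should now report its new name:

$ hostname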

Grant administrator (sudo) privileges

$ su root
$ chmod -v u+w /etc/sudoers

Below the line root ALL=(ALL) ALL, add:

hadoop    ALL=(ALL)       ALL

$ chmod -v u-w /etc/sudoers

Edit the /etc/hosts file

$ sudo vi /etc/hosts

Change it to the following:

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.192.11 master
192.168.192.12 slave1
192.168.192.13 slave2
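A quick check that name resolution works (from any of the nodes):

$ ping -c 1 slave1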

Synchronize the cluster time
Set the time zone

$ timedatectl set-timezone Asia/Shanghai

Set the time:

$ timedatectl set-time "YYYY-MM-DD HH:MM:SS"

Note: replace YYYY-MM-DD HH:MM:SS with an actual timestamp in that format, e.g. 2016-01-07 11:11:11.
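Setting the clock by hand will drift over time; for ongoing synchronization across the cluster, one common option is the stock ntp package. A minimal sketch, assuming all three nodes can reach pool.ntp.org (run on each machine):

$ sudo yum -y install ntp ntpdate
$ sudo ntpdate pool.ntp.org               # one-shot sync before starting the daemon
$ sudo systemctl enable ntpd
$ sudo systemctl start ntpd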

II. Installing the JDK

1. Check whether a bundled JDK is present

$ java -version

2. If present, remove the bundled OpenJDK

$ rpm -qa | grep java
javapackages-tools-3.4.1-6.el7_0.noarch
python-javapackages-3.4.1-6.el7_0.noarch
java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
tzdata-java-2015a-1.el7.noarch
java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
$ sudo yum -y remove java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
$ sudo yum -y remove java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64

3. Install the JDK
Extract the JDK into /home/hadoop/java, so that the path matches the JAVA_HOME set below (assuming the tarball was downloaded to the hadoop user's home directory):

$ mkdir -p ~/java
$ tar -zxvf jdk-7u75-linux-x64.gz -C ~/java

Edit /etc/profile

$ sudo vi /etc/profile

Add the following:

export JAVA_HOME=/home/hadoop/java/jdk1.7.0_75
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin

Apply the changes:

$ source /etc/profile
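Verify that the new environment takes effect:

$ echo $JAVA_HOME
$ java -version   # should now report 1.7.0_75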

4. Configure passwordless SSH login
Disable the firewall

$ systemctl stop firewalld.service      # stop the firewall
$ systemctl disable firewalld.service   # disable it at boot
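Confirm the firewall is stopped:

$ systemctl status firewalld.service    # should report inactive (dead)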

Generate a key pair

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

Append the public key to the authorized-keys file

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys 

Fix the permissions on .ssh and the authorized-keys file

$ chmod 700 .ssh/
$ chmod 600 .ssh/authorized_keys

Note: perform all of the above steps on each of the three machines.

Enable passwordless login among the three machines
Send each machine's public key to the other two (run from ~/.ssh):

master$ scp authorized_keys hadoop@slave1:~/.ssh/authorized_keys_from_master
master$ scp authorized_keys hadoop@slave2:~/.ssh/authorized_keys_from_master
slave1$ scp authorized_keys hadoop@master:~/.ssh/authorized_keys_from_slave1
slave1$ scp authorized_keys hadoop@slave2:~/.ssh/authorized_keys_from_slave1
slave2$ scp authorized_keys hadoop@master:~/.ssh/authorized_keys_from_slave2
slave2$ scp authorized_keys hadoop@slave1:~/.ssh/authorized_keys_from_slave2

On each machine, merge the received public keys (in ~/.ssh):

master$ cat authorized_keys_from_slave1 >> authorized_keys
master$ cat authorized_keys_from_slave2 >> authorized_keys
slave1$ cat authorized_keys_from_master >> authorized_keys
slave1$ cat authorized_keys_from_slave2 >> authorized_keys
slave2$ cat authorized_keys_from_slave1 >> authorized_keys
slave2$ cat authorized_keys_from_master >> authorized_keys

Note: at this point the three machines can log in to one another without passwords.
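To verify, each node should now reach the other two without a password prompt (the first connection still asks to confirm the host key):

master$ ssh slave1 hostname
master$ ssh slave2 hostname

Where ssh-copy-id is available, it can replace the scp/cat steps above: run ssh-copy-id hadoop@<host> from each node for each of the other two hosts.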

III. Hadoop-2.5.1 Environment Configuration

Extract the archive

$ tar -zxvf hadoop-2.5.1.tar.gz

Create the required directories; note that they must be created on all machines:

$ mkdir dfs/
$ mkdir dfs/name
$ mkdir dfs/data
$ mkdir tmp/

Edit the configuration files (all located under hadoop-2.5.1/etc/hadoop/).

Config file 1: hadoop-env.sh
Set the JAVA_HOME value to /home/hadoop/java/jdk1.7.0_75

Config file 2: yarn-env.sh
Set the JAVA_HOME value to /home/hadoop/java/jdk1.7.0_75
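In both files, the relevant line should end up reading:

export JAVA_HOME=/home/hadoop/java/jdk1.7.0_75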

Config file 3: slaves (this file lists all slave nodes)
Write:

master
slave1
slave2

Note: master is listed here as well, so it will also run a DataNode and NodeManager alongside the master daemons, which matches the jps output shown later.

Config file 4: core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.master.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.master.groups</name>
        <value>*</value>
    </property>
</configuration>

Config file 5: hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

Config file 6: mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>

Config file 7: yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>

Copy the configured Hadoop directory from master to the other two nodes

$ scp -r /home/hadoop/hadoop-2.5.1 hadoop@slave1:~/
$ scp -r /home/hadoop/hadoop-2.5.1 hadoop@slave2:~/

Edit /etc/profile on each machine

$ sudo vi /etc/profile

Modify or add the following:

export HADOOP_HOME=/home/hadoop/hadoop-2.5.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

$ source /etc/profile

Format the NameNode

$ hadoop namenode -format
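Note: in Hadoop 2.x the hadoop namenode command is deprecated in favor of the hdfs front end; the equivalent form is:

$ hdfs namenode -format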

Start the cluster

$ cd /home/hadoop/hadoop-2.5.1/sbin
$ ./start-dfs.sh

jps processes on each machine:

master:
3630 DataNode
3968 Jps
3514 NameNode
3838 SecondaryNameNode

slave1 and slave2:
3253 DataNode
3332 Jps

$ ./start-yarn.sh

jps processes on each machine:

master:
3630 DataNode
4130 NodeManager
4018 ResourceManager
3514 NameNode
4272 Jps
3838 SecondaryNameNode

slave1 and slave2:
3511 Jps
3253 DataNode
3399 NodeManager

At this point the Hadoop-2.5.1 environment setup is complete; visit http://master:8088 to view information about each node in the YARN web UI.
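As a final smoke test, the example jar bundled with the release can run a small MapReduce job against the cluster (jar path as shipped in the 2.5.1 tarball):

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar pi 10 100

Running hdfs dfsadmin -report also gives a quick summary of the live DataNodes.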
