Installing a Hadoop 2.8 Cluster


Single-node Hadoop Installation

Install prerequisite packages

> sudo apt-get install ssh
> sudo apt-get install rsync

Install Hadoop

# The rest of this guide assumes Hadoop is installed at /opt/hadoop
> wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.8.2/hadoop-2.8.2.tar.gz
> tar -zxvf hadoop-2.8.2.tar.gz
> mv hadoop-2.8.2 /opt/hadoop
> cd /opt/hadoop

Cluster Setup

Assume three hosts (192.168.33.32, 192.168.33.33, and 192.168.33.34), where 32 is the master and 33 and 34 are slaves.

Set up passwordless SSH

# Generate an SSH key pair on slave 34 (do the same on the master and slave 33)
> ssh-keygen -t rsa
# Send the slave's public key to the master
> scp /home/ubuntu/.ssh/id_rsa.pub ubuntu@192.168.33.32:~/.ssh/id_rsa.pub.slave1
# On the master, append all public keys to authorized_keys
> cat ~/.ssh/id_rsa.pub* >> ~/.ssh/authorized_keys
# Push the master's authorized_keys out to each slave (repeat for 192.168.33.34)
> scp /home/ubuntu/.ssh/authorized_keys ubuntu@192.168.33.33:~/.ssh/
# Test SSH
> ssh 192.168.33.33
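To confirm passwordless login works to every slave before moving on, a quick loop helps (a minimal check; adjust the user and IPs to your hosts):

# Each slave's hostname should print without a password prompt
> for ip in 192.168.33.33 192.168.33.34; do ssh ubuntu@$ip hostname; done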

Set global environment variables

> sudo vim /etc/environment 

Add the following:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/jre
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
> source /etc/environment 
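A quick sanity check that the variables took effect in the current shell (assuming the JDK path above matches your install):

> echo $HADOOP_HOME
> $HADOOP_HOME/bin/hadoop version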

Create subdirectories

> cd /opt/hadoop
> mkdir tmp
> mkdir hdfs
> cd hdfs
> mkdir name
> mkdir tmp
> mkdir data

Configure core-site.xml

Edit Hadoop's core configuration file, core-site.xml, which sets the address and port of the HDFS master (the namenode).

> vim /opt/hadoop/etc/hadoop/core-site.xml 

Add the following configuration:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.33.32:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
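Once HADOOP_CONF_DIR points at this directory, the effective value can be checked with the bundled getconf tool (an optional sanity check):

> $HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS
# Expected output: hdfs://192.168.33.32:9000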

Configure hdfs-site.xml

> vim /opt/hadoop/etc/hadoop/hdfs-site.xml

Add the following configuration:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>192.168.33.32:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.33.32:50090</value>
    </property>
</configuration>

Note that with only two datanodes (33 and 34), dfs.replication set to 3 leaves every block under-replicated; a value of 2 matches this topology better.

Configure yarn-site.xml

> vim /opt/hadoop/etc/hadoop/yarn-site.xml

Add the following configuration:

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.33.32</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
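After the cluster is started (see the startup section below), NodeManager registration can be confirmed from the master (an optional check, assuming both slaves came up cleanly):

> $HADOOP_HOME/bin/yarn node -list
# Should list two RUNNING nodes, one per slave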

Configure mapred-site.xml

Edit Hadoop's MapReduce configuration file.

> mv /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml
> vim /opt/hadoop/etc/hadoop/mapred-site.xml

Add the following configuration:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Configure masters

> vim /opt/hadoop/etc/hadoop/masters

Add the master's IP address:

192.168.33.32

Configure slaves

> vim /opt/hadoop/etc/hadoop/slaves 

Add the slave IP addresses:

192.168.33.33
192.168.33.34

Distributing the Hadoop Directory

Distribute Hadoop

Copy the fully configured hadoop directory from the master to every slave server.

> scp -r /opt/hadoop ubuntu@192.168.33.33:/opt/
> scp -r /opt/hadoop ubuntu@192.168.33.34:/opt/
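Since rsync was installed earlier, it is also an option for re-distributing the directory after later config changes, as only changed files get transferred (a minimal sketch, same user and paths as above):

> rsync -az /opt/hadoop/ ubuntu@192.168.33.33:/opt/hadoop/
> rsync -az /opt/hadoop/ ubuntu@192.168.33.34:/opt/hadoop/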

Update the environment variables on the slaves

> sudo vim /etc/environment

Add:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/jre
export HADOOP_HOME=/opt/hadoop

> source /etc/environment

Startup

# Run from /opt/hadoop on the master
> rm -rf /tmp/hadoop*
> ./bin/hdfs namenode -format
> ./sbin/start-dfs.sh
> ./sbin/start-yarn.sh
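To verify the daemons came up, run jps (bundled with the JDK) on each host (exact process lists vary, but roughly):

# On the master: expect NameNode, SecondaryNameNode, and ResourceManager
> jps
# On each slave: expect DataNode and NodeManager
> ssh 192.168.33.33 jps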

Check the namenode web UI:
http://192.168.33.32:50070/
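As an end-to-end smoke test, the example jar that ships with the release can be submitted to YARN (a minimal sketch; the jar name matches the 2.8.2 tarball):

# Estimate pi with 2 map tasks and 10 samples each
> ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.jar pi 2 10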

Shutdown

> ./sbin/stop-dfs.sh
> ./sbin/stop-yarn.sh