64-bit Hadoop Fully Distributed Installation Tutorial


A Detailed Walkthrough of Installing hadoop-2.2.0 in Fully Distributed Mode on 64-bit Red Hat 6

                            Author: Ring

1   Prerequisites

- The machine must have an Internet connection.

- Main references:

  - http://blog.csdn.net/licongcong_0224/article/details/12972889

  - http://blog.csdn.net/bamuta/article/details/13506893

  - http://blog.csdn.net/bamuta/article/details/13506843

- Downloads:

  - Red Hat AS 6: server-6.0-x86_64-dvd.iso

  - VMware 10

  - JDK: jdk-7u51-linux-x64.tar.gz

  - Hadoop source: hadoop-2.2.0-src

  - Maven

- All software is installed under /usr/local, owned by hduser.

- The plan for the three-node cluster is shown below.

Note: this example runs on virtual machines. The host's IP is 192.168.0.10 and the gateway is 192.168.0.1; the VMs are on the same subnet as the host.

IP address      User/Password   Hostname       Role

192.168.0.105   hduser/passwd   hadoopmaster   namenode / secondary namenode / resourcemanager
192.168.0.104   hduser/passwd   slave1         datanode / nodemanager
192.168.0.103   hduser/passwd   slave2         datanode / nodemanager

2   Preparing the Operating System (using the master as an example; the other machines are the same)

2.1   Installing the Virtual Machine

- The installation itself is not described here. After installing, use bridged networking so that the VM is on the same subnet as the host.

  - If the installer reports that the CPU does not support virtualization, enable it in the BIOS. For Intel CPUs, the SecurAble tool can tell you whether the system supports it; various guides are available online.

- Install Red Hat AS 6 on the virtual machine.

2.2   Create the hduser Account (used to install and run Hadoop)

  - Configure sudoers:

$su root

$vim /etc/sudoers

Open sudoers and, after the line "## Allow root to run any commands anywhere", add:

hduser  ALL=(ALL)       ALL

 

    After this is set, remember to leave the root account with exit.
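A quick way to confirm the sudoers entry works (run as hduser; it should print root):

$sudo whoami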

2.3   Configuring the IP Address

$su root

    $vim /etc/sysconfig/network-scripts/ifcfg-eth0

        #Add the following entries

IPADDR=192.168.0.105

NETMASK=255.255.255.0

GATEWAY=192.168.0.1

ONBOOT=yes

USERCTL=no

BOOTPROTO=static

 

Save the file and restart the network service for the change to take effect:

$service network restart
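While still in the root shell, you can confirm that the static address took effect (assuming eth0 is the interface configured above):

$ifconfig eth0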

$exit

2.4   Changing the Hostname

$su root

$hostname hadoopmaster (or slave1 / slave2 on the slaves)

$vim /etc/sysconfig/network (e.g. on hadoopmaster set HOSTNAME=hadoopmaster)

2.5   Editing the hosts File

$su root

$vim /etc/hosts

192.168.0.105 hadoopmaster

192.168.0.104 slave1

192.168.0.103 slave2

$exit
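To check that name resolution works, each node should now be reachable by hostname, for example:

$ping -c 2 slave1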

(From here on, operate as the hduser user.)

2.6   Installing the JDK

$sudo tar zxvf ~/Downloads/jdk-7u51-linux-x64.tar.gz -C /usr/local

 

Set the environment variables:

$sudo vim /etc/profile (append at the end of the file)

export JAVA_HOME=/usr/local/jdk1.7.0_51

export JRE_HOME=/usr/local/jdk1.7.0_51/jre

export MAVEN_HOME=/usr/local/apache-maven-3.1.1

export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

$sudo update-alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_51/bin/java 300

$sudo update-alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_51/bin/javac 300

$sudo update-alternatives --config java

(the trailing 300 is the alternatives priority, which update-alternatives --install requires)
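A quick check that the JDK is picked up (the exact build number in the output depends on the JDK actually installed):

$java -version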

 

2.7   Configuring yum Repositories

$su root

$vim /etc/yum.conf

#Many articles online add these entries to "/etc/yum.repos.d/packagekit-media.repo" instead, but that did not seem to work.

    #Keep the original content of yum.conf; just append the repositories below.

 

[base]

name=CentOS-6-Base

baseurl=http://ftp.sjtu.edu.cn/centos/6.5/os/$basearch/

gpgcheck=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos-6

#released updates

[update]

name=CentOS-6-Updates

baseurl=http://ftp.sjtu.edu.cn/centos/6.5/updates/$basearch/

gpgcheck=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos-6

[addons]

name=CentOS-6-Addons

baseurl=http://ftp.sjtu.edu.cn/centos/6.5/addons/$basearch/

gpgcheck=0

enabled=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos-6

[extras]

name=CentOS-6-Extras

baseurl=http://ftp.sjtu.edu.cn/centos/6.5/extras/$basearch/

gpgcheck=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos-6

[centosplus]

name=CentOS-6-Plus

baseurl=http://ftp.sjtu.edu.cn/centos/6.5/centosplus/$basearch/

gpgcheck=0

enabled=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos-6

[contrib]

name=CentOS-6-Contrib

baseurl=http://ftp.sjtu.edu.cn/centos/6.5/contrib/$basearch/

gpgcheck=0

enabled=0

gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos-6

[dag]

name=Dag RPM Repository for RHEL6

baseurl=http://ftp.riken.jp/Linux/dag/redhat/el6/en/$basearch/dag/

enabled=0

gpgcheck=0

gpgkey=http://ftp.riken.jp/Linux/dag/packages/RPM-GPG-KEY.dag.txt

 

$rpm --import http://ftp.sjtu.edu.cn/centos/6.5/os/x86_64/RPM-GPG-KEY-CentOS-6
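To confirm the repositories are picked up, yum can list them (still as root):

$yum repolist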

3   Rebuilding Hadoop

Hadoop installed on 64-bit Linux frequently logs warnings such as "libhadoop.so.1.0.0 which might have disabled stack guard" because the native libraries shipped with the release are 32-bit, so Hadoop has to be recompiled by hand.

3.1   Required Packages

$yum install autoconf automake libtool cmake

$ yum install ncurses-devel

$ yum install openssl-devel

$ yum install gcc*

3.2   Installing protobuf

If protobuf is not installed, the build fails later with output like the following:

[INFO] --- hadoop-maven-plugins:2.2.0:protoc (compile-protoc) @ hadoop-common ---

[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program "protoc": error=2, No such file or directory

[ERROR] stdout: []

 

 

3.2.1    Installation Steps

Download: https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz

 

$sudo tar zxvf protobuf-2.5.0.tar.gz -C /usr/local/

$cd /usr/local/protobuf-2.5.0

Then run the following commands in order:

$./configure

$make

$make check

$make install

$sudo protoc --version (verify that the installation succeeded)

libprotoc 2.5.0

3.3   Installing Maven

Download from: http://maven.apache.org/download.cgi

 

$sudo tar zxvf ~/Downloads/apache-maven-3.1.1-bin.tar.gz -C /usr/local

$sudo vim /etc/profile (update the PATH entry as follows)

    export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$MAVEN_HOME/bin:$PATH

 

#The new environment variables take effect after logging out and back in, or after running source /etc/profile.
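After logging back in, a quick sanity check that both the JDK and Maven are on the PATH:

$mvn -version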

3.4   Building hadoop-2.2.0

In the hadoop-2.2.0-src source directory, run:

$mvn package -Pdist,native -DskipTests -Dtar

 

Build output:

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ............................... SUCCESS [6.600s]
[INFO] Apache Hadoop Project POM ........................ SUCCESS [3.974s]
[INFO] Apache Hadoop Annotations ........................ SUCCESS [9.878s]
[INFO] Apache Hadoop Assemblies ......................... SUCCESS [0.856s]
[INFO] Apache Hadoop Project Dist POM ................... SUCCESS [4.750s]
[INFO] Apache Hadoop Maven Plugins ...................... SUCCESS [8.720s]
[INFO] Apache Hadoop Auth ............................... SUCCESS [10.107s]
[INFO] Apache Hadoop Auth Examples ...................... SUCCESS [5.734s]
[INFO] Apache Hadoop Common ............................. SUCCESS [4:32.636s]
[INFO] Apache Hadoop NFS ................................ SUCCESS [29.700s]
[INFO] Apache Hadoop Common Project ..................... SUCCESS [0.090s]
[INFO] Apache Hadoop HDFS ............................... SUCCESS [6:15.394s]
[INFO] Apache Hadoop HttpFS ............................. SUCCESS [1:09.238s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SUCCESS [27.676s]
[INFO] Apache Hadoop HDFS-NFS ........................... SUCCESS [13.954s]
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [0.212s]
[INFO] hadoop-yarn ...................................... SUCCESS [0.962s]
[INFO] hadoop-yarn-api .................................. SUCCESS [1:48.066s]
[INFO] hadoop-yarn-common ............................... SUCCESS [1:37.543s]
[INFO] hadoop-yarn-server ............................... SUCCESS [4.301s]
[INFO] hadoop-yarn-server-common ........................ SUCCESS [29.502s]
[INFO] hadoop-yarn-server-nodemanager ................... SUCCESS [36.593s]
[INFO] hadoop-yarn-server-web-proxy ..................... SUCCESS [13.273s]
[INFO] hadoop-yarn-server-resourcemanager ............... SUCCESS [30.612s]
[INFO] hadoop-yarn-server-tests ......................... SUCCESS [4.374s]
[INFO] hadoop-yarn-client ............................... SUCCESS [14.115s]
[INFO] hadoop-yarn-applications ......................... SUCCESS [0.218s]
[INFO] hadoop-yarn-applications-distributedshell ........ SUCCESS [9.871s]
[INFO] hadoop-mapreduce-client .......................... SUCCESS [1.095s]
[INFO] hadoop-mapreduce-client-core ..................... SUCCESS [1:30.650s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ... SUCCESS [15.089s]
[INFO] hadoop-yarn-site ................................. SUCCESS [0.637s]
[INFO] hadoop-yarn-project .............................. SUCCESS [25.809s]
[INFO] hadoop-mapreduce-client-common ................... SUCCESS [45.919s]
[INFO] hadoop-mapreduce-client-shuffle .................. SUCCESS [14.693s]
[INFO] hadoop-mapreduce-client-app ...................... SUCCESS [39.562s]
[INFO] hadoop-mapreduce-client-hs ....................... SUCCESS [19.299s]
[INFO] hadoop-mapreduce-client-jobclient ................ SUCCESS [18.549s]
[INFO] hadoop-mapreduce-client-hs-plugins ............... SUCCESS [5.134s]
[INFO] Apache Hadoop MapReduce Examples ................. SUCCESS [17.823s]
[INFO] hadoop-mapreduce ................................. SUCCESS [12.726s]
[INFO] Apache Hadoop MapReduce Streaming ................ SUCCESS [19.760s]
[INFO] Apache Hadoop Distributed Copy ................... SUCCESS [33.332s]
[INFO] Apache Hadoop Archives ........................... SUCCESS [9.522s]
[INFO] Apache Hadoop Rumen .............................. SUCCESS [15.141s]
[INFO] Apache Hadoop Gridmix ............................ SUCCESS [15.052s]
[INFO] Apache Hadoop Data Join .......................... SUCCESS [8.621s]
[INFO] Apache Hadoop Extras ............................. SUCCESS [8.744s]
[INFO] Apache Hadoop Pipes .............................. SUCCESS [28.645s]
[INFO] Apache Hadoop Tools Dist ......................... SUCCESS [6.238s]
[INFO] Apache Hadoop Tools .............................. SUCCESS [0.126s]
[INFO] Apache Hadoop Distribution ....................... SUCCESS [1:20.132s]
[INFO] Apache Hadoop Client ............................. SUCCESS [18.820s]
[INFO] Apache Hadoop Mini-Cluster ....................... SUCCESS [2.151s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 29:07.811s
[INFO] Finished at: Thu Oct 24 09:43:18 CST 2013
[INFO] Final Memory: 78M/239M
[INFO] ------------------------------------------------------------------------
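If the build succeeds, the rebuilt 64-bit distribution normally ends up under hadoop-dist/target/ inside the source tree (with -Dtar, typically as hadoop-2.2.0.tar.gz). This is the usual Maven output location rather than something stated above, so confirm it on your own machine:

$ls hadoop-dist/target/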

4   Installing Hadoop

For the installation steps, see http://blog.csdn.net/licongcong_0224/article/details/12972889
(thanks to the original author). That article already explains the process in detail, so it is not repeated here. A few additions follow.
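As a quick orientation before following that article, a minimal sketch of the key configuration files it walks through is shown below. The hostnames follow the cluster plan above; the port number and replication factor are illustrative assumptions, not settings taken from the referenced article.

etc/hadoop/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoopmaster:9000</value>   <!-- port 9000 is an assumed, commonly used value -->
  </property>
</configuration>

etc/hadoop/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>   <!-- two datanodes (slave1, slave2) in this plan -->
  </property>
</configuration>

etc/hadoop/slaves:

slave1
slave2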

4.1   Setting Up Passwordless Local SSH Login

1.  Note that the .ssh directory's permissions should be 700:

$chmod 700 .ssh

2.  Files inside the .ssh folder should have permissions 600 or 644:

$chmod -R 600 .ssh/*
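For reference, a minimal sketch of the passwordless-login setup itself (run as hduser on hadoopmaster; the referenced article remains the authoritative walkthrough):

$ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa      (generate a key pair with an empty passphrase)
$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$chmod 600 ~/.ssh/authorized_keys
$ssh-copy-id hduser@slave1                     (repeat for slave2)
$ssh slave1                                    (should log in without asking for a password)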

   

4.2   Checks Before Starting the Services for the First Time

$ls -l /usr/local/hadoop-2.2.0/lib/native

    If the directory contains libhadoop.so.1.0.0 and libhdfs.so.0.0.0 but not the corresponding libhadoop.so and libhdfs.so symlinks, create them, otherwise startup will fail. The commands are:

    $cd /usr/local/hadoop-2.2.0/lib/native

    $ln -s libhadoop.so.1.0.0 libhadoop.so

    $ln -s libhdfs.so.0.0.0 libhdfs.so
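To double-check that the recompiled native library really is 64-bit, the file utility can be used; the exact wording varies by system, but the output should mention "ELF 64-bit":

$file /usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0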

4.3   Start-up Commands

They only need to be run on the master; a typical sequence is sketched below.
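For reference, the standard hadoop-2.2.0 start-up sequence, run as hduser from the Hadoop installation directory on hadoopmaster, typically looks like this (the HDFS format is done only once, before the very first start; jps simply lists the running Java daemons as a sanity check):

$bin/hdfs namenode -format        (first start only)
$sbin/start-dfs.sh
$sbin/start-yarn.sh
$jps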
