Installing Hadoop 2.2.0 on CentOS 6.4

1. Install the JDK
http://blog.csdn.net/u013619834/article/details/38894649
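The linked post covers the details. As a minimal sketch, assuming the JDK was unpacked to /usr/local/jdk1.7.0_45 (adjust to your actual version and path), the environment setup mirrors the Maven step below:

echo "export JAVA_HOME=/usr/local/jdk1.7.0_45" >> /etc/profile.d/app.sh
echo "export PATH=\$JAVA_HOME/bin:\$PATH" >> /etc/profile.d/app.sh
source /etc/profile
java -version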

2. Install Maven
wget http://apache.fayea.com/apache-mirror/maven/maven-3/3.2.3/binaries/apache-maven-3.2.3-bin.tar.gz
tar zxvf apache-maven-3.2.3-bin.tar.gz
mv apache-maven-3.2.3 /usr/local
Add the environment variables:
echo "export MAVEN_HOME=/usr/local/apache-maven-3.2.3" >> /etc/profile.d/app.sh
echo "export PATH=\$MAVEN_HOME/bin:\$PATH" >> /etc/profile.d/app.sh
source /etc/profile
mvn --version

3. Install protobuf
yum -y install gcc gcc-c++ make
Download protobuf-2.5.0.tar.gz from https://code.google.com/p/protobuf/downloads/list, then build it:

tar zxvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local/protobuf
make
make install
echo "export PROTOC_HOME=/usr/local/protobuf" >> /etc/profile.d/app.sh
echo "export PATH=\$PROTOC_HOME/bin:\$PATH" >> /etc/profile.d/app.sh
source /etc/profile
protoc --version
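If the build and install succeeded, this should print:

libprotoc 2.5.0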

You can merge the two export PATH lines in /etc/profile.d/app.sh into one, as shown below.
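For example, the merged /etc/profile.d/app.sh could read:

export MAVEN_HOME=/usr/local/apache-maven-3.2.3
export PROTOC_HOME=/usr/local/protobuf
export PATH=$MAVEN_HOME/bin:$PROTOC_HOME/bin:$PATH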

4. Install other dependencies
yum -y install cmake openssl-devel ncurses-devel


5. Build and install Hadoop from source

wget http://mirrors.devlib.org/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz
tar zxvf hadoop-2.2.0-src.tar.gz

cd hadoop-2.2.0-src/hadoop-common-project/hadoop-auth

vim pom.xml

Note: the Hadoop 2.0.6-alpha and 2.2.0 sources contain a bug (see https://issues.apache.org/jira/browse/HADOOP-10110). The fix is as follows:
Add the following below line 55 of pom.xml:
    <dependency>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty-util</artifactId>
      <scope>test</scope>
    </dependency>

vim /usr/local/apache-maven-3.2.3/conf/settings.xml
Maven's default (overseas) repositories may be unreachable, so configure a mirror inside China first. Add the following inside <mirrors></mirrors>, leaving the existing entries untouched:
    <mirror>
      <id>nexus-osc</id>
      <mirrorOf>*</mirrorOf>
      <name>Nexusosc</name>
      <url>http://maven.oschina.net/content/groups/public/</url>
    </mirror>

cd ../..
mvn package -DskipTests -Pdist,native -Dtar


Inspect the build output
ls hadoop-dist/target
cp hadoop-dist/target/hadoop-2.2.0.tar.gz  /usr/local/src
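To confirm the native libraries were actually built (the main reason to compile from source on 64-bit systems), you can inspect them with file; the path assumes the default dist layout, and on an x86_64 build it should report a 64-bit ELF shared object:

file hadoop-dist/target/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0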

Installing the distributed cluster

Set the hostname
vim /etc/sysconfig/network
In that file, set HOSTNAME to master, slave1, slave2, or slave3 according to the node's role, then apply it for the current session (run the matching command on each node):

hostname master
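On master, for example, /etc/sysconfig/network would then read:

NETWORKING=yes
HOSTNAME=master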

vim /etc/hosts
10.200.3.151 slave1
10.200.3.152 slave2
10.200.3.153 slave3
10.200.3.154 master


Create the hadoop user (on every node)
useradd hadoop
passwd hadoop


Set up passwordless SSH login for the hadoop user
su - hadoop
ssh-keygen -t rsa    # accept the defaults; leave the passphrase empty
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@master
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@slave1
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@slave2
ssh-copy-id -i ~/.ssh/id_rsa.pub  hadoop@slave3
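To verify that passwordless login works, each of the following should print the remote hostname without asking for a password (the first connection to each host will still ask you to confirm its key):

for h in master slave1 slave2 slave3; do ssh $h hostname; done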

Create the Hadoop data directories (as root, on every node)
mkdir -p /hadoop/dfs/name
mkdir -p /hadoop/dfs/data
mkdir -p /hadoop/temp
chown -R hadoop:hadoop /hadoop


Copy the Hadoop distribution to /home/hadoop
tar zxvf hadoop-2.2.0.tar.gz
mv hadoop-2.2.0 /home/hadoop
chown -R hadoop:hadoop /home/hadoop/hadoop-2.2.0

Edit the Hadoop configuration files in
/home/hadoop/hadoop-2.2.0/etc/hadoop

If JAVA_HOME is already set in /etc/profile, the following two files need no changes:
vim hadoop-env.sh
vim yarn-env.sh
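Otherwise, set JAVA_HOME explicitly near the top of both files; a one-line sketch, using the JDK path assumed in step 1:

export JAVA_HOME=/usr/local/jdk1.7.0_45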

vim slaves
Add:
slave1
slave2
slave3


vim core-site.xml
Add:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/temp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>


vim hdfs-site.xml
Add:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>


cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
Add:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>


vim yarn-site.xml
Add:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
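If xmllint (from libxml2) is available, it offers a quick sanity check for XML typos in the four files before anything is started:

xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml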



After configuring, copy the hadoop-2.2.0 directory to the other three nodes:
su - hadoop
scp -r hadoop-2.2.0 hadoop@slave1:~
scp -r hadoop-2.2.0 hadoop@slave2:~
scp -r hadoop-2.2.0 hadoop@slave3:~
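Note that the slaves also need the JDK and the same /etc/hosts entries as master. Assuming root SSH access between the nodes (not configured above), the hosts file could be pushed like this, repeated for slave2 and slave3:

scp /etc/hosts root@slave1:/etc/hosts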


Check the Hadoop version:
/home/hadoop/hadoop-2.2.0/bin/hadoop version

Format the NameNode (run once, on master)
/home/hadoop/hadoop-2.2.0/bin/hdfs namenode -format
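If formatting succeeds, the output should include a line roughly like the following (exact wording varies by version):

INFO common.Storage: Storage directory /hadoop/dfs/name has been successfully formatted.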

Start HDFS
/home/hadoop/hadoop-2.2.0/sbin/start-dfs.sh
jps
At this point, jps on master should show the NameNode and SecondaryNameNode processes, and jps on the slaves should show a DataNode process.

Start YARN
/home/hadoop/hadoop-2.2.0/sbin/start-yarn.sh
jps
master should show a ResourceManager process, and the slaves should show NodeManager processes.

Check the cluster status (with all three slaves up, the report should show three live DataNodes)
/home/hadoop/hadoop-2.2.0/bin/hdfs dfsadmin -report

View the block composition of files
/home/hadoop/hadoop-2.2.0/bin/hdfs fsck / -files -blocks

View the HDFS web UI (on master)
http://10.200.3.154:50070/dfshealth.jsp

View the ResourceManager web UI (on master)
http://10.200.3.154:8088/cluster


HDFS commands
Create a directory
./bin/hadoop fs -mkdir  /aaa

List a directory
./bin/hadoop fs -ls /


Upload a file
./bin/hadoop fs -put ~/test.txt /aaa

Download a file
./bin/hadoop fs -get /aaa/test.txt bbb.txt

View a file
./bin/hadoop fs -cat /aaa/test.txt

Delete a file
./bin/hadoop fs -rm /aaa/test.txt


Run a MapReduce test
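wordcount needs an existing input directory in HDFS, so create one and upload some text first (test.txt is just whatever sample file you have on hand):

./bin/hadoop fs -mkdir /in
./bin/hadoop fs -put ~/test.txt /in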

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /in /out
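When the job finishes, the results land in /out; the reduce output file is conventionally named part-r-00000:

./bin/hadoop fs -ls /out
./bin/hadoop fs -cat /out/part-r-00000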


Reference
http://blog.csdn.net/zhoudetiankong/article/details/16983337
