Hadoop-2.6.0 Cluster Setup


 



For building the virtual machines, see my earlier article: http://blog.csdn.net/baolibin528/article/details/43893063


1. Software and IP settings:

1.1 JDK and Hadoop versions:

hadoop-2.6.0.tar.gz

jdk-8u25-linux-x64.gz

1.2 Common username:

All three machines use the same user, hadoop00.
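If that user does not exist yet, a minimal sketch for creating it (run as root on each machine; the commands are not in the original article):

# hypothetical setup step, not shown in the original
useradd hadoop00
passwd hadoop00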

1.3 Cluster IPs and hostnames:

192.168.1.2 hadoop00
192.168.1.3 hadoop11
192.168.1.4 hadoop22

1.4 Network:

Gateway: 192.168.1.1

Netmask: 255.255.255.0
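A minimal sketch of a matching static-IP configuration for hadoop00, assuming a CentOS-style system with interface eth0 (the file path and interface name are assumptions; adjust to your distribution):

# /etc/sysconfig/network-scripts/ifcfg-eth0  (assumed path; example for hadoop00)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.2
NETMASK=255.255.255.0
GATEWAY=192.168.1.1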

 

2. Configure SSH:

Set up passwordless SSH login on each of the three machines.

hadoop00 is used as the example here:

2.1 Generate a key pair with no passphrase (press Enter at the prompts):

[root@hadoop00 ~]# ssh-keygen -t rsa
2.2 Append id_rsa.pub to the authorized keys:

[root@hadoop00 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
2.3 Log in to the local machine without a password:

[root@hadoop00 ~]# ssh hadoop00
Last login: Tue Feb 24 19:06:51 2015 from hadoop00
[root@hadoop00 ~]#

Do the same configuration on hadoop11 and hadoop22; the hadoop11 commands are sketched below.
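For example, on hadoop11 the same two commands are:

[root@hadoop11 ~]# ssh-keygen -t rsa
[root@hadoop11 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys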


3. Configure /etc/hosts:

Edit the file:

[root@hadoop00 ~]# vim /etc/hosts

Contents:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.2 hadoop00
192.168.1.3 hadoop11
192.168.1.4 hadoop22


Apply the same configuration on all three machines; one way is to copy the file out from hadoop00, as sketched below.
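A sketch of that shortcut (at this point scp will still prompt for the root password, since mutual key login is only set up in section 4):

# run on hadoop00; not shown in the original article
scp /etc/hosts hadoop11:/etc/hosts
scp /etc/hosts hadoop22:/etc/hosts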


4. Mutual passwordless login:

Let hadoop00 log in to hadoop11 without a password:

[root@hadoop00 ~]# ssh-copy-id -i hadoop11
hadoop00 can now log in to hadoop11 without a password:

[root@hadoop00 ~]# ssh hadoop11
Last login: Tue Feb 24 20:03:48 2015 from 192.168.1.1
[root@hadoop11 ~]#

Let hadoop22 log in to hadoop11 without a password:

[root@hadoop22 ~]# ssh-copy-id -i hadoop11

[root@hadoop22 ~]# ssh hadoop11
Last login: Tue Feb 24 20:06:12 2015 from hadoop00
[root@hadoop11 ~]#

Let hadoop11 log in to hadoop00 and hadoop22 without a password. The authorized_keys file on hadoop11 now holds all three public keys (its own plus the two added by ssh-copy-id), so copying it back distributes every key to every machine:

[root@hadoop11 ~]# scp /root/.ssh/authorized_keys hadoop00:/root/.ssh/

[root@hadoop11 ~]# scp /root/.ssh/authorized_keys hadoop22:/root/.ssh/


hadoop11 can now log in to hadoop00 without a password:

[root@hadoop11 ~]# ssh hadoop00
Last login: Tue Feb 24 20:01:45 2015 from 192.168.1.1
[root@hadoop00 ~]#
 

hadoop11 can now log in to hadoop22 without a password:

[root@hadoop11 ~]# ssh hadoop22
Last login: Tue Feb 24 20:04:37 2015 from 192.168.1.1
[root@hadoop22 ~]#

5. Install the JDK

Unpack the JDK and Hadoop, rename the directories, and fix the permissions (a sketch of these commands follows the listing below):

[root@hadoop00 local]# ll
total 347836
drwxr-xr-x. 9 root root      4096 Nov 14 05:20 hadoop
drwxr-xr-x. 8 root root      4096 Sep 18 08:44 jdk
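The unpack-and-rename commands themselves are not shown above; a minimal sketch, assuming the tarballs were downloaded to /usr/local:

# assumed locations; adjust paths to where you downloaded the archives
cd /usr/local
tar -zxvf jdk-8u25-linux-x64.gz        # unpacks to jdk1.8.0_25
mv jdk1.8.0_25 jdk
tar -zxvf hadoop-2.6.0.tar.gz          # unpacks to hadoop-2.6.0
mv hadoop-2.6.0 hadoop
chown -R root:root jdk hadoop          # matches the root:root ownership in the listing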

Configure the environment variables:

[root@hadoop00 local]# vim /etc/profile


Contents:

export JAVA_HOME=/usr/local/jdk
export JRE_HOME=/usr/local/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
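The Hadoop commands later in this article are run from /usr/local/hadoop with relative paths (bin/..., sbin/...), so nothing more is strictly needed; if you would rather call hadoop and hdfs from anywhere, one optional addition (not part of the original profile) is:

# optional convenience, not in the original /etc/profile
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH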


Apply the changes:

[root@hadoop00 local]# source /etc/profile


Check the JDK version:

[root@hadoop00 local]# java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)


6. Install Hadoop

6.1 Create the following directories on each of the three VMs:

(They would also be generated automatically if you skip this step; a sketch for the other two nodes follows the commands below.)

[root@hadoop00 local]# mkdir /home/hadoop00/hadoop
[root@hadoop00 local]# mkdir /home/hadoop00/hadoop/tmp
[root@hadoop00 local]# mkdir /home/hadoop00/dfs
[root@hadoop00 local]# mkdir /home/hadoop00/dfs/name
[root@hadoop00 local]# mkdir /home/hadoop00/dfs/data
[root@hadoop00 local]#
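Since section 4 already set up passwordless SSH between the nodes, one way to create the same directories on hadoop11 and hadoop22 is from hadoop00 (a sketch; mkdir -p creates the parent directories in one step):

# run on hadoop00; equivalent to the per-node mkdir commands above
ssh hadoop11 "mkdir -p /home/hadoop00/hadoop/tmp /home/hadoop00/dfs/name /home/hadoop00/dfs/data"
ssh hadoop22 "mkdir -p /home/hadoop00/hadoop/tmp /home/hadoop00/dfs/name /home/hadoop00/dfs/data"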

6.2 List the configuration files under /usr/local/hadoop/etc/hadoop:

[root@hadoop00 hadoop]# pwd
/usr/local/hadoop/etc/hadoop
[root@hadoop00 hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        ssl-client.xml.example
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-server.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            yarn-env.cmd
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.sh
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-site.xml
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml.template
hadoop-metrics.properties   kms-env.sh               slaves
[root@hadoop00 hadoop]#

6.3 Configure hadoop-env.sh:

[root@hadoop00 hadoop]# vim hadoop-env.sh


Contents:

export JAVA_HOME=/usr/local/jdk

6.4 Configure yarn-env.sh:

[root@hadoop00 hadoop]# vim yarn-env.sh


Contents:

# some Java parameters
export JAVA_HOME=/usr/local/jdk


 

6.5 Configure slaves:

[root@hadoop00 hadoop]# vim slaves


Contents:

hadoop11
hadoop22

6.6 Configure core-site.xml:

[root@hadoop00 hadoop]# vim core-site.xml


Contents:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop00:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop00/hadoop/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>


 

6.7 Configure hdfs-site.xml:

[root@hadoop00 hadoop]# vim hdfs-site.xml


Contents:

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop00:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop00/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop00/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>


 

6.8 Configure mapred-site.xml (the distribution ships only mapred-site.xml.template, as the listing in 6.2 shows; copy it to mapred-site.xml first, or let vim create the new file):

[root@hadoop00 hadoop]# vim mapred-site.xml


Contents:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop00:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop00:19888</value>
  </property>
</configuration>


 

6.9 Configure yarn-site.xml:

[root@hadoop00 hadoop]# vim yarn-site.xml


Contents:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop00:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop00:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop00:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop00:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop00:8088</value>
  </property>
</configuration>


6.10 Copy the JDK and Hadoop from hadoop00 to hadoop11 and hadoop22:

[root@hadoop00 local]# scp -r jdk hadoop11:/usr/local/
[root@hadoop00 local]# scp -r jdk hadoop22:/usr/local/
[root@hadoop00 local]# scp -r hadoop hadoop11:/usr/local/
[root@hadoop00 local]# scp -r hadoop hadoop22:/usr/local/

6.11 Copy the /etc/profile configuration file:

[root@hadoop00 ~]# scp /etc/profile hadoop11:/etc/profile
profile                                       100% 1989     1.9KB/s   00:00
[root@hadoop00 ~]# scp /etc/profile hadoop22:/etc/profile
profile                                       100% 1989     1.9KB/s   00:00
[root@hadoop00 ~]#

6.12 After copying, run source /etc/profile on each node to apply the settings:

[root@hadoop11 ~]# java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
[root@hadoop11 ~]#


[root@hadoop22 ~]# java -version
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
[root@hadoop22 ~]#


7. Format the NameNode:

Run:

bin/hdfs namenode -format

 

[root@hadoop00 hadoop]# bin/hdfs namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/02/24 21:13:13 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop00/192.168.1.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (long list of bundled jars omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_25
************************************************************/
15/02/24 21:13:13 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/02/24 21:13:13 INFO namenode.NameNode: createNameNode [-format]
15/02/24 21:13:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-97137513-2225-4721-8947-72d3e10882fe
15/02/24 21:13:17 INFO namenode.FSNamesystem: No KeyProvider found.
15/02/24 21:13:17 INFO namenode.FSNamesystem: fsLock is fair:true
... (BlockManager, GSet sizing, and retry-cache messages omitted)
15/02/24 21:13:21 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1020208461-192.168.1.2-1424783601562
15/02/24 21:13:22 INFO common.Storage: Storage directory /home/hadoop00/dfs/name has been successfully formatted.
15/02/24 21:13:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/02/24 21:13:23 INFO util.ExitUtil: Exiting with status 0
15/02/24 21:13:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop00/192.168.1.2
************************************************************/
[root@hadoop00 hadoop]#


 

 

8. Start HDFS:

Run:

sbin/start-dfs.sh

[root@hadoop00 hadoop]# sbin/start-dfs.sh
15/02/24 21:15:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop00]
hadoop00: Warning: Permanently added the RSA host key for IP address '192.168.1.2' to the list of known hosts.
hadoop00: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop00.out
hadoop22: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop22.out
hadoop11: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop11.out
Starting secondary namenodes [hadoop00]
hadoop00: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-hadoop00.out
15/02/24 21:17:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop00 hadoop]#

Check the processes after starting HDFS:

[root@hadoop00 hadoop]# jps
3201 SecondaryNameNode
3314 Jps
3030 NameNode
[root@hadoop00 hadoop]#

[root@hadoop11 ~]# jps
2161 DataNode
2237 Jps
[root@hadoop11 ~]#

[root@hadoop22 ~]# jps
2945 Jps
2870 DataNode
[root@hadoop22 ~]#

9. Start YARN:

Run:

sbin/start-yarn.sh

[root@hadoop00 hadoop]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-hadoop00.out
hadoop11: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop11.out
hadoop22: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop22.out
[root@hadoop00 hadoop]#

Check the processes after starting both HDFS and YARN:

[root@hadoop00 hadoop]# jps
3201 SecondaryNameNode
3363 ResourceManager
3030 NameNode
3423 Jps
[root@hadoop00 hadoop]#

[root@hadoop11 ~]# jps
2161 DataNode
2275 NodeManager
2307 Jps
[root@hadoop11 ~]#

[root@hadoop22 ~]# jps
3043 Jps
2870 DataNode
2983 NodeManager
[root@hadoop22 ~]#


10. Check the cluster status:

Run:

bin/hdfs dfsadmin -report


[root@hadoop00 hadoop]# bin/hdfs dfsadmin -report
15/02/24 21:34:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 14862139392 (13.84 GB)
Present Capacity: 5560164352 (5.18 GB)
DFS Remaining: 5560115200 (5.18 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.1.3:50010 (hadoop11)
Hostname: hadoop11
Decommission Status : Normal
Configured Capacity: 7431069696 (6.92 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 4649984000 (4.33 GB)
DFS Remaining: 2781061120 (2.59 GB)
DFS Used%: 0.00%
DFS Remaining%: 37.42%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Feb 24 21:35:42 CST 2015

Name: 192.168.1.4:50010 (hadoop22)
Hostname: hadoop22
Decommission Status : Normal
Configured Capacity: 7431069696 (6.92 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 4651991040 (4.33 GB)
DFS Remaining: 2779054080 (2.59 GB)
DFS Used%: 0.00%
DFS Remaining%: 37.40%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Feb 24 21:35:41 CST 2015

[root@hadoop00 hadoop]#


11. Check the cluster status in a browser:

HDFS web UI:

http://192.168.1.2:50070/dfshealth.jsp


Scroll down on the same page for more detail.

Click "Live Nodes" to see the list of datanodes.

12. Check the YARN cluster status:

ResourceManager web UI:

http://192.168.1.2:8088/cluster

Running the wordcount example:

 

13. Create an empty file:

[root@hadoop00 hadoop00]# touch baozi.txt

Write a few words into it, separated by spaces, for example:
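For instance, to match the contents read back later in this article:

[root@hadoop00 hadoop00]# echo "hadoop hello hadoop hbase hadoop hbase" > baozi.txt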

Create an input directory on HDFS:

[root@hadoop00 bin]# ./hadoop fs -mkdir /gogo
15/02/25 13:08:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop00 bin]#

Upload the file to HDFS:

[root@hadoop00 bin]# ./hadoop fs -put /home/hadoop00/baozi.txt /gogo/
15/02/25 13:12:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop00 bin]#
 

View the contents:

[root@hadoop00 bin]# ./hadoop fs -text /gogo/baozi.txt
15/02/25 13:13:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop hello hadoop hbase hadoop hbase
[root@hadoop00 bin]#

Run wordcount with the jar command:

./hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /gogo/baozi.txt /tutu/


[root@hadoop00 bin]# ./hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /gogo/baozi.txt /tutu/
15/02/25 13:17:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/25 13:17:21 INFO client.RMProxy: Connecting to ResourceManager at hadoop00/192.168.1.2:8032
15/02/25 13:17:40 INFO input.FileInputFormat: Total input paths to process : 1
15/02/25 13:17:43 INFO mapreduce.JobSubmitter: number of splits:1
15/02/25 13:17:51 INFO hdfs.DFSClient: Could not complete /tmp/hadoop-yarn/staging/root/.staging/job_1424840814928_0001/job.xml retrying...
15/02/25 13:17:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1424840814928_0001
15/02/25 13:18:40 INFO impl.YarnClientImpl: Submitted application application_1424840814928_0001
15/02/25 13:18:45 INFO mapreduce.Job: The url to track the job: http://hadoop00:8088/proxy/application_1424840814928_0001/
15/02/25 13:18:45 INFO mapreduce.Job: Running job: job_1424840814928_0001
15/02/25 13:21:46 INFO mapreduce.Job: Job job_1424840814928_0001 running in uber mode : false
15/02/25 13:21:48 INFO mapreduce.Job:  map 0% reduce 0%
15/02/25 13:26:26 INFO mapreduce.Job:  map 67% reduce 0%
15/02/25 13:26:27 INFO mapreduce.Job:  map 100% reduce 0%
15/02/25 13:29:26 INFO mapreduce.Job:  map 100% reduce 67%
15/02/25 13:29:38 INFO mapreduce.Job:  map 100% reduce 100%
15/02/25 13:31:08 INFO mapreduce.Job: Job job_1424840814928_0001 completed successfully
15/02/25 13:31:46 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=43
                FILE: Number of bytes written=211819
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=139
                HDFS: Number of bytes written=25
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=281371
                Total time spent by all reduces in occupied slots (ms)=210134
                Total time spent by all map tasks (ms)=281371
                Total time spent by all reduce tasks (ms)=210134
                Total vcore-seconds taken by all map tasks=281371
                Total vcore-seconds taken by all reduce tasks=210134
                Total megabyte-seconds taken by all map tasks=288123904
                Total megabyte-seconds taken by all reduce tasks=215177216
        Map-Reduce Framework
                Map input records=1
                Map output records=6
                Map output bytes=63
                Map output materialized bytes=43
                Input split bytes=100
                Combine input records=6
                Combine output records=3
                Reduce input groups=3
                Reduce shuffle bytes=43
                Reduce input records=3
                Reduce output records=3
                Spilled Records=6
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=21365
                CPU time spent (ms)=16660
                Physical memory (bytes) snapshot=108003328
                Virtual memory (bytes) snapshot=4113276928
                Total committed heap usage (bytes)=133984256
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=39
        File Output Format Counters
                Bytes Written=25


List the output directory:

[root@hadoop00 bin]# ./hadoop fs -ls /tutu/
15/02/25 13:38:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   1 root supergroup          0 2015-02-25 13:30 /tutu/_SUCCESS
-rw-r--r--   1 root supergroup         25 2015-02-25 13:29 /tutu/part-r-00000
[root@hadoop00 bin]#

 

View the final result:

[root@hadoop00 bin]# ./hadoop fs -text /tutu/part-r-00000
15/02/25 13:38:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hadoop  3
hbase   2
hello   1
[root@hadoop00 bin]#


Stop the cluster:

[root@hadoop00 sbin]# ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
15/02/25 14:44:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [hadoop00]
hadoop00: stopping namenode
hadoop22: stopping datanode
hadoop11: stopping datanode
Stopping secondary namenodes [hadoop00]
hadoop00: stopping secondarynamenode
15/02/25 14:45:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
hadoop11: stopping nodemanager
hadoop22: stopping nodemanager
no proxyserver to stop
[root@hadoop00 sbin]#


 

 

 
