hadoop-2.2.0 Distributed Installation


Hadoop package: hadoop-2.2.0.tar.gz

Operating system: CentOS 6.4

JDK version:     jdk1.7.0_21

1. Configure the namenode and datanodes

  The key to a successful setup is making sure hostnames and IP addresses resolve correctly on every machine. Edit /etc/hosts on each machine: the namenode needs an entry (IP address and hostname) for every machine in the cluster, while a machine acting only as a datanode needs just its own entry and the namenode's.

(The command to change the hostname is: hostname <new-name>.)

1.1 Change the hostname on master

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master

1.2 Change the hostname on slaver01

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slaver01

1.3 Change the hostname on slaver02

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slaver02

Editing these files does not take effect immediately. For an immediate change, run hostname your-hostname, but that is only temporary: the old name returns after a reboot. The file edits above are permanent, and the machine comes back up with the new hostname after a restart.

After rebooting, check the hostname with: uname -n
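Once the machine has rebooted (or the temporary hostname command has been run), the effective name can be verified from the shell; a minimal sketch:

```shell
# The kernel hostname can be read two equivalent ways;
# after the change takes effect they should agree.
hostname
uname -n
[ "$(hostname)" = "$(uname -n)" ] && echo "hostname is consistent"
```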

1.4 Edit the hosts file

Suppose there are three machines, with hostnames master, slaver01, and slaver02, and IP addresses 192.168.159.129, 192.168.159.130, and 192.168.159.131 (matching the addresses in the transcripts below).

master acts as the namenode,

slaver01 and slaver02 act as datanodes. All three machines can use the same hosts file, as follows:

[root@slaver02 ~]# cat /etc/hosts
192.168.159.129   master
192.168.159.130   slaver01
192.168.159.131   slaver02

2. SSH configuration

  This step lets the machines run commands on one another without prompting for a password. On the namenode, execute:

cd ~/.ssh
ssh-keygen -t rsa

-------------------- Press Enter at every prompt; the defaults save the key pair to .ssh/id_rsa and .ssh/id_rsa.pub.

cat id_rsa.pub > authorized_keys

Generate a key pair (and an authorized_keys file) on each node in the same way.

Then merge the public keys from all nodes on the namenode:

cd ~/.ssh
scp root@slaver01:/root/.ssh/id_rsa.pub authorized_keys_from_slaver01
scp root@slaver02:/root/.ssh/id_rsa.pub authorized_keys_from_slaver02
cat authorized_keys_from_slaver01 authorized_keys_from_slaver02 >> authorized_keys

This yields the merged authorized_keys file.

Copy it to each datanode, and set its permissions:

chmod 644 authorized_keys
scp authorized_keys root@slaver01:/root/.ssh/
scp authorized_keys root@slaver02:/root/.ssh/

Note: if Hadoop is installed as a non-root user, use the stricter permissions below instead.

~/.ssh should have permissions 700
~/.ssh/authorized_keys should have permissions 600

Now, when the namenode machine opens an ssh connection to a datanode machine, a password is required only for the very first login; after that, none is needed.
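Before touching the real nodes, the merge-and-permission sequence can be rehearsed locally. This sketch uses a throwaway mktemp directory and fake public-key lines (the FAKEKEY strings are illustrative stand-ins, not real keys):

```shell
# Rehearse the authorized_keys merge in a throwaway directory.
demo=$(mktemp -d)
mkdir -p "$demo/.ssh"
chmod 700 "$demo/.ssh"

# Stand-ins for the id_rsa.pub files fetched from each node via scp.
echo "ssh-rsa FAKEKEY-master root@master"     > "$demo/.ssh/authorized_keys"
echo "ssh-rsa FAKEKEY-slaver01 root@slaver01" > "$demo/keys_from_slaver01"
echo "ssh-rsa FAKEKEY-slaver02 root@slaver02" > "$demo/keys_from_slaver02"

# Merge, mirroring the cat >> authorized_keys step above.
cat "$demo/keys_from_slaver01" "$demo/keys_from_slaver02" >> "$demo/.ssh/authorized_keys"
chmod 600 "$demo/.ssh/authorized_keys"

wc -l < "$demo/.ssh/authorized_keys"   # one key line per machine: 3
```

On the real cluster, chmod 644 works for root as shown earlier; for a non-root user, 600 (as here) is the safer choice, since sshd refuses group/world-writable key files.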

3. Configure hadoop on all machines

    Configure the namenode machine first. Extract the archive:

tar -xzvf hadoop-2.2.0.tar.gz    -------------- extract

     After extraction, edit the configuration files. Seven files are involved:

~/hadoop-2.2.0/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_21

~/hadoop-2.2.0/etc/hadoop/yarn-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_21

~/hadoop-2.2.0/etc/hadoop/slaves

[root@master hadoop]# cat slaves
slaver01
slaver02

~/hadoop-2.2.0/etc/hadoop/core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/dfs/tmp</value>
</property>

~/hadoop-2.2.0/etc/hadoop/hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

~/hadoop-2.2.0/etc/hadoop/mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
</property>
~/hadoop-2.2.0/etc/hadoop/yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
</property>

Edit the slaves file (etc/hadoop/slaves in 2.2.0), adding every slave/datanode hostname (or IP), one per line; here that is:

slaver01
slaver02

Copy the configured hadoop directory from the namenode machine to each datanode machine:

scp -r /root/hadoop-2.2.0 root@slaver01:/root
scp -r /root/hadoop-2.2.0 root@slaver02:/root

4. Format the distributed filesystem and start the daemons:

$ hadoop namenode -format

-------------------- Because the environment variables are configured, the full path to the hadoop command (/hadoop/bin/hadoop) is not needed.
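The environment variables referred to here would typically be set in ~/.bashrc along the following lines (a sketch; the install paths are the ones used in this article, so adjust them to your own layout):

```shell
# Example ~/.bashrc additions (paths are illustrative):
export JAVA_HOME=/usr/java/jdk1.7.0_21
export HADOOP_HOME=/root/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

After running source ~/.bashrc, the hadoop and hdfs commands resolve without a full path.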

     On success the output contains "dfs/name has been successfully formatted"; otherwise formatting failed.

     Start hadoop:

cd /root/hadoop-2.2.0
./sbin/start-dfs.sh
./sbin/start-yarn.sh

After a successful start, run jps on each machine. On the namenode machine you should see NameNode, SecondaryNameNode, and ResourceManager:

[root@master hadoop-2.2.0]# jps
3066 Jps
2672 SecondaryNameNode
2532 NameNode
2806 ResourceManager

On each datanode machine you should see DataNode and NodeManager (there is no tasktracker in 2.x). If they are missing, startup failed; check the configuration.

[root@master hadoop-2.2.0]# ssh slaver01
Last login: Fri Jan 31 00:43:37 2014 from master
[root@slaver01 ~]# jps
2543 Jps
2465 NodeManager
2399 DataNode

The datanode machines show DataNode and NodeManager, as expected.

Check the cluster status (note: the subcommand is dfsadmin, written as one word):

./bin/hdfs dfsadmin -report

Typing it as two words (dfs admin) is misparsed and only prints the NameNode usage message, as the transcript below shows:

14/02/02 16:35:26 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:  host = master/192.168.159.129

STARTUP_MSG:  args = [-format./bin/hdfs, dfsadmin, -report]

STARTUP_MSG:  version = 2.2.0

STARTUP_MSG:  classpath = /root/hadoop-2.2.0/etc/hadoop, plus the jars under share/hadoop/{common,hdfs,yarn,mapreduce} (full jar list omitted)

STARTUP_MSG:  build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z

STARTUP_MSG:  java = 1.7.0_21

************************************************************/

14/02/02 16:35:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]

Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force ] ]

 

14/02/02 16:35:26 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at master/192.168.159.129

************************************************************/

[root@master tmp]#


[root@master tmp]# ssh master

Last login: Sun Feb 2 16:10:04 2014 from slaver01

[root@master ~]# ./bin/hdfs dfsadmin -report

-bash: ./bin/hdfs: No such file or directory

[root@master ~]# cd ~/hadoop-2.2.0/


[root@master hadoop-2.2.0]# ./bin/hdfs dfsadmin -report

Configured Capacity: 12127322112 (11.29 GB)
Present Capacity: 1482874880 (1.38 GB)
DFS Remaining: 1482825728 (1.38 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Live datanodes:
Name: 192.168.159.131:50010 (slaver02)
Hostname: slaver02
Decommission Status : Normal
Configured Capacity: 6063661056 (5.65 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 5880156160 (5.48 GB)
DFS Remaining: 183480320 (174.98 MB)
DFS Used%: 0.00%
DFS Remaining%: 3.03%
Last contact: Sun Feb 02 16:36:12 CST 2014

Name: 192.168.159.130:50010 (slaver01)
Hostname: slaver01
Decommission Status : Normal
Configured Capacity: 6063661056 (5.65 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 4764291072 (4.44 GB)
DFS Remaining: 1299345408 (1.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 21.43%
Last contact: Sun Feb 02 16:36:12 CST 2014

[root@master hadoop-2.2.0]#

    Stop hadoop:

./sbin/stop-dfs.sh
./sbin/stop-yarn.sh

5. Testing Hadoop

mkdir /usr/test
cd /usr/test
wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt
cd /root/hadoop-2.2.0
./bin/hdfs dfs -mkdir /tmp
./bin/hdfs dfs -put /usr/test/pg20417.txt /tmp
./bin/hdfs dfs -ls /tmp

Found 2 items
drwx------   - root supergroup          0 2014-02-02 16:42 /tmp/hadoop-yarn
-rw-r--r--   2 root supergroup     674570 2014-02-02 18:12 /tmp/pg20417.txt

[root@master hadoop-2.2.0]# 

[root@master hadoop-2.2.0]# bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /tmp/pg20417.txt /tmp-output101

14/02/02 18:34:18 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.159.129:8032
14/02/02 18:34:19 INFO input.FileInputFormat: Total input paths to process : 1
14/02/02 18:34:19 INFO mapreduce.JobSubmitter: number of splits:1
(a run of Configuration.deprecation notices follows: user.name, mapred.jar, mapred.output.value.class, mapreduce.combine.class, mapreduce.map.class, mapred.job.name, mapreduce.reduce.class, mapred.input.dir, mapred.output.dir, mapred.map.tasks, mapred.output.key.class, and mapred.working.dir are deprecated in favor of their mapreduce.job.* equivalents)
14/02/02 18:34:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1391327122483_0005
14/02/02 18:34:20 INFO impl.YarnClientImpl: Submitted application application_1391327122483_0005 to ResourceManager at master/192.168.159.129:8032
14/02/02 18:34:20 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1391327122483_0005/
14/02/02 18:34:20 INFO mapreduce.Job: Running job: job_1391327122483_0005
14/02/02 18:34:36 INFO mapreduce.Job: Job job_1391327122483_0005 running in uber mode : false
14/02/02 18:34:36 INFO mapreduce.Job:  map 0% reduce 0%
14/02/02 18:34:44 INFO mapreduce.Job:  map 100% reduce 0%
14/02/02 18:34:53 INFO mapreduce.Job:  map 100% reduce 100%
14/02/02 18:34:53 INFO mapreduce.Job: Job job_1391327122483_0005 completed successfully
14/02/02 18:34:54 INFO mapreduce.Job: Counters: 43
       File System Counters
                FILE: Number of bytes read=267026
                FILE: Number of bytes written=691953
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=674669
                HDFS: Number of bytes written=196192
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
       Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=6947
                Total time spent by all reduces in occupied slots (ms)=5625
       Map-Reduce Framework
                Map input records=12760
                Map output records=109844
                Map output bytes=1086547
                Map output materialized bytes=267026
                Input split bytes=99
                Combine input records=109844
                Combine output records=18040
                Reduce input groups=18040
                Reduce shuffle bytes=267026
                Reduce input records=18040
                Reduce output records=18040
                Spilled Records=36080
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=415
                CPU time spent (ms)=4090
                Physical memory (bytes) snapshot=216936448
                Virtual memory (bytes) snapshot=779874304
                Total committed heap usage (bytes)=129454080
       Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
       File Input Format Counters
                Bytes Read=674570
       File Output Format Counters
                Bytes Written=196192
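What the wordcount example computes can be mimicked locally with plain shell tools; a sketch on a tiny inline sample (not the real pg20417.txt):

```shell
# map: split into words; shuffle: sort; reduce: count duplicates.
printf 'the quick fox the fox\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | awk '{print $2 "\t" $1}'
# prints three tab-separated pairs: fox 2, quick 1, the 2
```

The real job's result lands under /tmp-output101 in HDFS, typically in a part-r-00000 file, and can be fetched with ./bin/hdfs dfs -cat /tmp-output101/part-r-00000.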

