7. Testing whether the Hadoop installation succeeded, and running a MapReduce example


Reference: hadoop 2.6.5 cluster installation and MapReduce test run
http://blog.csdn.net/fanfanrenrenmi/article/details/54232184


[Preparation] Before every test run, the files left over from the previous run must be deleted. The exact commands are below:

################################
# On the master machine:
su hadoop    # switch to the hadoop user
################################
rm -r /home/hadoop/hadoop/*    # delete
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # create
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # fix permissions
################################
ssh slave1
rm -r /home/hadoop/hadoop/*    # delete
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # create
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # fix permissions
################################
ssh slave2
rm -r /home/hadoop/hadoop/*    # delete
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # create
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # fix permissions
ssh master
################################
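If you would rather not hop between machines by hand, here is a minimal sketch of the same cleanup driven entirely from the master; it assumes the passwordless ssh between nodes that was set up during installation, and the same /home/hadoop/hadoop layout on every node:

    # run as the hadoop user on the master; cleans all three nodes in one pass
    for host in master slave1 slave2; do
        ssh "$host" '
            rm -rf /home/hadoop/hadoop/*
            mkdir -p /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp
            chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp
        '
    done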

=============================================

Start Testing

=============================================

(I)

1) Format HDFS (on the master machine)

    hdfs namenode -format

The end of the output looks like this:

17/08/12 22:13:49 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.222.134
************************************************************/
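Note that when the name directory already holds data, the format step normally stops and asks for confirmation. For scripted runs, Hadoop 2.x documents a -force option (and a -nonInteractive option that aborts instead of prompting); a sketch, assuming the documented behavior:

    # format without the yes/no prompt; this wipes any existing HDFS metadata
    hdfs namenode -format -force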

2) Start HDFS (on the master machine)

    start-dfs.sh

Output:

hadoop@master:~$ start-dfs.sh 
Starting namenodes on [master]
master: starting namenode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /data/hadoop-2.6.5/logs/hadoop-hadoop-secondarynamenode-master.out

3) Run jps on the master machine

hadoop@master:~$ jps   # 3 processes
10260 NameNode
10581 Jps
10469 SecondaryNameNode

4) Run jps on slave1 and slave2

hadoop@slave1:~/hadoop$ jps   # 2 processes
6688 Jps
6603 DataNode
==================================
hadoop@slave2:~$ jps   # 2 processes
6600 DataNode
6682 Jps
Explanation: the jps command lists the Java daemons currently running on a node. The output above shows that the NameNode and SecondaryNameNode started successfully on the master node, and that a DataNode started on each slave node; in other words, HDFS is up.
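The same check can be scripted from the master instead of logging in to every node. A sketch, assuming passwordless ssh; `hdfs dfsadmin -report` additionally asks the NameNode itself how many DataNodes have registered:

    # list the Java daemons on every node from one terminal
    for host in master slave1 slave2; do
        echo "== $host =="
        ssh "$host" jps
    done

    # ask the NameNode how many DataNodes registered (expect 2 here;
    # the exact wording of the summary line varies by Hadoop version)
    hdfs dfsadmin -report | grep -i 'datanodes'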

===========

(II)

1) On the master machine

start-yarn.sh   # start YARN

Output:

hadoop@master:~$ start-yarn.sh   # start YARN
starting yarn daemons
starting resourcemanager, logging to /data/hadoop-2.6.5/logs/yarn-hadoop-resourcemanager-master.out
slave2: nodemanager running as process 6856. Stop it first.
slave1: starting nodemanager, logging to /data/hadoop-2.6.5/logs/yarn-hadoop-nodemanager-slave1.out

2) Run jps on the master

hadoop@master:~$ jps   # 4 processes
10260 NameNode
10469 SecondaryNameNode
10649 ResourceManager
10921 Jps

3) Run jps on slave1 and slave2

hadoop@slave1:~/hadoop$ jps   # 3 processes
6771 NodeManager
6887 Jps
6603 DataNode
=========================================
hadoop@slave2:~$ jps   # 3 processes
7057 Jps
6600 DataNode
6856 NodeManager
    The output above shows that the ResourceManager and the NodeManagers started successfully; in other words, YARN is up.
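YARN can also be checked from the command line: `yarn node -list` asks the ResourceManager which NodeManagers have registered, so both slaves are expected here in the RUNNING state.

    # ask the ResourceManager which NodeManagers have registered
    yarn node -list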

(III) Access the Web UI

    Open Firefox on any of the master, slave1, or slave2 machines and browse to http://master:8088/. If you see a page like the screenshot below, the Hadoop cluster has been set up successfully; a command-line check follows the screenshot.

[Screenshot: YARN ResourceManager Web UI at http://master:8088/]
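If no browser is handy, the same check can be made from the shell. A sketch, assuming the default Hadoop 2.6 web ports (8088 for the ResourceManager, 50070 for the NameNode):

    # ResourceManager REST API: cluster info as JSON
    curl http://master:8088/ws/v1/cluster/info

    # NameNode web UI: HDFS overview page
    curl -I http://master:50070/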

(IV) After testing, shut everything down with the following command:

    stop-all.sh

Output:

hadoop@master:~$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
slave1: stopping datanode
slave2: stopping datanode
Stopping secondary namenodes [master]
master: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
slave1: stopping nodemanager
slave2: stopping nodemanager
no proxyserver to stop

Then run jps again on master, slave1, and slave2 to confirm that all daemons have shut down.
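A scripted version of that final check (a sketch, assuming passwordless ssh; once the cluster is down, jps should report nothing but itself on every node):

    # verify that no Hadoop daemons are left running on any node
    for host in master slave1 slave2; do
        echo "== $host =="
        ssh "$host" jps | grep -v Jps || echo "all daemons stopped"
    done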

(V) Clean up the generated files

[Remember to run the commands below to clear out the files generated by this run, so they do not affect the next test]

################################
# On the master machine:
su hadoop    # switch to the hadoop user
################################
rm -r /home/hadoop/hadoop/*    # delete
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # create
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # fix permissions
################################
ssh slave1
rm -r /home/hadoop/hadoop/*    # delete
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # create
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # fix permissions
################################
ssh slave2
rm -r /home/hadoop/hadoop/*    # delete
mkdir /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # create
chmod -R 777 /home/hadoop/hadoop/datanode /home/hadoop/hadoop/namenode /home/hadoop/hadoop/tmp    # fix permissions
ssh master
################################

=============================================

Running MapReduce

=============================================

Run hadoop fs with no arguments to print the full set of HDFS file system commands.
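For the options of any individual command, `hadoop fs -help` prints the usage text; for example:

    hadoop fs -help        # usage for every hadoop fs sub-command
    hadoop fs -help ls     # usage for a single sub-command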
1. Start the Hadoop cluster
    start-all.sh

2. Create an HDFS directory
    hadoop fs -mkdir /input

3. Upload a file
    hadoop fs -put /data/hadoop-2.6.5/README.txt /input/

4. Rename the file
    hadoop fs -mv /input/README.txt /input/readme.txt

5. List the file
    hadoop fs -ls /input

Output:

hadoop@master:~$ hadoop fs -ls /input 
Found 1 items
-rw-r--r--   3 hadoop supergroup       1366 2017-08-13 19:58 /input/readme.txt

[Note] The output directory is /output. There is no need to create it; if it already exists, it must be deleted first.

6. Run the wordcount example that ships with Hadoop
    hadoop jar /data/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /input /output

Output:

hadoop@master:~$ hadoop jar /data/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /input /output
17/08/13 20:11:18 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.222.139:8032
17/08/13 20:11:21 INFO input.FileInputFormat: Total input paths to process : 1
17/08/13 20:11:21 INFO mapreduce.JobSubmitter: number of splits:1
17/08/13 20:11:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1502625091562_0001
17/08/13 20:11:23 INFO impl.YarnClientImpl: Submitted application application_1502625091562_0001
17/08/13 20:11:23 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1502625091562_0001/
17/08/13 20:11:23 INFO mapreduce.Job: Running job: job_1502625091562_0001
17/08/13 20:11:45 INFO mapreduce.Job: Job job_1502625091562_0001 running in uber mode : false
17/08/13 20:11:45 INFO mapreduce.Job:  map 0% reduce 0%
17/08/13 20:11:59 INFO mapreduce.Job:  map 100% reduce 0%
17/08/13 20:12:29 INFO mapreduce.Job:  map 100% reduce 100%
17/08/13 20:12:30 INFO mapreduce.Job: Job job_1502625091562_0001 completed successfully
17/08/13 20:12:30 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=1836
        FILE: Number of bytes written=218883
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1466
        HDFS: Number of bytes written=1306
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=11022
        Total time spent by all reduces in occupied slots (ms)=26723
        Total time spent by all map tasks (ms)=11022
        Total time spent by all reduce tasks (ms)=26723
        Total vcore-milliseconds taken by all map tasks=11022
        Total vcore-milliseconds taken by all reduce tasks=26723
        Total megabyte-milliseconds taken by all map tasks=11286528
        Total megabyte-milliseconds taken by all reduce tasks=27364352
    Map-Reduce Framework
        Map input records=31
        Map output records=179
        Map output bytes=2055
        Map output materialized bytes=1836
        Input split bytes=100
        Combine input records=179
        Combine output records=131
        Reduce input groups=131
        Reduce shuffle bytes=1836
        Reduce input records=131
        Reduce output records=131
        Spilled Records=262
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=245
        CPU time spent (ms)=2700
        Physical memory (bytes) snapshot=291491840
        Virtual memory (bytes) snapshot=3782098944
        Total committed heap usage (bytes)=138350592
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=1366
    File Output Format Counters 
        Bytes Written=1306

7. Check the output files
    hadoop fs -ls /output

Output:

hadoop@master:~$ hadoop fs -ls /output
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2017-08-13 20:12 /output/_SUCCESS
-rw-r--r--   3 hadoop supergroup       1306 2017-08-13 20:12 /output/part-r-00000

8. View the word-count result
    hadoop fs -cat /output/part-r-00000

Output:

hadoop@master:~$ hadoop fs -cat /output/part-r-00000
(BIS),  1
(ECCN)  1
(TSU)   1
(see    1
5D002.C.1,  1
740.13) 1
<http://www.wassenaar.org/> 1
Administration  1
Apache  1
BEFORE  1
BIS 1
Bureau  1
Commerce,   1
Commodity   1
Control 1
Core    1
Department  1
ENC 1
Exception   1
Export  2
For 1
Foundation  1
Government  1
Hadoop  1
Hadoop, 1
Industry    1
Jetty   1
License 1
Number  1
Regulations,    1
SSL 1
Section 1
Security    1
See 1
Software    2
Technology  1
The 4
This    1
U.S.    1
Unrestricted    1
about   1
algorithms. 1
and 6
and/or  1
another 1
any 1
as  1
asymmetric  1
at: 2
both    1
by  1
check   1
classified  1
code    1
code.   1
concerning  1
country 1
country's   1
country,    1
cryptographic   3
currently   1
details 1
distribution    2
eligible    1
encryption  3
exception   1
export  1
following   1
for 3
form    1
from    1
functions   1
has 1
have    1
http://hadoop.apache.org/core/  1
http://wiki.apache.org/hadoop/  1
if  1
import, 2
in  1
included    1
includes    2
information 2
information.    1
is  1
it  1
latest  1
laws,   1
libraries   1
makes   1
manner  1
may 1
more    2
mortbay.org.    1
object  1
of  5
on  2
or  2
our 2
performing  1
permitted.  1
please  2
policies    1
possession, 2
project 1
provides    1
re-export   2
regulations 1
reside  1
restrictions    1
security    1
see 1
software    2
software,   2
software.   2
software:   1
source  1
the 8
this    3
to  2
under   1
use,    2
uses    1
using   2
visit   1
website 1
which   2
wiki,   1
with    1
written 1
you 1
your    1

9. Export the result file from HDFS to the local file system
[Note] First create a directory /home/hadoop/example under /home/hadoop to receive the file:
    su hadoop 
    mkdir /home/hadoop/example
Then run:
    hadoop@master:~$ hadoop fs -get /output/part-r-00000 /home/hadoop/example
When the command finishes, the file part-r-00000 appears under /home/hadoop/example, as shown in the figure below. At this point the test has passed: Hadoop is installed and the example job runs successfully. A consolidated script for repeat runs follows the figure.

[Screenshot: part-r-00000 generated under /home/hadoop/example]
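For repeat runs, the nine steps above can be collected into one script. This is only a sketch, assuming the same paths as in this walkthrough; the leading delete matters because the job refuses to start when /output already exists:

    #!/bin/bash
    # end-to-end wordcount smoke test for the cluster
    start-all.sh

    hadoop fs -rm -r /input /output    # drop leftovers from a previous run, if any
    hadoop fs -mkdir /input
    hadoop fs -put /data/hadoop-2.6.5/README.txt /input/readme.txt

    hadoop jar /data/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar \
        wordcount /input /output

    hadoop fs -cat /output/part-r-00000    # show the word counts
    mkdir -p /home/hadoop/example
    hadoop fs -get /output/part-r-00000 /home/hadoop/example    # copy the result to local disk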
