Hadoop Basics Tutorial - Chapter 5 YARN: Resource Scheduling Platform (5.4 Running the YARN Cluster) (Draft)
Source: Internet · Editor: 程序博客网 · Time: 2024/06/05 23:54
Chapter 5 YARN: Resource Scheduling Platform
5.4 Running the YARN Cluster
HDFS is already running. Running jps on each node shows the HDFS daemons:

[root@node1 ~]# jps
2247 NameNode
2584 Jps
2348 DataNode

[root@node2 ~]# jps
2279 Jps
2137 DataNode
2201 SecondaryNameNode

[root@node3 ~]# jps
5179 DataNode
7295 Jps
5.4.1 Distributing the Configuration Files
[root@node1 hadoop]# scp yarn-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
yarn-site.xml                                 100%  938     0.9KB/s   00:00
[root@node1 hadoop]# scp mapred-site.xml node2:/opt/hadoop-2.7.3/etc/hadoop/
mapred-site.xml                               100%  856     0.8KB/s   00:00
[root@node1 hadoop]# scp yarn-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
yarn-site.xml                                 100%  938     0.9KB/s   00:00
[root@node1 hadoop]# scp mapred-site.xml node3:/opt/hadoop-2.7.3/etc/hadoop/
mapred-site.xml                               100%  856     0.8KB/s   00:00
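With more worker nodes, the per-file scp commands above become tedious; they can be wrapped in a loop. The sketch below is a dry run that only echoes the commands it would execute (the node names and install path are taken from this chapter's setup; remove the leading echo to actually copy, assuming passwordless SSH is configured as in earlier sections):

```shell
# Dry-run sketch: print the scp commands that would distribute the two
# changed config files to each worker node. Remove "echo" to execute.
HADOOP_CONF=/opt/hadoop-2.7.3/etc/hadoop
for node in node2 node3; do
  for f in yarn-site.xml mapred-site.xml; do
    echo scp "$HADOOP_CONF/$f" "$node:$HADOOP_CONF/"
  done
done
```

Tools like rsync or pdsh do the same job at larger scale, but a plain loop is enough for a three-node cluster.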
5.4.2 Starting YARN
[root@node1 ~]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-resourcemanager-node1.out
node3: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node3.out
node2: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node2.out
node1: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-node1.out
[root@node1 ~]# jps
2753 NodeManager
3041 Jps
2247 NameNode
2649 ResourceManager
2348 DataNode

[root@node2 ~]# jps
2341 NodeManager
2137 DataNode
2201 SecondaryNameNode
2443 Jps

[root@node3 ~]# jps
7350 NodeManager
5179 DataNode
7451 Jps
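A quick way to confirm the startup succeeded is to grep the jps listings for the daemons each node should host. A minimal sketch, using node1's listing from above as captured text (on a live cluster you would pipe jps directly, possibly over ssh for the other nodes):

```shell
# Check that the expected YARN daemons appear in a jps listing.
# jps_out is node1's output copied from above; replace with $(jps).
jps_out='2753 NodeManager
3041 Jps
2247 NameNode
2649 ResourceManager
2348 DataNode'
for daemon in ResourceManager NodeManager; do
  if echo "$jps_out" | grep -qw "$daemon"; then
    echo "$daemon OK"
  else
    echo "$daemon MISSING"
  fi
done
```

On node2 and node3 only NodeManager is expected, since the ResourceManager runs on node1 in this chapter's layout.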
5.4.3 The Web UI
The ResourceManager web UI listens on port 8088 by default (the yarn.resourcemanager.webapp.address property):
http://192.168.80.131:8088
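Besides the HTML pages, the ResourceManager serves a REST API on the same port, which is handy for scripted monitoring. The sketch below is a dry run that only prints the curl commands (the address is node1's from this chapter; the ws/v1/cluster paths are the standard YARN ResourceManager REST endpoints):

```shell
# Dry-run sketch: print curl commands for two ResourceManager REST
# endpoints that mirror the 8088 web pages. Remove "echo" to query them.
RM=http://192.168.80.131:8088
for path in ws/v1/cluster/info ws/v1/cluster/apps; do
  echo curl -s "$RM/$path"
done
```

cluster/info returns overall cluster state as JSON, and cluster/apps lists submitted applications, including the example jobs run below.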
5.4.4 Hadoop's Bundled Example Programs
[root@node1 ~]# cd /opt/hadoop-2.7.3/share/hadoop/mapreduce/
[root@node1 mapreduce]# ll
total 4972
-rw-r--r-- 1 root root  537521 Aug 17  2016 hadoop-mapreduce-client-app-2.7.3.jar
-rw-r--r-- 1 root root  773501 Aug 17  2016 hadoop-mapreduce-client-common-2.7.3.jar
-rw-r--r-- 1 root root 1554595 Aug 17  2016 hadoop-mapreduce-client-core-2.7.3.jar
-rw-r--r-- 1 root root  189714 Aug 17  2016 hadoop-mapreduce-client-hs-2.7.3.jar
-rw-r--r-- 1 root root   27598 Aug 17  2016 hadoop-mapreduce-client-hs-plugins-2.7.3.jar
-rw-r--r-- 1 root root   61745 Aug 17  2016 hadoop-mapreduce-client-jobclient-2.7.3.jar
-rw-r--r-- 1 root root 1551594 Aug 17  2016 hadoop-mapreduce-client-jobclient-2.7.3-tests.jar
-rw-r--r-- 1 root root   71310 Aug 17  2016 hadoop-mapreduce-client-shuffle-2.7.3.jar
-rw-r--r-- 1 root root  295812 Aug 17  2016 hadoop-mapreduce-examples-2.7.3.jar
drwxr-xr-x 2 root root    4096 Aug 17  2016 lib
drwxr-xr-x 2 root root      30 Aug 17  2016 lib-examples
drwxr-xr-x 2 root root    4096 Aug 17  2016 sources
Estimating the value of Pi
[root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar pi 3 3
Number of Maps  = 3
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
17/05/23 10:57:55 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.80.131:8032
17/05/23 10:57:56 INFO input.FileInputFormat: Total input paths to process : 3
17/05/23 10:57:56 INFO mapreduce.JobSubmitter: number of splits:3
17/05/23 10:57:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1495550966527_0001
17/05/23 10:57:58 INFO impl.YarnClientImpl: Submitted application application_1495550966527_0001
17/05/23 10:57:58 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1495550966527_0001/
17/05/23 10:57:58 INFO mapreduce.Job: Running job: job_1495550966527_0001
17/05/23 10:58:17 INFO mapreduce.Job: Job job_1495550966527_0001 running in uber mode : false
17/05/23 10:58:17 INFO mapreduce.Job:  map 0% reduce 0%
17/05/23 10:59:02 INFO mapreduce.Job:  map 100% reduce 0%
17/05/23 10:59:15 INFO mapreduce.Job:  map 100% reduce 100%
17/05/23 10:59:16 INFO mapreduce.Job: Job job_1495550966527_0001 completed successfully
17/05/23 10:59:16 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=72
                FILE: Number of bytes written=475761
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=777
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=15
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=3
                Launched reduce tasks=1
                Data-local map tasks=3
                Total time spent by all maps in occupied slots (ms)=127167
                Total time spent by all reduces in occupied slots (ms)=9302
                Total time spent by all map tasks (ms)=127167
                Total time spent by all reduce tasks (ms)=9302
                Total vcore-milliseconds taken by all map tasks=127167
                Total vcore-milliseconds taken by all reduce tasks=9302
                Total megabyte-milliseconds taken by all map tasks=130219008
                Total megabyte-milliseconds taken by all reduce tasks=9525248
        Map-Reduce Framework
                Map input records=3
                Map output records=6
                Map output bytes=54
                Map output materialized bytes=84
                Input split bytes=423
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=84
                Reduce input records=6
                Reduce output records=0
                Spilled Records=12
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=1847
                CPU time spent (ms)=12410
                Physical memory (bytes) snapshot=711430144
                Virtual memory (bytes) snapshot=8312004608
                Total committed heap usage (bytes)=436482048
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=354
        File Output Format Counters
                Bytes Written=97
Job Finished in 81.368 seconds
Estimated value of Pi is 3.55555555555555555556
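The odd-looking estimate is expected with so few samples. The pi example is a quasi-Monte Carlo program: each map scatters sample points over a unit square, counts how many fall inside the inscribed quarter circle, and the job reports 4 * inside / total. With 3 maps x 3 samples = 9 points, the printed 3.5555... is exactly 32/9, which implies 8 of the 9 points landed inside:

```shell
# Reproduce the job's estimate: pi ~ 4 * inside / total,
# with inside=8 and total=9 as implied by the output above.
awk 'BEGIN { printf "%.4f\n", 4 * 8 / 9 }'   # prints 3.5556, i.e. 32/9 rounded
```

Increasing the map and sample counts (e.g. pi 10 100) tightens the estimate, since the error shrinks as more points are sampled.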
Running wordcount
[root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /user/root/input /user/root/output
17/05/23 11:01:34 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.80.131:8032
17/05/23 11:01:36 INFO input.FileInputFormat: Total input paths to process : 2
17/05/23 11:01:36 INFO mapreduce.JobSubmitter: number of splits:2
17/05/23 11:01:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1495550966527_0002
17/05/23 11:01:37 INFO impl.YarnClientImpl: Submitted application application_1495550966527_0002
17/05/23 11:01:37 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1495550966527_0002/
17/05/23 11:01:37 INFO mapreduce.Job: Running job: job_1495550966527_0002
17/05/23 11:01:58 INFO mapreduce.Job: Job job_1495550966527_0002 running in uber mode : false
17/05/23 11:01:58 INFO mapreduce.Job:  map 0% reduce 0%
17/05/23 11:02:15 INFO mapreduce.Job:  map 100% reduce 0%
17/05/23 11:02:25 INFO mapreduce.Job:  map 100% reduce 100%
17/05/23 11:02:26 INFO mapreduce.Job: Job job_1495550966527_0002 completed successfully
17/05/23 11:02:26 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=89
                FILE: Number of bytes written=355953
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=301
                HDFS: Number of bytes written=46
                HDFS: Number of read operations=9
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=2
                Launched reduce tasks=1
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=29625
                Total time spent by all reduces in occupied slots (ms)=7154
                Total time spent by all map tasks (ms)=29625
                Total time spent by all reduce tasks (ms)=7154
                Total vcore-milliseconds taken by all map tasks=29625
                Total vcore-milliseconds taken by all reduce tasks=7154
                Total megabyte-milliseconds taken by all map tasks=30336000
                Total megabyte-milliseconds taken by all reduce tasks=7325696
        Map-Reduce Framework
                Map input records=6
                Map output records=14
                Map output bytes=140
                Map output materialized bytes=95
                Input split bytes=216
                Combine input records=14
                Combine output records=7
                Reduce input groups=6
                Reduce shuffle bytes=95
                Reduce input records=7
                Reduce output records=6
                Spilled Records=14
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=574
                CPU time spent (ms)=4590
                Physical memory (bytes) snapshot=514162688
                Virtual memory (bytes) snapshot=6236823552
                Total committed heap usage (bytes)=301146112
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=85
        File Output Format Counters
                Bytes Written=46
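The wordcount job writes its results to HDFS rather than the console; they can be read back with hadoop fs -cat /user/root/output/part-r-00000 (part-r-00000 is the conventional name of the first reducer's output file). Conceptually the job computes the same thing as this local pipeline; the two input lines here are made up for illustration and are not the actual contents of /user/root/input:

```shell
# Local analogue of what wordcount computes: split text into words,
# then count occurrences of each word. The sample input is hypothetical.
printf 'hello hadoop\nhello yarn\n' | tr -s ' ' '\n' | sort | uniq -c
```

In MapReduce terms, tr plays the mapper (emit one word per line), sort plays the shuffle (group identical keys), and uniq -c plays the reducer (sum the counts); the Combine counters above show the same summing already happening on the map side.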
While the wordcount job is running, you can watch its progress on the ResourceManager page at http://192.168.80.131:8088.