Hadoop Source Code Analysis 32: TaskTracker Flow
Job submission:
hadoop
The job has only one Map task.
server3 RPC request: getProtocolVersion(TaskUmbilicalProtocol, 19) from 127.0.0.1:42644
Returns: 19
server3 RPC request: getTask(JvmContext={pid=22310}) from 127.0.0.1:42645
Returns: JvmTask={shouldDie=false,
server3 RPC request: statusUpdate(attempt_201404230054_0005_m_000002_0(Setup[0]),
Returns: true
server3 RPC request: done(attempt_201404230054_0005_m_000002_0(Setup[0]), JvmContext={pid=22310}) from 127.0.0.1:49814
Returns: null
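This first exchange is the full lifecycle of one child JVM talking to its TaskTracker over TaskUmbilicalProtocol: a version handshake (getProtocolVersion), a task request (getTask), progress reports (statusUpdate), and a final done. A minimal sketch of that loop, assuming a simplified in-memory tracker (the `Umbilical` interface here only mirrors the method names visible in the trace, and `Tracker` is a hypothetical stand-in, not the real Hadoop implementation):

```java
// Sketch of the child-side umbilical loop inferred from the RPC trace above.
// The Umbilical interface only mirrors the method shapes visible in the log;
// the Tracker class is a hypothetical in-memory stand-in for the TaskTracker.
interface Umbilical {
    long getProtocolVersion(String protocol, long clientVersion);
    String getTask(int jvmPid);                 // task attempt id, or null if none
    boolean statusUpdate(String taskId, String state);
    void done(String taskId);
}

public class UmbilicalLoopSketch {
    static class Tracker implements Umbilical {
        private String pending = "attempt_201404230054_0005_m_000002_0";
        public long getProtocolVersion(String p, long v) { return 19; }   // as in the trace
        public String getTask(int pid) { String t = pending; pending = null; return t; }
        public boolean statusUpdate(String id, String s) { return true; } // tracker ack
        public void done(String id) { /* tracker marks the attempt complete */ }
    }

    // Runs one child lifecycle: handshake, fetch task, report, finish.
    static String runChild(Umbilical u, int pid) {
        if (u.getProtocolVersion("TaskUmbilicalProtocol", 19) != 19)
            throw new IllegalStateException("protocol version mismatch");
        String task = u.getTask(pid);
        if (task == null) return null;          // nothing to run; child JVM would exit
        u.statusUpdate(task, "RUNNING");        // periodic in reality, once here
        u.done(task);
        return task;
    }
}
```

The same four-call shape repeats below for the Map, Reduce, and Cleanup attempts; only the attempt id and pid change.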
server3 RPC request: getProtocolVersion(TaskUmbilicalProtocol, 19) from 127.0.0.1:42656
Returns: 19
server3 RPC request: getTask(JvmContext={pid=22364}) from 127.0.0.1:42657
Returns: JvmTask={shouldDie=false,
server3 RPC request: statusUpdate(attempt_201404230054_0005_m_000000_0(Maps[0]), MapTaskStatus={RUNNING}, JvmContext={pid=22364}) from 127.0.0.1:42663
Returns: true
server3 RPC request: done(attempt_201404230054_0005_m_000000_0(Maps[0]), JvmContext={pid=22364}) from 127.0.0.1:42667
Returns: null
server3 RPC request: getProtocolVersion(TaskUmbilicalProtocol, 19) from 127.0.0.1:42669
Returns: 19
server3 RPC request: getTask(JvmContext={pid=22451}) from 127.0.0.1:42670
Returns: JvmTask={shouldDie=false,
server2 RPC request: getProtocolVersion(TaskUmbilicalProtocol, 19) from 127.0.0.1:49869
Returns: 19
server2 RPC request: getTask(JvmContext={pid=23395}) from 127.0.0.1:49873
Returns: JvmTask={shouldDie=false,
server3 RPC request: getMapCompletionEvents(job_201404230054_0005, 0, 10000, attempt_201404230054_0005_r_000000_0(Reduces[0]), JvmContext={pid=22451}) from 127.0.0.1:42674
Returns: [Task Id: attempt_201404230054_0005_m_000000_0(Maps[0]), Status: SUCCEEDED]
server2 RPC request: getMapCompletionEvents(job_201404230054_0005, 0, 10000, attempt_201404230054_0005_r_000001_0(Reduces[1]), JvmContext={pid=23395}) from 127.0.0.1:49877
Returns: [Task Id: attempt_201404230054_0005_m_000000_0(Maps[0]), Status: SUCCEEDED]
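Both reducers poll for map-completion events with a sliding cursor: the calls above use fromEventId=0 and receive one SUCCEEDED event, and the follow-up poll later in the trace uses fromEventId=1 and gets back []. A sketch of that cursor arithmetic, with a hypothetical in-memory event list standing in for the TaskTracker's cache:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CompletionEventPoll {
    // Hypothetical stand-in for the TaskTracker's cached completion events.
    static final List<String> EVENTS = Arrays.asList(
        "attempt_201404230054_0005_m_000000_0:SUCCEEDED");

    // Mirrors the shape of getMapCompletionEvents(jobId, fromEventId, maxEvents, ...):
    // return at most maxEvents events starting at index fromEventId.
    static List<String> getEvents(int fromEventId, int maxEvents) {
        if (fromEventId >= EVENTS.size()) return Collections.emptyList();
        int to = Math.min(EVENTS.size(), fromEventId + maxEvents);
        return EVENTS.subList(fromEventId, to);
    }

    // One poll step: the reducer advances its cursor by the events received.
    static int poll(int fromEventId) {
        return fromEventId + getEvents(fromEventId, 10000).size();
    }
}
```

This is why the second poll in the trace starts at 1: one event was consumed, so the cursor moved past it.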
server2 RPC request: statusUpdate(attempt_201404230054_0005_r_000001_0(Reduces[1]),
Returns:
server3 HTTP request: GET /mapOutput?job=job_201404230054_0005&map=attempt_201404230054_0005_m_000000_0&reduce=1 HTTP/1.1
UrlHash: pNffeghQzeSCbw2A5M5vWUGr
User-Agent: Java/1.7.0_07
Host: server3:50060
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
Returns: /tmp/hadoop-admin/mapred/local/taskTracker/admin/jobcache/job_201404230054_0005/attempt_201404230054_0005_m_000000_0/output/file.out
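The shuffle fetch above is a plain HTTP GET against the serving TaskTracker's HTTP port (50060 here), with the job id, map attempt id, and reduce partition carried as query parameters; the tracker answers by streaming the local file.out segment for that partition. A sketch of how such a URL is assembled (the helper name is hypothetical; the URL shape is copied from the request line in the log):

```java
public class MapOutputUrl {
    // Builds a shuffle URL in the shape seen in the HTTP request above:
    // http://<tracker>:<httpPort>/mapOutput?job=...&map=...&reduce=N
    static String build(String host, int port, String jobId,
                        String mapAttempt, int reducePartition) {
        return "http://" + host + ":" + port + "/mapOutput"
             + "?job=" + jobId
             + "&map=" + mapAttempt
             + "&reduce=" + reducePartition;
    }
}
```

Note that reduce=1 here means the fetcher is Reduces[1] on server2 pulling from the map output that lives on server3.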
server3 RPC request: getMapCompletionEvents(job_201404230054_0005, 1, 10000, attempt_201404230054_0005_r_000000_0(Reduces[0]), JvmContext@318e136f) from 127.0.0.1:42674
Returns: []
server3 RPC request: statusUpdate(attempt_201404230054_0005_r_000000_0(Reduces[0]),
Returns:
server2 RPC request: statusUpdate(attempt_201404230054_0005_r_000001_0(Reduces[1]), ReduceTaskStatus={RUNNING}, JvmContext@25be342d) from 127.0.0.1:49877
Returns:
server3 RPC request: commitPending(attempt_201404230054_0005_r_000000_0(Reduces[0]),
Returns: null
server2 RPC request: commitPending(attempt_201404230054_0005_r_000001_0(Reduces[1]), ReduceTaskStatus={COMMIT_PENDING}, JvmContext@5f84f3d2) from 127.0.0.1:49877
Returns:
server3 RPC request: statusUpdate(attempt_201404230054_0005_r_000000_0(Reduces[0]),
Returns:
server2 RPC request: statusUpdate(attempt_201404230054_0005_r_000001_0(Reduces[1]),
Returns:
server3 RPC request: canCommit(attempt_201404230054_0005_r_000000_0(Reduces[0]), JvmContext@7e9e20a6) from 127.0.0.1:42674
Returns: true
server2 RPC request: canCommit(attempt_201404230054_0005_r_000001_0(Reduces[1]),
Returns: true
server3 RPC request: statusUpdate(attempt_201404230054_0005_r_000000_0(Reduces[0]),
Returns:
server2 RPC request: done(attempt_201404230054_0005_r_000001_0(Reduces[1]),
Returns: null
server3 RPC request: done(attempt_201404230054_0005_r_000000_0(Reduces[0]), JvmContext@3fda03d2) from 127.0.0.1:42717
Returns: null
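Unlike the map attempts, the reducers do not jump straight from statusUpdate to done: they first report COMMIT_PENDING via commitPending(), then poll canCommit() and publish their output only after a true answer. This handshake ensures at most one attempt of each task commits output, which matters when speculative attempts race. A sketch of that rule, assuming a simplified first-asker-wins tracker (the real decision is made on the JobTracker's behalf):

```java
import java.util.HashMap;
import java.util.Map;

public class CommitHandshake {
    // taskId -> the single attempt granted permission to commit.
    private final Map<String, String> approved = new HashMap<>();

    // canCommit: at most one attempt per task may commit; here the first
    // asker wins, a hypothetical simplification of the JobTracker's choice.
    synchronized boolean canCommit(String taskId, String attemptId) {
        approved.putIfAbsent(taskId, attemptId);
        return attemptId.equals(approved.get(taskId));
    }
}
```

In the trace both reducers get true because they belong to different tasks (Reduces[0] and Reduces[1]); a losing speculative attempt of the same task would get false and discard its output instead of calling done().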
server3 RPC request: getProtocolVersion(TaskUmbilicalProtocol, 19) from 127.0.0.1:42719
Returns: 19
server3 RPC request: getTask(JvmContext@3c122241) from 127.0.0.1:42720
Returns: JvmTask={shouldDie=false,
server3 RPC request: statusUpdate(attempt_201404230054_0005_m_000001_0(Cleanup[0]),
Returns:
server3 RPC request: done(attempt_201404230054_0005_m_000001_0(Cleanup[0]), JvmContext@339435a5) from 127.0.0.1:42728
Returns: