NameNode abnormal exit: analysis and solution


----- journalnode error log
2017-09-04 02:39:21,667 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /data/hadoop/journalnode/nn/XXXXXXX/current/edits_inprogress_0000000000294625237 -> /data/hadoop/journalnode/nn/XXXXXXX/current/edits_0000000000294625237-0000000000294625632
2017-09-04 02:40:10,136 WARN org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8485, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.startLogSegment from 10.117.68.10:41380 Call#1549233 Retry#0: output error
2017-09-04 02:40:10,177 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8485 caught an exception
java.nio.channels.ClosedChannelException
        at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
        at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2534)
        at org.apache.hadoop.ipc.Server.access$1900(Server.java:130)
        at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:965)
        at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1030)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2068)
        
        
        

--------- active namenode log
2017-09-04 02:39:10,315 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* BlockManager: ask 10.117.210.216:50010 to delete [blk_1106388932_32648321]
2017-09-04 02:39:11,501 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-04 02:39:11,501 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2017-09-04 02:39:13,315 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* BlockManager: ask 10.51.20.155:50010 to delete [blk_1106388932_32648321]
2017-09-04 02:39:16,316 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* BlockManager: ask 10.117.68.10:50010 to delete [blk_1106388932_32648321]
2017-09-04 02:39:17,505 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: there are no corrupt file blocks.
2017-09-04 02:39:20,711 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /user/hadoop/.staging/job_1504439356030_2177/job_1504439356030_2177_1.jhist. BP-1512605171-10.117.68.10-1461148241300 blk_1106388933_32648322{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-81a1b17d-50e8-44c1-8be8-09268a086f52:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-946f16ae-f0cb-4cdc-a09a-9f319537bba2:NORMAL|RBW], ReplicaUnderConstruction[[DISK]DS-4c528262-b957-4c5d-8313-2aaaddbe0a00:NORMAL|RBW]]}
2017-09-04 02:39:21,229 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/hadoop/.staging/job_1504439356030_2177/job_1504439356030_2177_1.jhist for DFSClient_NONMAPREDUCE_-845063282_1
2017-09-04 02:39:21,354 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 10.51.20.155
2017-09-04 02:39:21,354 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Rolling edit logs
2017-09-04 02:39:21,354 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 294625237
2017-09-04 02:39:21,661 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 396 Total time for transactions(ms): 16 Number of transactions batched in Syncs: 2 Number of syncs: 198 SyncTimes(ms): 5058 2980
2017-09-04 02:39:21,672 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /data/hadoop/dfs/nn/current/edits_inprogress_0000000000294625237 -> /data/hadoop/dfs/nn/current/edits_0000000000294625237-0000000000294625632
2017-09-04 02:39:21,690 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 294625633
2017-09-04 02:39:27,692 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6001 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:28,698 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7008 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:29,700 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 8009 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:30,701 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 9010 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:31,701 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 10011 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:32,703 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 11012 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:33,704 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 12013 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:34,704 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 13014 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:35,706 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 14015 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:36,706 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 15016 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:37,708 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 16017 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:38,709 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 17018 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:39,709 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 18019 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:40,711 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 19020 ms (timeout=20000 ms) for a response for startLogSegment(294625633). Succeeded so far: [10.51.20.155:8485]
2017-09-04 02:39:41,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-09-04 02:39:41,692 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: starting log segment 294625633 failed for required journal (JournalAndStream(mgr=QJM to [10.117.68.10:8485, 10.51.20.155:8485, 10.117.210.216:8485], stream=null))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.startLogSegment(QuorumJournalManager.java:403)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.startLogSegment(JournalSet.java:107)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$3.apply(JournalSet.java:222)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.startLogSegment(JournalSet.java:219)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.startLogSegment(FSEditLog.java:1181)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1150)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1231)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6119)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:908)
        at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:139)
        at org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:11214)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2017-09-04 02:39:41,712 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-09-04 02:39:41,717 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
------ zkfailover (ZKFC) log

2017-09-04 02:39:29,219 FATAL org.apache.hadoop.ha.ActiveStandbyElector: Received stat error from Zookeeper. code:CONNECTIONLOSS. Not retrying further znode monitoring connection errors.
2017-09-04 02:39:29,468 INFO org.apache.zookeeper.ZooKeeper: Session: 0x35e3b531fd611b6 closed
2017-09-04 02:39:29,468 FATAL org.apache.hadoop.ha.ZKFailoverController: Fatal error occurred:Received stat error from Zookeeper. code:CONNECTIONLOSS. Not retrying further znode monitoring connection errors.
2017-09-04 02:39:29,468 INFO org.apache.hadoop.ipc.Server: Stopping server on 8019
2017-09-04 02:39:29,468 WARN org.apache.hadoop.ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x35e3b531fd611b6
2017-09-04 02:39:29,469 WARN org.apache.hadoop.ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x35e3b531fd611b6
2017-09-04 02:39:29,469 WARN org.apache.hadoop.ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x35e3b531fd611b6
2017-09-04 02:39:29,469 INFO org.apache.hadoop.ha.ActiveStandbyElector: Yielding from election
2017-09-04 02:39:29,469 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2017-09-04 02:39:29,469 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 8019
2017-09-04 02:39:29,469 WARN org.apache.hadoop.ha.ActiveStandbyElector: Ignoring stale result from old client with sessionId 0x35e3b531fd611b6
2017-09-04 02:39:29,469 INFO org.apache.hadoop.ha.HealthMonitor: Stopping HealthMonitor thread
2017-09-04 02:39:29,469 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down





-------standby namenode
2017-09-04 09:43:20,108 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2017-09-04 09:43:20,108 INFO org.apache.hadoop.ipc.Server: IPC Server handler 27 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 10.117.68.10:54209 Call#74327 Retry#14: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2017-09-04 09:43:20,109 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby


Cause: when the active NameNode starts a new edit-log segment, it needs a majority of the JournalNodes to write it and respond. Within the configured timeout only one JournalNode responded, so the active NameNode treated the required journal as failed and shut itself down. The zkfailover (ZKFC) process did detect the NameNode failure, but around 02:39 ZooKeeper was taking too long to sync its transaction log to disk, the ZKFC's session timed out, and the ZKFC itself shut down. No automatic failover was triggered, the former standby NameNode stayed in standby, and the whole Hadoop cluster became unavailable.
Solution:

1. In the NameNode's configuration files, increase the JournalNode write-timeout parameters (the default is 20000 ms).

Add to hdfs-site.xml:

dfs.qjournal.start-segment.timeout.ms = 90000
dfs.qjournal.select-input-streams.timeout.ms = 90000
dfs.qjournal.write-txns.timeout.ms = 90000

Add to core-site.xml:

ipc.client.connect.timeout = 90000
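
For reference, the same keys in the XML form that hdfs-site.xml and core-site.xml actually use (a sketch with the values above; merge it into the existing files rather than replacing them, and restart the NameNodes for the settings to take effect):

<!-- hdfs-site.xml -->
<property>
  <name>dfs.qjournal.start-segment.timeout.ms</name>
  <value>90000</value>
</property>
<property>
  <name>dfs.qjournal.select-input-streams.timeout.ms</name>
  <value>90000</value>
</property>
<property>
  <name>dfs.qjournal.write-txns.timeout.ms</name>
  <value>90000</value>
</property>

<!-- core-site.xml -->
<property>
  <name>ipc.client.connect.timeout</name>
  <value>90000</value>
</property>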

2. Disable ZooKeeper's forced syncing of its transaction log (forceSync):

forceSync=no
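
A hedged sketch of how to apply this. In ZooKeeper releases of this era, forceSync is read as the Java system property zookeeper.forceSync rather than necessarily as a zoo.cfg key, so the JVM-flag form is the safer bet; check your version's admin guide. Also note that ZooKeeper lists forceSync among its unsafe options: with it disabled, the transaction log is not fsynced before writes are acknowledged, so a power loss can drop committed transactions.

# Preferred: pass as a JVM system property to the ZooKeeper server process
# (append to the server's JVM options in the env script your distribution uses)
-Dzookeeper.forceSync=no

# Some releases also honor the plain key in zoo.cfg:
# forceSync=no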



------------------------------- The problem recurred later ------------------------------

After the adjustments above, the problem recently recurred:
WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 89107 ms (timeout=90000 ms) for a response for startLogSegment(365455459). Succeeded so far: [10.117.68.10:8485]
Error: starting log segment 365455459 failed for required journal (JournalAndStream(mgr=QJM to [10.117.68.10:8485, 10.51.20.155:8485, 10.117.210.216:8485], stream=null))
java.io.IOException: Timed out waiting 90000ms for a quorum of nodes to respond.
Within 90 s (up from the default 20 s) the NameNode still received a response from only one JournalNode and failed to get a majority of JournalNodes to respond (the JournalNode logs again showed IPC errors), so the active NameNode exited.
ZooKeeper failure analysis:
The ZooKeeper logs report a communication failure:
java.lang.Exception: shutdown Leader! reason: Not sufficient followers synced, only synced with sids: [ 2 ]
The leader lost its followers' responses, so the ZooKeeper server shut down and the ensemble went back into leader election; while the election was running, ZooKeeper served no clients. At that point the ZKFailoverController, the ResourceManager, and other services that could not reach ZooKeeper exited, leaving the whole cluster effectively paralyzed:
1. Because the ZKFailoverController had failed, no NameNode failover took place and the standby NameNode stayed in standby state.
2. With the ResourceManager down, the cluster no longer provided compute services.
The logs show that the election started at 02:49 and, after several rounds, finished at 02:52, when the ZooKeeper ensemble successfully elected a leader and resumed serving clients.

Since these failures were recent, we suspected an application-side change. After talking with the developers, we learned that a new CTR application had just gone live, scheduled through Oozie with a 02:30 trigger and a runtime of roughly 40 minutes, which matches the time window of the incidents.
To verify that this application was the cause, we had the developers re-run the workflow and traced it. When two Hive jobs in the Oozie workflow (named tracker_click_dwnl_node and combine_request_node, each running about 20 minutes) executed at the same time, the cluster management service (CM) raised a flood of alerts about performance and inter-component RPC failures. The YARN web UI showed the two jobs assigned to two different nodes (out of three in total); while they ran, those nodes responded extremely slowly or not at all, which in turn caused various services to fail or even exit.
Solution:
1. Have the developers adjust the job dependencies in the Oozie workflow so the two Hive jobs do not run in the same time window; then even if one node misbehaves, normal service is not affected.
2. Extend the ZooKeeper inter-node communication timeout by raising syncLimit from the default of 5 ticks to 30 ticks (see the zoo.cfg sketch after this list).
3. If problems persist: the CTR job processes production data at a fairly large volume, so ideally migrate the CTR application elsewhere and optimize the code.
4. The test cluster has only three nodes, which carry all the component services plus the Spark and Hive workloads; once two nodes misbehave at the same time, the cluster is essentially paralyzed. Expanding resources and adding nodes is the last-resort option.
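
For item 2, a zoo.cfg sketch (tickTime and initLimit are shown at their common defaults for context; confirm against the cluster's actual zoo.cfg, and restart the ZooKeeper servers for the change to take effect):

# zoo.cfg
tickTime=2000     # one tick = 2000 ms (default)
initLimit=10      # follower initial-connect limit, shown for context
syncLimit=30      # raised from the default 5 ticks (10 s) to 30 ticks (60 s)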

