Problems Encountered in Hadoop Operations (continuously updated...)


1. HDFS-related problems

Symptom 1: In an HDFS HA deployment, both NameNodes come up as standby at startup

The NameNode log shows:
WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unable to trigger a roll of the active NN
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1691)
......
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
The Failover Controller log shows:
WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
java.lang.IllegalArgumentException: Unable to determine service address for namenode 'namenode123'
        at org.apache.hadoop.hdfs.tools.NNHAServiceTarget.<init>(NNHAServiceTarget.java:76)
        at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.dataToTarget(DFSZKFailoverController.java:69)
......
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

Version: 2.5.0+cdh5.2.0

Cause analysis:
Problems like this are generally caused by ZKFC. ZooKeeper will hold a znode /hadoop-ha/${dfs.nameservices}/ActiveBreadCrumb whose content should contain the string namenode123. This means ZKFC is looking for a NameNode whose ID is namenode123, but the NameNodes started this time do not use that ID (the current IDs can be seen in hdfs-site.xml). Since neither ZKFC can resolve the recorded NameNode, neither NameNode gets promoted, and both stay standby. Why ZKFC picked up a stale NameNode ID is still unclear.
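
A quick way to confirm is to dump that znode. A sketch, assuming the zookeeper-client wrapper shipped with CDH and a hypothetical nameservice name nameservice1 (substitute your own):

# Print the ActiveBreadCrumb payload; it should contain the stale
# NameNode ID (namenode123 in this case):
zookeeper-client -server zk1:2181 get /hadoop-ha/nameservice1/ActiveBreadCrumb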

Solution:
Re-initialize the HA state in ZooKeeper, either through the CM UI or by running sudo -u hdfs hdfs zkfc -formatZK, then restart HDFS.
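
The reset as a sketch (run on a NameNode host; -formatZK prompts before overwriting an existing znode and accepts -force to skip the prompt):

# Rebuild /hadoop-ha/<nameservice> in ZK from the current hdfs-site.xml:
sudo -u hdfs hdfs zkfc -formatZK
# Restart HDFS afterwards; one NameNode should then become active.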


Symptom 2: The NameNode unexpectedly receives SIGNAL 15 and its process exits

The NameNode log shows nothing unusual apart from the following line:
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM

The problem occurred several times, always at the same time of day.

Version: 2.5.0+cdh5.2.0

Cause analysis:
Since the crashes always happened at the same time of day, a scheduled job was suspected. Checking the system log /var/log/messages around the time of the failures turned up entries like these:
Jan 7 14:00:01 host1 ntpd[32101]: ntpd exiting on signal 15
Jan 7 13:59:59 host1 ntpd[44764]: ntpd 4.2.4p8@1.1612-o Fri Feb 22 11:23:27 UTC 2013 (1)
Jan 7 13:59:59 host1 ntpd[44765]: precision = 0.143 usec
Jan 7 13:59:59 host1 ntpd[44765]: Listening on interface #0 wildcard, 0.0.0.0#123 Disabled
Jan 7 13:59:59 host1 ntpd[44765]: Listening on interface #1 wildcard, ::#123 Disabled
......

So signal 15 was sent when the ntpd service was stopped, but why would the NameNode process receive that signal?

Later we found a CentOS bug report that matches our scenario very closely. The root cause has not been fully established, but I personally lean toward the CentOS-bug explanation.

Solution:
The ntpd stop command had been added by a job that synchronized time with ntpdate (which requires ntpd to be stopped first). Abandon that approach and synchronize time with the ntpd daemon instead. For the differences between the two methods, search for "the difference between ntpdate and ntpd".
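
A minimal sketch of the change; the cron line and NTP server name below are hypothetical reconstructions of the kind of job that was removed:

# Problematic: a periodic stop/step/start cycle. On the affected CentOS
# release, the stop step appears able to deliver SIGTERM to unrelated
# long-running processes such as the NameNode.
# 0 * * * * service ntpd stop && ntpdate ntp.example.com && service ntpd start

# Preferred: let the ntpd daemon keep the clock in sync continuously.
chkconfig ntpd on
service ntpd start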


2. YARN-related problems

Symptom 1: NodeManager fails to start, with the following error

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Failed to initialize LocalizationService
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.serviceInit(ResourceLocalizationService.java:247)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:234)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:250)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:445)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:492)
Caused by: EPERM: Operation not permitted
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:228)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:642)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:434)
        at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1063)
        at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:157)
        at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:197)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:721)
        at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:717)
        at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
        at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:717)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.serviceInit(ResourceLocalizationService.java:244)
        ... 9 more

Version: 2.5.0+cdh5.2.0

Cause analysis: similar to https://issues.apache.org/jira/browse/YARN-42, but the exact cause is unknown.

Solution:

On the NodeManager node that fails to start, clear out everything under the yarn.nodemanager.local-dirs directories, then start the NodeManager again.
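
A cleanup sketch; /data/yarn/nm is a hypothetical local dir, so check the actual value of yarn.nodemanager.local-dirs in yarn-site.xml first:

# With the NodeManager stopped, wipe the localized cache whose
# permissions have gone bad:
rm -rf /data/yarn/nm/*
# On restart the NodeManager recreates usercache/, filecache/ and
# nmPrivate/ with the correct ownership.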

Symptom 2: NodeManager fails to start, with the following error

INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Adding container_1415870304563_7717_01_000011 to recently stopped containers
INFO org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl failed in state INITED; cause: java.lang.NullPointerException
java.lang.NullPointerException
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverContainer(ContainerManagerImpl.java:289)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:252)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:235)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:250)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:445)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:492)
INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService: org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService waiting for pending aggregation during exit

Version: 2.5.0+cdh5.2.0

Cause analysis: https://issues.apache.org/jira/browse/YARN-2816 (NodeManager fails to start with an NPE during container recovery)

Solution:

On the NodeManager node that fails to start, run rm -r /tmp/hadoop-yarn/* and then restart the NodeManager.
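
For context, /tmp/hadoop-yarn holds the NM recovery state store; a sketch, assuming the default yarn.nodemanager.recovery.dir (${hadoop.tmp.dir}/yarn-nm-recovery):

# The container-recovery state lives in a leveldb under the recovery dir;
# inspect it, then discard the state that triggers the NPE:
ls /tmp/hadoop-yarn/yarn-nm-recovery
rm -r /tmp/hadoop-yarn/*
# Restart the NodeManager afterwards.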

Symptom 3: A NodeManager shows up as an Unhealthy Node on the YARN web UI (port 8088)

The ResourceManager log also contains: Node hostname:8041 reported UNHEALTHY with details: 1/1 log-dirs turned bad: /var/log/hadoop-yarn/container

Version: 2.5.0+cdh5.2.0

Cause analysis: /var/log/hadoop-yarn/container is the default NodeManager container log directory (yarn.nodemanager.log-dirs). Inspecting it showed that its owner and group were not the defaults, yarn and hadoop.
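
A quick ownership check:

ls -ld /var/log/hadoop-yarn/container
# expect owner yarn and group hadoop in the third and fourth columns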

Solution:

Delete the container log directory and restart the NodeManager. It will recreate the directory with the default owner and group, which resolves the problem.
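
The fix as a sketch (an in-place chown back to yarn:hadoop may also work, but that variant is untested here):

# Remove the bad log dir; the NodeManager recreates it on startup with
# the default owner and group:
rm -rf /var/log/hadoop-yarn/container
# untested alternative:
# chown -R yarn:hadoop /var/log/hadoop-yarn/container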

3. HBase-related problems

Symptom 1: A RegionServer cannot connect to ZooKeeper, blocking HBase reads and writes. The affected RegionServer's log shows the following errors

WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=zk1:2181,zk2:2181,zk3:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x35e44a80, quorum=zk1:2181,zk2:2181,zk3:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)

......

Version: 2.5.0+cdh5.2.0

Cause analysis:

The ZooKeeper log contained the warning "Too many connections from /192.168.*.* - max is 60", and the IP was exactly that of the problem RegionServer. Running "netstat -antp | grep 2181 | wc -l" on the problem host gave 180+, versus about 20 on healthy hosts. So this host really had opened far too many ZK client connections, exceeding ZooKeeper's per-host limit (maxClientCnxns, 60 here), which is why the RegionServer could no longer connect.
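
To pin down which process holds the connections, group them by owning PID; a sketch (the awk field assumes the usual netstat -antp column layout):

# Total ZK client connections from this host:
netstat -antp | grep :2181 | wc -l
# Connections grouped by owning process; the dominant entry is the leaker
# (HiveServer2 in this case):
netstat -antp | grep :2181 | awk '{print $7}' | sort | uniq -c | sort -rn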

Solution:

Get rid of the excess ZK client connections on the problem host. Here it turned out that the HiveServer2 service on that host was holding most of them, so restarting that service was enough. As for why HiveServer2 leaked connections in the first place, it is most likely its bug HIVE-8596.
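
Restarting HiveServer2 releases the leaked sessions. On a package-based install the service name is typically hive-server2 (CM-managed clusters restart it from the CM UI instead):

service hive-server2 restart
# verify the connection count drops back to normal:
netstat -antp | grep :2181 | wc -l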

