Hadoop: why does HMaster stop by itself?


Question: after setting up a Hadoop test environment, HMaster keeps stopping on its own. Right after startup, jps shows the HMaster process, but a few seconds later it is gone from jps. What is going on?
The log shows:
[root@master2 bin]# more /data/hbase-0.96.0-hadoop2/logs/hbase-root-master-master2.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hbase-0.96.0-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Solution:
Delete slf4j-log4j12-1.7.5.jar from HBase's lib/ directory on the master; it duplicates the SLF4J binding jar already present under /home/xuhui/hadoop-2.2.0/share/hadoop/common/lib.
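A sketch of that cleanup, assuming the HBase install path from the log output above (the exact slf4j-log4j12 version bundled in your lib/ may differ, so the sketch matches any version):

```shell
# Assumed install path, taken from the log output above -- adjust as needed.
HBASE_LIB=/data/hbase-0.96.0-hadoop2/lib

# Move the duplicate SLF4J binding out of the classpath instead of
# deleting it outright, so the change is easy to undo.
STATUS=skipped
for jar in "$HBASE_LIB"/slf4j-log4j12-*.jar; do
  if [ -f "$jar" ]; then
    mkdir -p "$HBASE_LIB/removed"
    mv "$jar" "$HBASE_LIB/removed/"
    STATUS=moved
  fi
done
echo "$STATUS"
```

Keeping only one SLF4J binding on the combined classpath silences the warning; the binding that remains (here, Hadoop's) is the one that will be used.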
Deleted it and it still doesn't work. How do I fix this? The log file is empty.
Solution:
Clearing Hadoop's tmp folder still doesn't help either! Now what?
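For reference, a sketch of clearing that folder. `/tmp/hadoop-${user.name}` is Hadoop's default `hadoop.tmp.dir`; if your core-site.xml sets a different path, clear that instead. Stop all daemons first, because this destroys HDFS metadata and blocks:

```shell
# hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}; if core-site.xml
# overrides it, substitute that path here.
TMP_DIR="/tmp/hadoop-$USER"

STATUS=skipped
if [ -d "$TMP_DIR" ]; then
  # Destroys NameNode metadata and DataNode block data -- stop the
  # daemons and be sure you can re-format HDFS before running this.
  rm -rf "$TMP_DIR"
  STATUS=cleared
fi
echo "$STATUS"
```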
Solution:
Re-format the NameNode; on all machines:
hdfs namenode -format
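A sketch of the full re-format sequence. Note that `hdfs namenode -format` itself only needs to run on the NameNode host; what needs cleaning on every machine is the tmp/data directories. This assumes the Hadoop bin and sbin scripts are on PATH:

```shell
STATUS=skipped
if command -v hdfs >/dev/null 2>&1; then
  stop-dfs.sh                    # stop NameNode and DataNodes first
  hdfs namenode -format -force   # wipes all HDFS metadata -- irreversible
  start-dfs.sh                   # bring HDFS back up before starting HBase
  STATUS=formatted
fi
echo "$STATUS"
```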

Replace the Hadoop jars under hbase/lib with the 2.2 jars from the Hadoop installation:
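A sketch of that jar swap, assuming the install paths from the log output above; the bundled jars are moved aside rather than deleted so the change can be rolled back:

```shell
HBASE_LIB=/data/hbase-0.96.0-hadoop2/lib   # assumed path from the log
HADOOP_HOME=/data/hadoop-2.2.0             # assumed path from the log

STATUS=skipped
if [ -d "$HBASE_LIB" ] && [ -d "$HADOOP_HOME" ]; then
  # Park HBase's bundled hadoop-*.jar files outside the classpath.
  mkdir -p "$HBASE_LIB/hadoop-jars.orig"
  mv "$HBASE_LIB"/hadoop-*.jar "$HBASE_LIB/hadoop-jars.orig/" 2>/dev/null
  # Copy in the jars from the running Hadoop 2.2.0 installation.
  find "$HADOOP_HOME/share/hadoop" -name 'hadoop-*-2.2.0.jar' \
       -not -name '*sources*' -not -name '*tests*' \
       -exec cp {} "$HBASE_LIB/" \;
  STATUS=replaced
fi
echo "$STATUS"
```

The point of the swap is that HBase's client jars must match the Hadoop version it talks to; mismatched RPC versions are a classic cause of the master dying shortly after startup.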
Still not working? Now what?
The log shows:
2015-05-14 13:58:46,284 INFO [master:master1:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS

2015-05-14 13:58:47,002 WARN [Thread-45] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

2015-05-14 13:58:47,010 WARN [master:master1:60000] util.FSUtils: Unable to create version file at hdfs://master1:9001/hbase, retrying
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

2015-05-14 13:58:57,051 WARN [Thread-48] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

2015-05-14 13:59:17,088 FATAL [master:master1:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

2015-05-14 13:59:17,165 ERROR [Thread-5] hdfs.DFSClient: Failed to close file /hbase/hbase.version
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

Solution:
After cleaning out each NameNode and DataNode and re-formatting the NameNode, the DataNodes started up, and HBase's HMaster started and stayed up as well.
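The fatal error above says the NameNode sees 0 running DataNodes, which is why HMaster cannot write /hbase/hbase.version. Before restarting HBase it is worth confirming the DataNodes have actually registered; a quick check, assuming `hdfs` and `jps` are on PATH:

```shell
STATUS=skipped
if command -v hdfs >/dev/null 2>&1; then
  # Reports live/dead DataNode counts as seen by the NameNode; the
  # HMaster error above corresponds to 0 DataNodes in this report.
  hdfs dfsadmin -report
  # A common cause after re-formatting only the NameNode: DataNodes refuse
  # to join because their stored clusterID no longer matches the NameNode's.
  # Clearing the DataNode data directories (as done above) resolves that.
  jps
  STATUS=checked
fi
echo "$STATUS"
```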
