A Collection of Problems Encountered While Setting Up Hadoop


1. When starting Hadoop with ./sbin/start-dfs.sh or ./sbin/start-all.sh, warnings like the following appear:

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

….

Java: ssh: Could not resolve hostname Java: Name or service not known

HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known

64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known

….

This error occurs on 64-bit operating systems: the native library files shipped with the official Hadoop download (e.g. lib/native/libhadoop.so.1.0.0) are compiled for 32-bit, so running them on a 64-bit system produces the warnings above.

One solution is to recompile Hadoop on the 64-bit system; another is to add the following two lines to hadoop-env.sh and yarn-env.sh:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

Note: /usr/zkt/hadoop2.2.0/hadoop-2.2.0 is the custom path where the downloaded Hadoop archive was extracted.
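For example, the two lines can be appended to both config scripts like this (a sketch only; it assumes HADOOP_HOME is already exported and that the Hadoop 2.x config files live under $HADOOP_HOME/etc/hadoop/):

# append the native-library settings to hadoop-env.sh and yarn-env.sh
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native' >> $HADOOP_HOME/etc/hadoop/yarn-env.sh
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> $HADOOP_HOME/etc/hadoop/yarn-env.sh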

2. After Hadoop is installed, the following warning often appears:

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

I searched through many articles, and they all say it is related to the OS word size; I was using 64-bit CentOS 6.6.

A couple of days ago, while building a Docker image, I came across a step that fixes this; I tried it and the warning no longer appears.
Finding the cause:
Check the native library file:
[root@db96 hadoop]# file /usr/cloud/hadoop/lib/native/libhadoop.so.1.0.0
/usr/cloud/hadoop/lib/native/libhadoop.so.1.0.0: ELF 32-bit LSB shared object,
Intel 80386, version 1 (SYSV), dynamically linked, not stripped
This is a 32-bit Hadoop build installed on a 64-bit Linux system. The native libraries were compiled for a different environment, so they cannot be used, and the freshly installed cluster is unusable as-is.

Solution 1: recompile Hadoop yourself. (In this example the build was done on the slave node db99; you can also build on the master db96, as long as the machines' environments are identical.)

For the manual build steps (the Hadoop installed on 64-bit Linux is a 32-bit build and must be compiled by hand), see:
http://blog.csdn.net/wulantian/article/details/38111999
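The standard native build command is roughly the following (a sketch only, run from the Hadoop source directory; it assumes the JDK, Maven, protobuf 2.5, cmake and the gcc toolchain are already installed):

mvn package -Pdist,native -DskipTests -Dtar
# the rebuilt 64-bit native libraries end up under hadoop-dist/target/hadoop-2.2.0/lib/native;
# copy them over $HADOOP_HOME/lib/native on every node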
Solution 2:
First download hadoop-native-64-2.4.0.tar:
http://dl.bintray.com/sequenceiq/sequenceiq-bin/hadoop-native-64-2.4.0.tar
If you are on Hadoop 2.6, download this one instead:
http://dl.bintray.com/sequenceiq/sequenceiq-bin/hadoop-native-64-2.6.0.tar
After downloading, extract it into Hadoop's native directory, overwriting the existing files:
tar -xvf hadoop-native-64-2.4.0.tar -C hadoop/lib/native/
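The whole replacement can be done in one go and then verified (a sketch; HADOOP_HOME pointing at the install directory is an assumption):

wget http://dl.bintray.com/sequenceiq/sequenceiq-bin/hadoop-native-64-2.4.0.tar
tar -xvf hadoop-native-64-2.4.0.tar -C $HADOOP_HOME/lib/native/
file $HADOOP_HOME/lib/native/libhadoop.so.1.0.0   # should now report "ELF 64-bit LSB shared object"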

3. After running start-dfs.sh, the DataNode does not start

2014-06-18 20:34:59,622 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/hdfs/data: namenode clusterID = CID-af6f15aa-efdd-479b-bf55-77270058e4f7; datanode clusterID = CID-736d1968-8fd1-4bc4-afef-5c72354c39ce

The log shows that the cause is a mismatch between the DataNode's clusterID and the NameNode's clusterID.

Open the datanode and namenode directories configured in hdfs-site.xml and look at the VERSION file inside each one's current folder: the clusterID values really are inconsistent, just as the log says. Change the clusterID in the DataNode's VERSION file to match the NameNode's, restart DFS (run start-dfs.sh), and jps will then show the DataNode running normally.
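A quick way to compare the two IDs looks like this (a sketch; the name directory path is an assumption, the data directory path is taken from the log above):

grep clusterID /usr/local/hadoop/hdfs/name/current/VERSION   # namenode storage dir (assumed)
grep clusterID /usr/local/hadoop/hdfs/data/current/VERSION   # datanode storage dir, as in the log
# edit the datanode's VERSION so its clusterID matches the namenode's, then run start-dfs.sh again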

Cause of the problem: after DFS was formatted the first time, Hadoop was started and used, and later the format command (hdfs namenode -format) was run again. Reformatting generates a new clusterID for the NameNode, while the DataNode keeps its old one.
Solution: delete the tmp folder under the Hadoop root directory and format again. Note: in cluster mode, be sure to also remove the tmp directory on the slave nodes.
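If you take the delete-and-reformat route, the sequence is roughly the following (a sketch; it assumes the data/tmp directories live under $HADOOP_HOME and that losing the existing HDFS data is acceptable):

stop-dfs.sh
rm -rf $HADOOP_HOME/tmp        # repeat on every slave node in cluster mode
hdfs namenode -format
start-dfs.sh
jps                            # the DataNode process should now be listed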

4. Sqoop fails when importing MySQL data

Importing MySQL data with Sqoop produced the following error:

14/12/03 16:37:58 ERROR manager.SqlManager: Error reading from database: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@54b0a583 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@54b0a583 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:930)
        at com.mysql.jdbc.MysqlIO.checkForOutstandingStreamingData(MysqlIO.java:2694)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1868)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2109)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2642)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2571)
        at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1464)
        at com.mysql.jdbc.ConnectionImpl.getMaxBytesPerChar(ConnectionImpl.java:3030)
        at com.mysql.jdbc.Field.getMaxBytesPerCharacter(Field.java:592)
        at com.mysql.jdbc.ResultSetMetaData.getPrecision(ResultSetMetaData.java:444)
        at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:285)
        at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:240)
        at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:226)
        at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:295)
        at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1773)
        at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1578)
        at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:96)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
14/12/03 16:37:58 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
        at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1584)
        at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:96)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:478)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

This is caused by a bug in mysql-connector-java. The error occurred with mysql-connector-java-5.1.10-bin.jar; upgrading to mysql-connector-java-5.1.32-bin.jar fixed it. mysql-connector-java-5.1.32-bin.jar can be downloaded from http://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.32.tar.gz; after extracting the archive, mysql-connector-java-5.1.32-bin.jar is inside the extracted directory.
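The replacement itself is just a download plus a copy into Sqoop's lib directory (a sketch; $SQOOP_HOME/lib as the driver location is an assumption about your layout):

wget http://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.32.tar.gz
tar -xzf mysql-connector-java-5.1.32.tar.gz
cp mysql-connector-java-5.1.32/mysql-connector-java-5.1.32-bin.jar $SQOOP_HOME/lib/
rm $SQOOP_HOME/lib/mysql-connector-java-5.1.10-bin.jar   # remove the old driver so only one version is on the classpath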

5. When starting HBase, HMaster starts and then dies

The error log is as follows:

2014-09-15 14:09:18,944 INFO  [master:master:60000] zookeeper.ZooKeeper: Session: 0x24877e2a1e60002 closed
2014-09-15 14:09:18,944 INFO  [master:master:60000-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-09-15 14:09:19,045 INFO  [master,60000,1410761351315.splitLogManagerTimeoutMonitor] master.SplitLogManager$TimeoutMonitor: master,60000,1410761351315.splitLogManagerTimeoutMonitor exiting
2014-09-15 14:09:19,050 INFO  [master:master:60000] zookeeper.ZooKeeper: Session: 0x24877e2a1e60001 closed
2014-09-15 14:09:19,051 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-09-15 14:09:19,051 INFO  [master:master:60000] master.HMaster: HMaster main thread exiting
2014-09-15 14:09:19,051 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:194)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2793)

Explanation: this happened because I had reformatted the NameNode earlier, so the HBase data stored in HDFS was lost while the /hbase node in ZooKeeper had not been deleted.

Solution: among the three ZooKeeper hosts, pick the leader, connect with the ZooKeeper client zkCli.sh, and delete the /hbase node:

rmr /hbase
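The full sequence might look like this (a sketch; $ZOOKEEPER_HOME and port 2181 are assumptions about your ZooKeeper layout):

$ZOOKEEPER_HOME/bin/zkServer.sh status                 # run on each ZK host; the leader reports "Mode: leader"
$ZOOKEEPER_HOME/bin/zkCli.sh -server localhost:2181    # open the ZooKeeper command-line client on the leader
rmr /hbase                                             # inside the zkCli shell: delete HBase's znode
# then restart HBase with start-hbase.sh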

If that is not your cause, consider the following:
HMaster shutting down right after startup can have many causes; based on my own experience, you can try these:
1. Reformat the NameNode and restart HMaster to see whether the problem persists;
2. Check whether the HDFS permissions on the /hbase directory are set correctly;
3. Clear the HBase data on every node and restart HBase; my guess is that inconsistent state across the nodes keeps HMaster from starting.
There are surely other cases as well; I will keep accumulating notes as I learn more.
