Hadoop Cluster Upgrade Notes

The cluster previously ran hadoop-0.20.3, hbase-0.90.4, zookeeper-3.3.4, and hive-0.8.1. Hadoop itself was reasonably stable with essentially no serious bugs, but querying HBase through Hive was riddled with problems: HBase bugs of every kind, such as lost data, lost tables, and regionservers crashing frequently. All the patching and firefighting was about to make my head explode, so I decided to give HBase a thorough upgrade.

I.   First, I upgraded HBase to 0.94.0. The upgrade itself (a simple replacement of the installation package) went smoothly, but startup failed with the following error:

2012-06-26 15:59:19,051 WARN org.apache.hadoop.hbase.util.FSUtils: Unable to create cluster ID file in hdfs://server:9000/hbase, retrying in 10000msec: org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, org.apache.hadoop.fs.permission.FsPermission, java.lang.String, boolean, boolean, short, long)
        at java.lang.Class.getMethod(Class.java:1605)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:517)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
        at org.apache.hadoop.ipc.Client.call(Client.java:1066)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy10.create(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy10.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3245)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:713)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:182)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:555)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:536)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:443)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:435)
        at org.apache.hadoop.hbase.util.FSUtils.setClusterId(FSUtils.java:463)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:357)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:127)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:112)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:480)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:343)
        at java.lang.Thread.run(Thread.java:662)


 

The source code shows that the ClientProtocol.create method in hadoop-0.20.3 is:

public void create(String src, FsPermission masked, String clientName, boolean overwrite, short replication, long blockSize) throws IOException;

This is incompatible with the create method that HBase 0.94.0 invokes: the NoSuchMethodException in the log above shows the client asking for create(String, FsPermission, String, boolean, boolean, short, long), a signature with an extra boolean parameter that the 0.20.3 NameNode does not provide.
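
As a quick cross-check, each HBase release bundles the Hadoop client jar it was built against, so comparing that jar with the version the cluster actually runs makes this kind of RPC mismatch easy to spot. A minimal sketch, assuming the install paths used elsewhere in this post:

# Which Hadoop client jar does this HBase release ship with?
ls /usr/local/hbase/lib/ | grep hadoop-core

# Which Hadoop version is the cluster actually running?
hadoop version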

II.  So Hadoop itself had to be upgraded. After reviewing the available releases, I decided to go from 0.20.3 to 1.0.3.

Unlike HBase, upgrading Hadoop can involve incompatible data formats between versions (this was especially pronounced before the 1.0 release), so the process is considerably more involved than an HBase upgrade.

Besides simply swapping out the Hadoop installation files, the HDFS data and metadata must be upgraded as well.

Upgrading HDFS data and metadata carries real risk, including potential data loss, so it is best to validate the procedure on a test cluster before touching production. The upgrade steps are as follows (a consolidated command sketch appears after the list):

 

1. Make sure the filesystem is healthy (run a full check with the fsck tool)

2. Clear out HDFS and MapReduce temporary data

3. Make sure any previous upgrade has been finalized.

4. Shut down MapReduce and kill any orphaned tasks left on the tasktrackers.

5. Shut down HDFS and back up the namenode metadata

6. Replace the installation files (on every machine in the cluster)

7. Start HDFS with the -upgrade option (start-dfs.sh -upgrade)

8. Wait until the upgrade completes (check progress with hadoop dfsadmin -upgradeProgress status)

9. Verify that HDFS is running correctly (fsck tool)

10. Start MapReduce

11. Roll back (start-dfs.sh -rollback) or finalize the upgrade (hadoop dfsadmin -finalizeUpgrade)
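
The same procedure, consolidated into commands. This is a sketch rather than a tested script: the dfs.name.dir and backup paths are assumptions for illustration, while the upgrade commands themselves are the ones listed above.

# 1/9. full filesystem health check (run before and after the upgrade)
hadoop fsck / -files -blocks -locations > fsck-report.log

# 4/5. stop MapReduce first, then HDFS
stop-mapred.sh
stop-dfs.sh

# 5. back up the namenode metadata (replace /data/hadoop/name with your dfs.name.dir)
cp -r /data/hadoop/name /backup/namenode-$(date +%Y%m%d)

# 6. replace the installation files on every node, then:

# 7. start HDFS in upgrade mode
start-dfs.sh -upgrade

# 8. poll until the upgrade completes
hadoop dfsadmin -upgradeProgress status

# 11. finalize once satisfied, or roll back if something went wrong
hadoop dfsadmin -finalizeUpgrade
# start-dfs.sh -rollback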

III.  Upgrade ZooKeeper to 3.4.3 (process omitted)

IV.  Start HBase. At this point the following warning appears:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

This is a classpath conflict between the slf4j-log4j12 binding shipped with Hadoop (1.4.3, under /usr/local/hadoop/lib) and the one shipped with HBase (1.5.8, under /usr/local/hbase/lib, per the warning above). Deleting the slf4j-log4j12 jar from HBase's lib directory resolves it.
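
In command form (the jar path is taken from the warning above):

rm /usr/local/hbase/lib/slf4j-log4j12-1.5.8.jar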

With that, HBase starts normally and the HBase upgrade is complete.
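
A quick post-upgrade sanity check, as a sketch (output will vary by cluster; both commands are standard tooling of this era):

# cluster status from the HBase shell
echo "status 'detailed'" | hbase shell

# confirm the HBase directory in HDFS is healthy
hadoop fsck /hbase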

 

V.  After replacing the relevant jars and configuration files for Hive, starting Hive ran into trouble again:

2012-06-28 16:27:55,303 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2012-06-28 16:27:55,303 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2012-06-28 16:27:55,307 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2012-06-28 16:27:55,307 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2012-06-28 16:27:55,307 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2012-06-28 16:27:55,307 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2012-06-28 16:27:58,605 WARN  parse.SemanticAnalyzer (SemanticAnalyzer.java:genBodyPlan(5821)) - Common Gby keys:null
2012-06-28 16:27:58,850 WARN  hbase.HBaseConfiguration (HBaseConfiguration.java:<init>(48)) - instantiating HBaseConfiguration() is deprecated. Please use HBaseConfiguration#create() to construct a plain Configuration
2012-06-28 16:27:59,006 WARN  hbase.HBaseConfiguration (HBaseConfiguration.java:<init>(48)) - instantiating HBaseConfiguration() is deprecated. Please use HBaseConfiguration#create() to construct a plain Configuration
2012-06-28 16:27:59,539 ERROR CliDriver (SessionState.java:printError(380)) - Failed with exception java.io.IOException:java.lang.NullPointerException
java.io.IOException: java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:341)
        at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:154)
        at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1383)
        at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:266)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:173)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:135)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$1.next(HiveHBaseTableInputFormat.java:209)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$1.next(HiveHBaseTableInputFormat.java:1)
        at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:326)
        ... 12 more

Bundle "org.eclipse.jdt.core" requires系列的问题先不说,可以忽略,对hive的正常使用没影响。

My first thought was that this Hive version was incompatible with HBase, so I grudgingly upgraded Hive to 0.9.0.

That did not solve the problem; instead, a new one appeared:

2012-06-28 11:16:47,822 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2012-06-28 11:16:47,822 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2012-06-28 11:16:47,826 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2012-06-28 11:16:47,826 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2012-06-28 11:16:47,826 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2012-06-28 11:16:47,826 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2012-06-28 11:16:52,709 WARN  client.ZooKeeperSaslClient (ZooKeeperSaslClient.java:<init>(123)) - SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2012-06-28 11:16:52,720 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(1057)) - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)
2012-06-28 11:16:52,837 WARN  zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:retryOrThrow(195)) - Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master

Hive was complaining that it could not connect to ZooKeeper, which was strange since none of the configuration files had changed. In the end, adding the ZooKeeper settings to hive-site.xml solved it:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>Debian,hadoop25,hadoop26</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
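
To verify that the quorum in this configuration is actually reachable, ZooKeeper's four-letter commands are handy (hostnames and port taken from the property values above):

# each server should answer "imok"
echo ruok | nc Debian 2181
echo ruok | nc hadoop25 2181
echo ruok | nc hadoop26 2181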

With that fixed, running a MapReduce job surfaced yet another problem; the task attempts failed with:

MapAttempt TASK_TYPE="MAP" TASKID="task_201206281426_0002_m_000000" TASK_ATTEMPT_ID="attempt_201206281426_0002_m_000000_0" TASK_STATUS="FAILED" FINISH_TIME="1340876831790" HOSTNAME="Debian" ERROR="Error: java\.lang\.ClassNotFoundException: com\.google\.protobuf\.Messageat java\.net\.URLClassLoader$1\.run(URLClassLoader\.java:202)at java\.security\.AccessController\.doPrivileged(Native Method)at java\.net\.URLClassLoader\.findClass(URLClassLoader\.java:190)at java\.lang\.ClassLoader\.loadClass(ClassLoader\.java:306)at sun\.misc\.Launcher$AppClassLoader\.loadClass(Launcher\.java:301)at java\.lang\.ClassLoader\.loadClass(ClassLoader\.java:247)at org\.apache\.hadoop\.hbase\.io\.HbaseObjectWritable\.<clinit>(HbaseObjectWritable\.java:263)at org\.apache\.hadoop\.hbase\.ipc\.Invocation\.write(Invocation\.java:138)at org\.apache\.hadoop\.hbase\.ipc\.HBaseClient$Connection\.sendParam(HBaseClient\.java:537)at org\.apache\.hadoop\.hbase\.ipc\.HBaseClient\.call(HBaseClient\.java:898)at org\.apache\.hadoop\.hbase\.ipc\.WritableRpcEngine$Invoker\.invoke(WritableRpcEngine\.java:150)at $Proxy3\.getProtocolVersion(Unknown Source)at org\.apache\.hadoop\.hbase\.ipc\.WritableRpcEngine\.getProxy(WritableRpcEngine\.java:183)at org\.apache\.hadoop\.hbase\.ipc\.HBaseRPC\.getProxy(HBaseRPC\.java:303)at org\.apache\.hadoop\.hbase\.ipc\.HBaseRPC\.getProxy(HBaseRPC\.java:280)at org\.apache\.hadoop\.hbase\.ipc\.HBaseRPC\.getProxy(HBaseRPC\.java:332)at org\.apache\.hadoop\.hbase\.ipc\.HBaseRPC\.waitForProxy(HBaseRPC\.java:236)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.getHRegionConnection(HConnectionManager\.java:1284)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.getHRegionConnection(HConnectionManager\.java:1240)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.getHRegionConnection(HConnectionManager\.java:1227)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.locateRegionInMeta(HConnectionManager\.java:936)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.locateRegion(HConnectionManager\.java:832)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.locateRegion(HConnectionManager\.java:801)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.locateRegionInMeta(HConnectionManager\.java:933)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.locateRegion(HConnectionManager\.java:836)at org\.apache\.hadoop\.hbase\.client\.HConnectionManager$HConnectionImplementation\.locateRegion(HConnectionManager\.java:801)at org\.apache\.hadoop\.hbase\.client\.HTable\.finishSetup(HTable\.java:234)at org\.apache\.hadoop\.hbase\.client\.HTable\.<init>(HTable\.java:174)at org\.apache\.hadoop\.hive\.hbase\.HiveHBaseTableInputFormat\.getRecordReader(HiveHBaseTableInputFormat\.java:92)at org\.apache\.hadoop\.hive\.ql\.io\.HiveInputFormat\.getRecordReader(HiveInputFormat\.java:240)at org\.apache\.hadoop\.hive\.ql\.io\.CombineHiveInputFormat\.getRecordReader(CombineHiveInputFormat\.java:522)at org\.apache\.hadoop\.mapred\.MapTask$TrackedRecordReader\.<init>(MapTask\.java:197)at org\.apache\.hadoop\.mapred\.MapTask\.runOldMapper(MapTask\.java:418)at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:372)at org\.apache\.hadoop\.mapred\.Child$4\.run(Child\.java:255)at java\.security\.AccessController\.doPrivileged(Native Method)at 
javax\.security\.auth\.Subject\.doAs(Subject\.java:396)at org\.apache\.hadoop\.security\.UserGroupInformation\.doAs(UserGroupInformation\.java:1121)at org\.apache\.hadoop\.mapred\.Child\.main(Child\.java:249)" .

The failing class, com.google.protobuf.Message, lives in protobuf-java-2.4.0a.jar, which was missing from the task classpath; registering that jar (along with the other HBase-related jars) in hive-site.xml fixes it:

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/local/hive/lib/hive-hbase-handler-0.9.0.jar,file:///usr/local/hive/lib/hbase-0.92.1.jar,file:///usr/local/hive/lib/zookeeper-3.4.3.jar,file:///usr/local/hive/lib/protobuf-java-2.4.0a.jar</value>
</property>
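
Equivalently, the same jars can be supplied when launching the CLI, if your Hive build supports the --auxpath option. A sketch, with the jar list mirroring the property above:

hive --auxpath /usr/local/hive/lib/hive-hbase-handler-0.9.0.jar,/usr/local/hive/lib/hbase-0.92.1.jar,/usr/local/hive/lib/zookeeper-3.4.3.jar,/usr/local/hive/lib/protobuf-java-2.4.0a.jar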

With that resolved, the failure pointed right back at the problem originally seen under hive-0.8.1: the identical FetchOperator NullPointerException stack trace shown at the start of this section.




The error makes it clear that code changes in the newer HBase cause an NPE when Hive calls the HBase API; evidently HBase 0.94 is not yet fully supported by Hive. So, it seemed, the only option was to downgrade HBase to 0.92, the version hive-0.9.0 natively supports. Not a happy outcome, but reality is often cruel.

The HBase downgrade hit one fatal problem: all of the previous HBase data was lost, and no regionserver was carrying any regions. It looks like HBase's meta was corrupted, though it may also have been my own tinkering. With the project under time pressure, and re-importing the data not particularly painful, I simply reloaded it, keeping a backup of HBase's meta so I can analyze the root cause properly when there is time.

 

Since I collect data with Flume, an error then appeared wherever Flume connects to Hadoop:

java.io.IOException: failure to login
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:490)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
        at org.apache.hadoop.hbase.security.User.call(User.java:586)
        at org.apache.hadoop.hbase.security.User.callStatic(User.java:576)
        at org.apache.hadoop.hbase.security.User.access$400(User.java:50)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.<init>(User.java:393)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.<init>(User.java:388)
        at org.apache.hadoop.hbase.security.User.getCurrent(User.java:139)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:280)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:332)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1278)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1235)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1222)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:918)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:814)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:788)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1024)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
        at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
        at org.mostar.flume.hbase.MetaHBaseSink.open(MetaHBaseSink.java:138)
        at com.cloudera.flume.core.EventSinkDecorator.open(EventSinkDecorator.java:75)
        at com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:88)
Caused by: javax.security.auth.login.LoginException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.KerberosName
        at org.apache.hadoop.security.User.<init>(User.java:44)
        at org.apache.hadoop.security.User.<init>(User.java:39)
        at org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:130)
        at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at javax.security.auth.login.LoginContext.invoke(LoginContext.java:769)
        at javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)
        at javax.security.auth.login.LoginContext$5.run(LoginContext.java:706)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:703)
        at javax.security.auth.login.LoginContext.login(LoginContext.java:576)
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:471)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
        at org.apache.hadoop.hbase.security.User.call(User.java:586)
        at org.apache.hadoop.hbase.security.User.callStatic(User.java:576)
        at org.apache.hadoop.hbase.security.User.access$400(User.java:50)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.<init>(User.java:393)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.<init>(User.java:388)
        at org.apache.hadoop.hbase.security.User.getCurrent(User.java:139)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:280)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:332)
        at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1278)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1235)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1222)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:918)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:814)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.relocateRegion(HConnectionManager.java:788)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1024)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
        at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
        at org.mostar.flume.hbase.MetaHBaseSink.open(MetaHBaseSink.java:138)
        at com.cloudera.flume.core.EventSinkDecorator.open(EventSinkDecorator.java:75)
        at com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:88)
        at javax.security.auth.login.LoginContext.invoke(LoginContext.java:872)
        at javax.security.auth.login.LoginContext.access$000(LoginContext.java:186)
        at javax.security.auth.login.LoginContext$5.run(LoginContext.java:706)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:703)
        at javax.security.auth.login.LoginContext.login(LoginContext.java:576)
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:471)
        ... 29 more


 

The key line is "Could not initialize class org.apache.hadoop.security.KerberosName": the class itself is present in hadoop-core-1.0.3.jar, but its static initialization fails because Hadoop's security code now depends on commons-configuration, which was missing from the application's classpath. Copying commons-configuration-1.6.jar into the application's lib directory fixes it.
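
A sketch of the diagnosis and fix; the Flume application's lib directory is an assumption, so substitute your own path:

# KerberosName itself is inside hadoop-core, so the class is not what is missing
jar tf /usr/local/hadoop/hadoop-core-1.0.3.jar | grep KerberosName

# make the missing dependency visible to the Flume application
cp /usr/local/hadoop/lib/commons-configuration-1.6.jar /usr/local/flume/lib/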

And with that, the cluster upgrade was finally complete: from hadoop-0.20.3, hbase-0.90.4, zookeeper-3.3.4, and hive-0.8.1 to hadoop-1.0.3, hbase-0.92.1, zookeeper-3.4.3, and hive-0.9.0. Here's hoping HBase performs a bit better this time.