Hadoop Error Collection (continuously updated)

Source: Internet | Editor: 程序博客网 | Date: 2024/05/16 02:56

Here are the errors I ran into while learning Hadoop, posted so that others who hit them later can resolve them quickly.


java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:327)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
        at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
        at java.lang.Thread.run(Thread.java:745)



Cause:

The firewall is still running on the target host; disabling it (or opening the required ports) resolves the error.
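On CentOS-era systems (the era these logs come from) the firewall can be stopped with the standard service commands below; which service name applies (iptables vs. firewalld) depends on the distribution, so treat this as a sketch, not an exact recipe:

```shell
# CentOS 6: stop iptables now and keep it off across reboots
service iptables stop
chkconfig iptables off

# CentOS 7 and later: the service is firewalld instead
systemctl stop firewalld
systemctl disable firewalld
```

On a production cluster, opening only the ports the daemons need is safer than disabling the firewall outright.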


=============================================================

15/05/02 21:37:02 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
15/05/02 21:37:03 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/05/02 21:37:04 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
...
15/05/02 21:37:12 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
15/05/02 21:37:13 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server

...and this retry cycle repeats many more times.

Solution:

This usually means one of the two NameNodes has gone down. Reformat the NameNode and restart it (note that formatting erases the existing HDFS metadata, so only do this when the data can be discarded):

hadoop namenode -format
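Another common cause of endless retries against 0.0.0.0:10020 is simply that the MapReduce JobHistory server is not running: 10020 is its default RPC port, and 0.0.0.0 means no explicit address was configured. A hedged sketch of that fix, assuming a Hadoop 2.x layout and a placeholder host name itcast01:

```xml
<!-- mapred-site.xml: point clients at the history server explicitly
     (itcast01 is a placeholder; use your actual host name) -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>itcast01:10020</value>
</property>
```

Then start the daemon with the script that ships in Hadoop 2.x's sbin directory: mr-jobhistory-daemon.sh start historyserver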

==============================================================

Exception in thread "main" java.lang.RuntimeException: java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:164)
        at com.hadoop.db.NumMapper.main(NumMapper.java:52)
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2385)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2392)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(FileOutputFormat.java:160)
        ... 1 more


Solution:

The hadoop-hdfs dependency is missing; add it to pom.xml:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.2.0</version>
</dependency>

and save.
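"No FileSystem for scheme: hdfs" can also appear even with hadoop-hdfs on the classpath, when the job is packaged as a shaded ("fat") jar: hadoop-common and hadoop-hdfs each ship a META-INF/services/org.apache.hadoop.fs.FileSystem file, and the shade plugin keeps only one of them by default. If you build with maven-shade-plugin, merging those service files is the usual fix (a sketch, assuming the plugin is already in your build):

```xml
<!-- inside maven-shade-plugin's <configuration> element -->
<transformers>
  <!-- merges the META-INF/services entries (including
       org.apache.hadoop.fs.FileSystem) from all dependencies -->
  <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
</transformers>
```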

==================================================================

Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at com.hadoop.db.NumMapper.main(NumMapper.java:54)
Solution:

Likewise, add the following to pom.xml:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-common</artifactId>
  <version>2.2.0</version>
</dependency>

and save.
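"Cannot initialize Cluster" can also occur when mapreduce.framework.name is unset or misspelled, because the client then cannot decide which job runner to use. If jobs should run on YARN, the usual mapred-site.xml entry is:

```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```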

==================================================================

15/06/18 10:59:12 INFO mapreduce.Job: Running job: job_1434596257004_0002
15/06/18 10:59:15 INFO mapreduce.Job: Job job_1434596257004_0002 running in uber mode : false
15/06/18 10:59:15 INFO mapreduce.Job:  map 0% reduce 0%
15/06/18 10:59:15 INFO mapreduce.Job: Job job_1434596257004_0002 failed with state FAILED due to: Application application_1434596257004_0002 failed 2 times due to AM Container for appattempt_1434596257004_0002_000002 exited with exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
        at org.apache.hadoop.util.Shell.run(Shell.java:379)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
.Failing this attempt.. Failing the application.
15/06/18 10:59:16 INFO mapreduce.Job: Counters: 0


Solution:

Add the property mapreduce.application.classpath to mapred-site.xml and yarn.application.classpath to yarn-site.xml, both with the following value:

/itcast/hadoop-2.2.0/etc/hadoop,
/itcast/hadoop-2.2.0/share/hadoop/common/*,
/itcast/hadoop-2.2.0/share/hadoop/common/lib/*,
/itcast/hadoop-2.2.0/share/hadoop/hdfs/*,
/itcast/hadoop-2.2.0/share/hadoop/hdfs/lib/*,
/itcast/hadoop-2.2.0/share/hadoop/mapreduce/*,
/itcast/hadoop-2.2.0/share/hadoop/mapreduce/lib/*,
/itcast/hadoop-2.2.0/share/hadoop/yarn/*,
/itcast/hadoop-2.2.0/share/hadoop/yarn/lib/*

Note that these must be absolute paths. I had been writing $HADOOP_HOME here, which is why the problem went unresolved for so long.
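Written out as a full property, the mapred-site.xml entry looks like this (yarn.application.classpath in yarn-site.xml takes the same value):

```xml
<property>
  <name>mapreduce.application.classpath</name>
  <value>
    /itcast/hadoop-2.2.0/etc/hadoop,
    /itcast/hadoop-2.2.0/share/hadoop/common/*,
    /itcast/hadoop-2.2.0/share/hadoop/common/lib/*,
    /itcast/hadoop-2.2.0/share/hadoop/hdfs/*,
    /itcast/hadoop-2.2.0/share/hadoop/hdfs/lib/*,
    /itcast/hadoop-2.2.0/share/hadoop/mapreduce/*,
    /itcast/hadoop-2.2.0/share/hadoop/mapreduce/lib/*,
    /itcast/hadoop-2.2.0/share/hadoop/yarn/*,
    /itcast/hadoop-2.2.0/share/hadoop/yarn/lib/*
  </value>
</property>
```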


===========================================================================

Error: java.lang.RuntimeException: java.lang.NoSuchMethodException: cn.itcast.hadoop.mr.SecSort$sortMap.<init>()
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:721)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
Caused by: java.lang.NoSuchMethodException: cn.itcast.hadoop.mr.SecSort$sortMap.<init>()
        at java.lang.Class.getConstructor0(Class.java:2892)
        at java.lang.Class.getDeclaredConstructor(Class.java:2058)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:125)
        ... 7 more

An MR job reporting the error above means the Mapper class must be declared static.
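The NoSuchMethodException: ...&lt;init&gt;() means Hadoop's ReflectionUtils could not find a no-argument constructor. A non-static inner class never has one, because the compiler adds a hidden reference to the enclosing instance as a constructor parameter. A minimal plain-Java sketch (no Hadoop required, names are illustrative) showing the difference:

```java
// Demonstrates why a non-static inner Mapper breaks reflective instantiation.
public class Outer {
    class Inner {}          // non-static: the real constructor is Inner(Outer)
    static class Nested {}  // static: has a true no-arg constructor

    public static void main(String[] args) throws Exception {
        // Works: a static nested class can be created reflectively,
        // which is exactly what Hadoop's ReflectionUtils.newInstance does.
        Nested n = Nested.class.getDeclaredConstructor().newInstance();
        System.out.println("static nested class: instantiated");

        try {
            Inner.class.getDeclaredConstructor(); // looking for Inner() with no args
        } catch (NoSuchMethodException e) {
            // Same exception the MR job reported for SecSort$sortMap
            System.out.println("inner class: NoSuchMethodException");
        }
    }
}
```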

=======================================================================================

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration


The HBase jars the job depends on are not on Hadoop's classpath. Copy the required HBase jars into $HADOOP_HOME/share/hadoop/mapreduce and the error disappears.
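Rather than copying jars, an alternative the HBase reference guide suggests is to put HBase's own classpath onto Hadoop's before submitting the job (a sketch, assuming hbase is on the PATH; jar and class names are placeholders):

```shell
# Let `hbase classpath` report every jar HBase needs, and hand the
# whole list to Hadoop via HADOOP_CLASSPATH for this job submission.
export HADOOP_CLASSPATH=$(hbase classpath)
hadoop jar my-hbase-job.jar com.example.MyJob
```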


=======================================================================================

, TaskAttempt 3 failed, info=[Error: Failure while running task:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.io.EOFException: Read past end of RLE integer from compressed stream Stream for column 2 kind DATA position: 410 length: 410 range: 0 offset: 410 limit: 410 range 0 = 0 to 410 uncompressed: 1010 to 1010
        at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:186)
        at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
        at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:337)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
        at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
        at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.io.EOFException: Read past end of RLE integer from compressed stream Stream for column 2 kind DATA position: 410 length: 410 range: 0 offset: 410 limit: 410 range 0 = 0 to 410 uncompressed: 1010 to 1010
        at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:71)
        at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:294)
        at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:163)


Hive reports the error above when the Hive version is too old; it is a bug in the ORC file format. Upgrade Hive, or switch the table to another storage format such as RCFile.
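If upgrading Hive is not an option, the affected data can be rewritten into another storage format with a create-table-as-select; a hedged HiveQL sketch using placeholder table names:

```sql
-- broken_orc_table / fixed_table are placeholder names; CTAS rewrites
-- the rows into a fresh RCFile-backed table, sidestepping the ORC reader bug.
CREATE TABLE fixed_table STORED AS RCFILE AS
SELECT * FROM broken_orc_table;
```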
