[Spark] A Compatibility Pitfall Encountered When Running Spark

Original article. Please credit http://blog.csdn.net/lsttoy/article/details/53331578 when reposting.

The bug below is suspected to be an error caused by a Scala version mismatch:

16/11/24 17:53:54 INFO HadoopRDD: Input split: file:/home/hadoop/input/lekkoTest.txt:0+125
16/11/24 17:53:54 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.AbstractMethodError: lekko.spark.SparkDemo$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
16/11/24 17:53:54 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.AbstractMethodError: lekko.spark.SparkDemo$1.call(Ljava/lang/Object;)Ljava/util/Iterator;
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
        ... (remaining frames identical to the stack trace above)

This error looks like it comes from the tooling rather than from the business-logic code, so I went to the official site to check version compatibility. I had installed the latest Scala release, 2.12.x, and then found the following note on the Spark website:

Spark runs on Java 7+, Python 2.6+/3.4+ and R 3.1+. For the Scala API, Spark 2.0.2 uses **Scala 2.11.** You will need to use a compatible Scala version (**2.11.x**).

Spark 2.0.2 therefore requires Scala 2.11.x, so the Scala version has to be changed to match. A quick way to confirm which Scala version the job actually sees is sketched below.
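
The version change itself happens in the environment and the build (e.g. pointing the project at a Scala 2.11.x installation and the matching Spark artifacts), not in application code; the snippet below is only a minimal, hypothetical check (the class name `ScalaVersionCheck` is made up) to verify the result. Run it on the same classpath as the Spark job and compare the printed version against the 2.11.x requirement quoted above.

```java
// Hypothetical helper: print the Scala runtime version visible on the classpath.
// Assumes the Scala library (pulled in by Spark) is on the classpath.
public class ScalaVersionCheck {
    public static void main(String[] args) {
        // Prints e.g. "version 2.11.8"; it should be on the 2.11.x line required by Spark 2.0.2.
        System.out.println("Scala on classpath: " + scala.util.Properties.versionString());
    }
}
```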

Second, the following line from the stack trace points to the code involved:

lekko.spark.SparkDemo$1.call(Ljava/lang/Object;)Ljava/util/Iterator;

So the problem may also lie in the code implementing this logic: the `AbstractMethodError` means the executor is calling a `call` method that returns a `java.util.Iterator`, and the compiled `SparkDemo$1` class does not provide a method with that signature. Besides aligning the versions, review the corresponding function and fix it; a sketch of what the corrected code might look like follows.
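
The original SparkDemo source is not shown in this post, so the following is only a guess at the shape of the fix, based on the `Ljava/util/Iterator;` signature in the error: in the Spark 2.x Java API, `FlatMapFunction.call` must return a `java.util.Iterator` (in Spark 1.x it returned an `Iterable`). The input path is taken from the log above; everything else (class name, word-splitting logic, local master) is assumed.

```java
import java.util.Arrays;
import java.util.Iterator;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;

public class SparkDemoFixed {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SparkDemoFixed").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Input path taken from the log line above.
        JavaRDD<String> lines = sc.textFile("file:/home/hadoop/input/lekkoTest.txt");

        // Spark 2.x: call() must return java.util.Iterator, not Iterable as in Spark 1.x.
        // Compiling against one API and running against the other is what produces
        // the AbstractMethodError seen above.
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String line) {
                return Arrays.asList(line.split(" ")).iterator();
            }
        });

        System.out.println("word count: " + words.count());
        sc.stop();
    }
}
```

If the build is managed by Maven or sbt, the Spark artifact's Scala suffix (e.g. `spark-core_2.11`) should also match the Scala 2.11.x line discussed above.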
