Spark Learning 52: Spark's org.apache.spark.SparkException: Task not serializable


The error typically reported is org.apache.spark.SparkException: Task not serializable:

17/12/06 14:20:10 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 28.4 KB, free 872.6 MB)
17/12/06 14:20:10 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.161:51006 (size: 28.4 KB, free: 873.0 MB)
17/12/06 14:20:10 INFO SparkContext: Created broadcast 0 from newAPIHadoopRDD at SparkOnHbaseSecond.java:92
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2101)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:370)
    at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:369)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.map(RDD.scala:369)
    at org.apache.spark.api.java.JavaRDDLike$class.map(JavaRDDLike.scala:93)
    at org.apache.spark.api.java.AbstractJavaRDDLike.map(JavaRDDLike.scala:45)
    at sparlsql.hbase.www.second.SparkOnHbaseSecond.main(SparkOnHbaseSecond.java:94)
Caused by: java.io.NotSerializableException: sparlsql.hbase.www.second.SparkOnHbaseSecond
Serialization stack:
    - object not serializable (class: sparlsql.hbase.www.second.SparkOnHbaseSecond, value: sparlsql.hbase.www.second.SparkOnHbaseSecond@602ae7b6)
    - field (class: sparlsql.hbase.www.second.SparkOnHbaseSecond$1, name: val$sparkOnHbase, type: class sparlsql.hbase.www.second.SparkOnHbaseSecond)
    - object (class sparlsql.hbase.www.second.SparkOnHbaseSecond$1, sparlsql.hbase.www.second.SparkOnHbaseSecond$1@37af1f93)
    - field (class: org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, name: fun$1, type: interface org.apache.spark.api.java.function.Function)
    - object (class org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
    ... 12 more
17/12/06 14:20:10 INFO SparkContext: Invoking stop() from shutdown hook

The first cause: the enclosing class is not serializable, yet the closure passed to map() captures a reference to it. In the following example, the anonymous Function references a field declared outside main(), so Spark must serialize the whole SparkOnHbaseBack instance when shipping the task, and fails:

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class SparkOnHbaseBack {

    // A field of the enclosing class, which does NOT implement Serializable.
    private String aa = "1234";

    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("lcc_java_read_hbase_register_to_table")
                .master("local[4]")
                .getOrCreate();

        // An instance of the non-serializable class, matching the
        // val$sparkOnHbase field shown in the serialization stack above.
        final SparkOnHbaseBack sparkOnHbase = new SparkOnHbaseBack();

        // myRDD stands for a JavaPairRDD<ImmutableBytesWritable, Result>
        // obtained elsewhere, e.g. from newAPIHadoopRDD (not shown).
        JavaRDD<Row> personsRDD = myRDD.map(new Function<Tuple2<ImmutableBytesWritable, Result>, Row>() {
            @Override
            public Row call(Tuple2<ImmutableBytesWritable, Result> tuple) throws Exception {
                // This line uses a field declared outside main(), so the closure
                // captures the non-serializable SparkOnHbaseBack instance and
                // Spark throws "Task not serializable".
                System.out.println("====tuple==========" + sparkOnHbase.aa);
                // Fix: move the field into this map method, or make the class
                // serializable, e.g.
                //   public class SparkOnHbaseSecond implements java.io.Serializable
                return null;
            }
        });
    }
}
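For comparison, here is a minimal sketch of the fixed version under the same assumptions: the class name SparkOnHbaseFixed, the Row construction via RowFactory, and myRDD (a JavaPairRDD<ImmutableBytesWritable, Result>, e.g. from newAPIHadoopRDD) are illustrative, not from the original post. The class implements java.io.Serializable, and the value is declared inside the map function, so the closure no longer drags the enclosing object into the serialized task:

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

// Fix 1: implement Serializable so captured references can be serialized.
public class SparkOnHbaseFixed implements java.io.Serializable {

    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("lcc_java_read_hbase_register_to_table")
                .master("local[4]")
                .getOrCreate();

        // As above, myRDD stands for a JavaPairRDD<ImmutableBytesWritable, Result>
        // obtained elsewhere, e.g. from newAPIHadoopRDD (not shown).
        JavaRDD<Row> personsRDD = myRDD.map(new Function<Tuple2<ImmutableBytesWritable, Result>, Row>() {
            @Override
            public Row call(Tuple2<ImmutableBytesWritable, Result> tuple) throws Exception {
                // Fix 2: declare the value inside the map function instead of
                // capturing a field of the enclosing class.
                String aa = "1234";
                System.out.println("====tuple==========" + aa);
                // Illustrative Row construction from the HBase row key.
                return RowFactory.create(Bytes.toString(tuple._2().getRow()));
            }
        });
    }
}

Of the two fixes, moving the state into the closure is usually preferable: implementing Serializable makes the error go away, but it also means the entire enclosing object gets serialized and shipped with every task.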