Spark throws org.apache.hadoop.fs.ChecksumException when loading an MLlib model deployed inside a Tomcat container


1. Problem

The project is built with IntelliJ IDEA, and a model trained with Spark MLlib is placed under resources. The saved model consists of two parts, data and metadata; a checksum exception is thrown when the program loads the metadata part. The Java code that loads the model is as follows:
SparkConf conf = new SparkConf()
        .setMaster("local")
        .setAppName("modelPredict")
        .set("spark.sql.warehouse.dir", System.getProperty("riskArsenalWeb.root") + "/spark-warehouse/");
SparkContext sc = new SparkContext(conf);
// Resolve the model directory from the classpath (it was packaged under resources)
String path = GetPredictResultServiceImpl.class.getClassLoader().getResource("/model/rfMllibModel").toString();
RandomForestModel rfModel = RandomForestModel.load(sc, path);
The exception is as follows:
org.apache.hadoop.fs.ChecksumException: Checksum file not a length multiple of checksum size in file:/D:/Git/risk-arsenal/risk-arsenal-web/target/risk-arsenal-web/WEB-INF/classes/model/rfMllibModel/metadata/part-00000 at 0 checksumpos: 8 sumLenread: 11
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:233) ~[hadoop-common-2.2.0.jar:na]
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:275) [hadoop-common-2.2.0.jar:na]
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:227) [hadoop-common-2.2.0.jar:na]
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:195) [hadoop-common-2.2.0.jar:na]
    at java.io.DataInputStream.read(DataInputStream.java:100) [na:1.7.0_79]
    at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:211) [hadoop-common-2.2.0.jar:na]
    at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174) [hadoop-common-2.2.0.jar:na]
    at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:206) [hadoop-mapreduce-client-core-2.2.0.jar:na]
    at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:45) [hadoop-mapreduce-client-core-2.2.0.jar:na]
    at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:255) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:209) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39) [spark-core_2.11-2.0.0.jar:2.0.0]
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) [scala-library-2.11.8.jar:na]
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389) [scala-library-2.11.8.jar:na]
    at scala.collection.Iterator$class.foreach(Iterator.scala:893) [scala-library-2.11.8.jar:na]
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) [scala-library-2.11.8.jar:na]
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) [scala-library-2.11.8.jar:na]
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) [scala-library-2.11.8.jar:na]
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) [scala-library-2.11.8.jar:na]
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) [scala-library-2.11.8.jar:na]
    at scala.collection.AbstractIterator.to(Iterator.scala:1336) [scala-library-2.11.8.jar:na]
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) [scala-library-2.11.8.jar:na]
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336) [scala-library-2.11.8.jar:na]
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) [scala-library-2.11.8.jar:na]
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336) [scala-library-2.11.8.jar:na]
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1305) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1305) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.scheduler.Task.run(Task.scala:85) [spark-core_2.11-2.0.0.jar:2.0.0]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) [spark-core_2.11-2.0.0.jar:2.0.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_79]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_79]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
2016-09-13 15:08:31.475 [Executor task launch worker-0] ERROR org.apache.spark.executor.Executor[91] - Exception in task 0.0 in stage 0.0 (TID 0)
    (same org.apache.hadoop.fs.ChecksumException stack trace as above)
2016-09-13 15:08:31.577 [task-result-getter-0] WARN  org.apache.spark.scheduler.TaskSetManager[66] - Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.hadoop.fs.ChecksumException: Checksum file not a length multiple of checksum size in file:/D:/Git/risk-arsenal/risk-arsenal-web/target/risk-arsenal-web/WEB-INF/classes/model/rfMllibModel/metadata/part-00000 at 0 checksumpos: 8 sumLenread: 11
    (same stack trace as above)

2. Solution

The root cause of this exception was never identified. What I did find is that the model loads fine from other directories, but fails with this exception whenever it sits under the classes directory, which is rather baffling; I will keep investigating. The current workaround is to put the model under the webapp directory, i.e. the root directory of the web application.
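The article does not show the final loading code, so here is a minimal sketch of the workaround. It assumes the web application root is published as the system property riskArsenalWeb.root (e.g. via the WebAppRootListener configuration described below, which is how the SparkConf snippet above already reads it) and that the rfMllibModel directory has been copied under the webapp root into a model/ sub-directory; both names come from this project, so adjust them to your own layout:

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.mllib.tree.model.RandomForestModel;

public class ModelLoader {
    public static RandomForestModel loadFromWebappRoot() {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("modelPredict");
        SparkContext sc = new SparkContext(conf);
        // Build a file: URI that points into the webapp root instead of WEB-INF/classes
        String webRoot = System.getProperty("riskArsenalWeb.root"); // assumed to be registered by WebAppRootListener
        String path = "file:" + webRoot + "/model/rfMllibModel";
        return RandomForestModel.load(sc, path);
    }
}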
Spring provides several special-purpose servlet listeners in the org.springframework.web.util package; used correctly, they cover a number of specific needs. For example, some third-party tools can reference system properties (i.e. values readable via System.getProperty()) with the ${key} syntax. WebAppRootListener adds the web application's root directory to the system properties; the property name can be specified via a servlet context parameter named "webAppRootKey" and defaults to "webapp.root". The listener is configured in web.xml as follows:
<!-- ① The web application root directory is added to the system properties under this name -->
<context-param>
    <param-name>webAppRootKey</param-name>
    <param-value>webApp.root</param-value>
</context-param>

<!-- ② Adds the web application root directory to the system properties under the name given by webAppRootKey -->
<listener>
    <listener-class>
        org.springframework.web.util.WebAppRootListener
    </listener-class>
</listener>
With this in place, you can obtain the web application's root directory in code via System.getProperty("webApp.root"). A more common use, however, is to reference the web application's root directory from a third-party tool's configuration file via ${webApp.root}. For instance, the following log4j.properties file uses ${webApp.root} to set the log file location:
log4j.rootLogger=INFO,R
log4j.appender.R=org.apache.log4j.RollingFileAppender
# Log file location, resolved against the web application root
log4j.appender.R.File=${webApp.root}/WEB-INF/logs/log4j.log
log4j.appender.R.MaxFileSize=100KB
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d %5p [%t] (%F:%L) - %m%n
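For the in-code route mentioned above, a small hedged example, assuming the web.xml configuration shown earlier (so the property is registered under the key webApp.root); the model sub-directory is just this article's example:

// Read the webapp root registered by WebAppRootListener under the key "webApp.root"
String webRoot = System.getProperty("webApp.root");
// Resolve a file under the webapp root, e.g. the model directory used earlier in this article
java.io.File modelDir = new java.io.File(webRoot, "model/rfMllibModel");
System.out.println("Model directory: " + modelDir.getAbsolutePath());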


Over...

IBM developerWorks round-up of Spring utility classes:

A tour of Spring's excellent utility classes, Part 1: file resource handling and Web-related utility classes

