An Exception Encountered with MLlib KMeans


KMeans in MLlib simply refused to run for me. Since I had built the training and test data myself, I was absolutely convinced there was nothing wrong with it. Then I remembered that I had assembled the data with a healthy dose of copy-paste, and when I finally went back to verify it, the last record turned out to differ from all the others. I had never cleaned it up, and that produced the result below. Quite the slap in the face; the program, as ever, does not lie.

The root cause is that the rows of the data source did not all have the same dimension, which makes MLlib throw an IllegalArgumentException.
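
Since the failed require carries no message, a cheap defence is to validate the vector dimensions yourself before handing the RDD to KMeans.train. Below is a minimal sketch of a KMeansRun-style driver with that check (the input path, the whitespace-separated parsing, and the k / maxIterations values are my assumptions, not taken from the original program):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object KMeansRun {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("KMeansRun"))

    // Parse whitespace-separated numeric text into dense vectors.
    // The path is hypothetical; substitute your own data source.
    val data = sc.textFile("data/kmeans_data.txt")
      .map(line => Vectors.dense(line.trim.split("\\s+").map(_.toDouble)))
      .cache()

    // Fail fast with a readable message if any row has a different
    // dimension -- exactly the condition that trips the bare
    // "requirement failed" inside MLlib.
    val dims = data.map(_.size).distinct().collect()
    require(dims.length == 1,
      s"Inconsistent vector dimensions in input: ${dims.mkString(", ")}")

    val model = KMeans.train(data, 3, 20) // k = 3, maxIterations = 20
    model.clusterCenters.foreach(println)
    sc.stop()
  }
}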

Here is the answer I found on Stack Overflow:
http://stackoverflow.com/questions/30737361/getting-java-lang-illegalargumentexception-requirement-failed-while-calling-spa

Here is what the exception looks like:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 2.0 failed 4 times, most recent failure: Lost task 1.3 in stage 2.0 (TID 8, s12180): java.lang.IllegalArgumentException: requirement failed
    at scala.Predef$.require(Predef.scala:221)
    at org.apache.spark.mllib.util.MLUtils$.fastSquaredDistance(MLUtils.scala:330)
    at org.apache.spark.mllib.clustering.KMeans$.fastSquaredDistance(KMeans.scala:595)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:569)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:563)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.mllib.clustering.KMeans$.findClosest(KMeans.scala:563)
    at org.apache.spark.mllib.clustering.KMeans$.pointCost(KMeans.scala:586)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1$$anonfun$apply$3.apply$mcDI$sp(KMeans.scala:400)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1$$anonfun$apply$3.apply(KMeans.scala:399)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1$$anonfun$apply$3.apply(KMeans.scala:399)
    at scala.Array$.tabulate(Array.scala:331)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:399)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:398)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:285)
    at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1843)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1963)
    at org.apache.spark.rdd.RDD$$anonfun$aggregate$1.apply(RDD.scala:1114)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.aggregate(RDD.scala:1107)
    at org.apache.spark.mllib.clustering.KMeans.initKMeansParallel(KMeans.scala:404)
    at org.apache.spark.mllib.clustering.KMeans.runAlgorithm(KMeans.scala:249)
    at org.apache.spark.mllib.clustering.KMeans.run(KMeans.scala:213)
    at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:528)
    at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:551)
    at KMeansRun$.main(KMeansRun.scala:24)
    at KMeansRun.main(KMeansRun.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException: requirement failed
    at scala.Predef$.require(Predef.scala:221)
    at org.apache.spark.mllib.util.MLUtils$.fastSquaredDistance(MLUtils.scala:330)
    at org.apache.spark.mllib.clustering.KMeans$.fastSquaredDistance(KMeans.scala:595)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:569)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:563)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.mllib.clustering.KMeans$.findClosest(KMeans.scala:563)
    at org.apache.spark.mllib.clustering.KMeans$.pointCost(KMeans.scala:586)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1$$anonfun$apply$3.apply$mcDI$sp(KMeans.scala:400)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1$$anonfun$apply$3.apply(KMeans.scala:399)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1$$anonfun$apply$3.apply(KMeans.scala:399)
    at scala.Array$.tabulate(Array.scala:331)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:399)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:398)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:285)
    at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
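
For context, the `requirement failed` at MLUtils.scala:330 in the trace is the dimension check at the top of MLUtils.fastSquaredDistance, which in the Spark 1.x source looks roughly like this (paraphrased from memory; the exact code may differ between versions):

// Inside org.apache.spark.mllib.util.MLUtils.fastSquaredDistance,
// comparing the two vectors v1 and v2 (paraphrased, Spark 1.x).
val n = v1.size
require(v2.size == n) // a bare require, hence the message-less exception

Every distance computation between a point and a candidate center passes through this check, so a single row with the wrong dimension is enough to fail the whole stage.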