Fixing the "Input path does not exist" error when reading README.md in spark-shell



```
scala> val textFile = sc.textFile("/usr/local/spark/README.md")
textFile: org.apache.spark.rdd.RDD[String] = /usr/local/spark/README.md MapPartitionsRDD[3] at textFile at <console>:24

scala> textFile.count()
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://master-hadoop-wintime:9000/usr/local/spark/README.md
  at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
  at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
  at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
  at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1911)
  at org.apache.spark.rdd.RDD.count(RDD.scala:1115)
  ... 48 elided
```

As shown above, README.md is a local file that was never uploaded to HDFS. The first step, creating the RDD with `textFile`, appears to succeed because Spark evaluates RDDs lazily; the error only surfaces in the second step, when the `count()` action actually tries to read the input. Since the path has no URI scheme, Spark resolves it against the cluster's default filesystem, `hdfs://`, and fails because the file does not exist there.
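The `hdfs://master-hadoop-wintime:9000` prefix in the error message comes from Hadoop's default filesystem setting, `fs.defaultFS` in `core-site.xml`. A typical entry looks like the sketch below; the host name and port here simply mirror the error message above, and your cluster's values may differ:

```xml
<!-- core-site.xml: the default filesystem that schemeless paths resolve against -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-hadoop-wintime:9000</value>
</property>
```

Any path passed to `sc.textFile` without an explicit scheme is resolved against this URI.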

Solution: prefix the path with `file:///` so that Spark reads from the local filesystem instead of HDFS.

```
scala> val textFile = sc.textFile("file:///usr/local/spark/README.md")
textFile: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/README.md MapPartitionsRDD[5] at textFile at <console>:24

scala> textFile.count()
res2: Long = 99
```
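If you would rather keep using the default `hdfs://` filesystem, the alternative fix is to upload the file into HDFS first and read it from there. A sketch of the commands, assuming a hypothetical target directory `/user/hadoop` (adjust to your own layout):

```
# Upload the local README.md into HDFS (target directory is an example)
hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -put /usr/local/spark/README.md /user/hadoop/
```

After the upload, `sc.textFile("/user/hadoop/README.md")` resolves against the default `hdfs://` filesystem and `count()` succeeds without any scheme prefix.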

