An exception when running a Spark program from IDEA on Windows

Source: Internet · Editor: 程序博客网 · Time: 2024/05/29 15:36

The Spark program is as follows:


/**
  * Created by Philon Yun on 2017/5/15.
  */
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.sql.SQLContext

object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello World")
    val conf = new SparkConf().setMaster("spark://192.168.11.221:7077").setAppName("MyFitst")
    val sc = new SparkContext(conf)
    sc.addJar("D:\\scala-idea-workspace\\cars\\out\\artifacts\\cars_jar\\cars.jar")
    val sqlContext = new SQLContext(sc)
    val rdd = sc.textFile("hdfs://master:9000/person.txt").map(_.split(","))
    val personRDD = rdd.map(x => Person(x(0).toLong, x(1), x(2).toInt))
    import sqlContext.implicits._
    val personDF = personRDD.toDF
    personDF.registerTempTable("t_person")
    val df = sqlContext.sql("select * from t_person order by age desc ")
    df.write.json("hdfs://master:9000/out2")
    sc.stop()
  }
}

case class Person(id: Long, name: String, age: Int)


When run, it never produced a result; the console output was as follows:


"C:\Program Files\Java\jdk1.8.0_131\bin\java" "-javaagent:C:\NoWinProgram\IntelliJ IDEA Community Edition 2017.1\lib\idea_rt.jar=7435:C:\NoWinProgram\IntelliJ IDEA Community Edition 2017.1\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_131\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\resources.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\rt.jar;D:\scala-idea-workspace\cars\out\production\cars;C:\Program Files (x86)\scala\lib\scala-actors-migration.jar;C:\Program Files (x86)\scala\lib\scala-actors.jar;C:\Program Files (x86)\scala\lib\scala-library.jar;C:\Program Files (x86)\scala\lib\scala-reflect.jar;C:\Program Files (x86)\scala\lib\scala-swing.jar;C:\Program Files (x86)\scala\src\scala-actors-src.jar;C:\Program Files (x86)\scala\src\scala-library-src.jar;C:\Program Files 
(x86)\scala\src\scala-reflect-src.jar;C:\Program Files (x86)\scala\src\scala-swing-src.jar;F:\DownLoad\mysql-connector-java-5.1.42\mysql-connector-java-5.1.42\mysql-connector-java-5.1.42-bin.jar;F:\DownLoad\spark-assembly-1.6.1-hadoop2.6.0.jar" HelloWorld
Hello World
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/05/19 14:32:48 INFO SparkContext: Running Spark version 1.6.1
17/05/19 14:32:48 INFO SecurityManager: Changing view acls to: Philon Yun,hadoop
17/05/19 14:32:48 INFO SecurityManager: Changing modify acls to: Philon Yun,hadoop
17/05/19 14:32:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Philon Yun, hadoop); users with modify permissions: Set(Philon Yun, hadoop)
17/05/19 14:32:48 INFO Utils: Successfully started service 'sparkDriver' on port 7472.
17/05/19 14:32:49 INFO Slf4jLogger: Slf4jLogger started
17/05/19 14:32:49 INFO Remoting: Starting remoting
17/05/19 14:32:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@169.254.4.104:7485]
17/05/19 14:32:49 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 7485.
17/05/19 14:32:49 INFO SparkEnv: Registering MapOutputTracker
17/05/19 14:32:49 INFO SparkEnv: Registering BlockManagerMaster
17/05/19 14:32:49 INFO DiskBlockManager: Created local directory at C:\Users\Philon Yun\AppData\Local\Temp\blockmgr-60dcf5d0-c40b-443b-892d-7f7a5d2e4183
17/05/19 14:32:49 INFO MemoryStore: MemoryStore started with capacity 2.4 GB
17/05/19 14:32:49 INFO SparkEnv: Registering OutputCommitCoordinator
17/05/19 14:32:49 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/05/19 14:32:49 INFO SparkUI: Started SparkUI at http://169.254.4.104:4040
17/05/19 14:32:49 INFO AppClient$ClientEndpoint: Connecting to master spark://192.168.11.221:7077...
17/05/19 14:32:49 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20170519143251-0005
17/05/19 14:32:49 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/0 on worker-20170519135622-192.168.11.222-39133 (192.168.11.222:39133) with 1 cores
17/05/19 14:32:49 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/0 on hostPort 192.168.11.222:39133 with 1 cores, 1024.0 MB RAM
17/05/19 14:32:49 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/1 on worker-20170519135622-192.168.11.223-37116 (192.168.11.223:37116) with 1 cores
17/05/19 14:32:49 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/1 on hostPort 192.168.11.223:37116 with 1 cores, 1024.0 MB RAM
17/05/19 14:32:49 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/0 is now RUNNING
17/05/19 14:32:49 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/1 is now RUNNING
17/05/19 14:32:49 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7523.
17/05/19 14:32:49 INFO NettyBlockTransferService: Server created on 7523
17/05/19 14:32:49 INFO BlockManagerMaster: Trying to register BlockManager
17/05/19 14:32:49 INFO BlockManagerMasterEndpoint: Registering block manager 169.254.4.104:7523 with 2.4 GB RAM, BlockManagerId(driver, 169.254.4.104, 7523)
17/05/19 14:32:49 INFO BlockManagerMaster: Registered BlockManager
17/05/19 14:32:49 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/05/19 14:32:49 INFO HttpFileServer: HTTP File server directory is C:\Users\Philon Yun\AppData\Local\Temp\spark-78170f53-6988-49c6-b17c-42e6544272d0\httpd-d1f185b3-75b9-4b12-becf-fc079368e8ac
17/05/19 14:32:49 INFO HttpServer: Starting HTTP Server
17/05/19 14:32:49 INFO Utils: Successfully started service 'HTTP file server' on port 7524.
17/05/19 14:32:50 INFO SparkContext: Added JAR D:\scala-idea-workspace\cars\out\artifacts\cars_jar\cars.jar at http://169.254.4.104:7524/jars/cars.jar with timestamp 1495175570735
17/05/19 14:32:51 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.4 KB, free 127.4 KB)
17/05/19 14:32:51 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 141.3 KB)
17/05/19 14:32:51 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 169.254.4.104:7523 (size: 13.9 KB, free: 2.4 GB)
17/05/19 14:32:51 INFO SparkContext: Created broadcast 0 from textFile at HelloWorld.scala:18
17/05/19 14:32:52 INFO FileInputFormat: Total input paths to process : 1
17/05/19 14:32:52 INFO SparkContext: Starting job: json at HelloWorld.scala:25
17/05/19 14:32:52 INFO DAGScheduler: Got job 0 (json at HelloWorld.scala:25) with 2 output partitions
17/05/19 14:32:52 INFO DAGScheduler: Final stage: ResultStage 0 (json at HelloWorld.scala:25)
17/05/19 14:32:52 INFO DAGScheduler: Parents of final stage: List()
17/05/19 14:32:52 INFO DAGScheduler: Missing parents: List()
17/05/19 14:32:53 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[9] at json at HelloWorld.scala:25), which has no missing parents
17/05/19 14:32:53 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.6 KB, free 148.9 KB)
17/05/19 14:32:53 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.9 KB, free 152.8 KB)
17/05/19 14:32:53 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 169.254.4.104:7523 (size: 3.9 KB, free: 2.4 GB)
17/05/19 14:32:53 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/05/19 14:32:53 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[9] at json at HelloWorld.scala:25)
17/05/19 14:32:53 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
17/05/19 14:32:55 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/1 is now EXITED (Command exited with code 1)
17/05/19 14:32:55 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/1 removed: Command exited with code 1
17/05/19 14:32:55 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 1
17/05/19 14:32:55 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/2 on worker-20170519135622-192.168.11.223-37116 (192.168.11.223:37116) with 1 cores
17/05/19 14:32:55 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/2 on hostPort 192.168.11.223:37116 with 1 cores, 1024.0 MB RAM
17/05/19 14:32:55 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/2 is now RUNNING
17/05/19 14:32:55 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/0 is now EXITED (Command exited with code 1)
17/05/19 14:32:55 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/0 removed: Command exited with code 1
17/05/19 14:32:55 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 0
17/05/19 14:32:55 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/3 on worker-20170519135622-192.168.11.222-39133 (192.168.11.222:39133) with 1 cores
17/05/19 14:32:55 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/3 on hostPort 192.168.11.222:39133 with 1 cores, 1024.0 MB RAM
17/05/19 14:32:55 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/3 is now RUNNING
17/05/19 14:32:58 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/3 is now EXITED (Command exited with code 1)
17/05/19 14:32:58 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/3 removed: Command exited with code 1
17/05/19 14:32:58 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 3
17/05/19 14:32:58 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/4 on worker-20170519135622-192.168.11.222-39133 (192.168.11.222:39133) with 1 cores
17/05/19 14:32:58 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/4 on hostPort 192.168.11.222:39133 with 1 cores, 1024.0 MB RAM
17/05/19 14:32:58 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/4 is now RUNNING
17/05/19 14:32:58 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/2 is now EXITED (Command exited with code 1)
17/05/19 14:32:58 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/2 removed: Command exited with code 1
17/05/19 14:32:58 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 2
17/05/19 14:32:58 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/5 on worker-20170519135622-192.168.11.223-37116 (192.168.11.223:37116) with 1 cores
17/05/19 14:32:58 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/5 on hostPort 192.168.11.223:37116 with 1 cores, 1024.0 MB RAM
17/05/19 14:32:58 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/5 is now RUNNING
17/05/19 14:33:03 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/5 is now EXITED (Command exited with code 1)
17/05/19 14:33:03 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/5 removed: Command exited with code 1
17/05/19 14:33:03 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 5
17/05/19 14:33:03 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/6 on worker-20170519135622-192.168.11.223-37116 (192.168.11.223:37116) with 1 cores
17/05/19 14:33:03 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/6 on hostPort 192.168.11.223:37116 with 1 cores, 1024.0 MB RAM
17/05/19 14:33:03 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/6 is now RUNNING
17/05/19 14:33:06 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/4 is now EXITED (Command exited with code 1)
17/05/19 14:33:06 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/4 removed: Command exited with code 1
17/05/19 14:33:06 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 4
17/05/19 14:33:06 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/7 on worker-20170519135622-192.168.11.222-39133 (192.168.11.222:39133) with 1 cores
17/05/19 14:33:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/7 on hostPort 192.168.11.222:39133 with 1 cores, 1024.0 MB RAM
17/05/19 14:33:06 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/7 is now RUNNING
17/05/19 14:33:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/05/19 14:33:09 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/7 is now EXITED (Command exited with code 1)
17/05/19 14:33:09 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/7 removed: Command exited with code 1
17/05/19 14:33:09 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 7
17/05/19 14:33:09 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/8 on worker-20170519135622-192.168.11.222-39133 (192.168.11.222:39133) with 1 cores
17/05/19 14:33:09 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/8 on hostPort 192.168.11.222:39133 with 1 cores, 1024.0 MB RAM
17/05/19 14:33:09 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/8 is now RUNNING
17/05/19 14:33:12 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/6 is now EXITED (Command exited with code 1)
17/05/19 14:33:12 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/6 removed: Command exited with code 1
17/05/19 14:33:12 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 6
17/05/19 14:33:12 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/9 on worker-20170519135622-192.168.11.223-37116 (192.168.11.223:37116) with 1 cores
17/05/19 14:33:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/9 on hostPort 192.168.11.223:37116 with 1 cores, 1024.0 MB RAM
17/05/19 14:33:12 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/9 is now RUNNING
17/05/19 14:33:17 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/8 is now EXITED (Command exited with code 1)
17/05/19 14:33:17 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/8 removed: Command exited with code 1
17/05/19 14:33:17 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 8
17/05/19 14:33:17 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/10 on worker-20170519135622-192.168.11.222-39133 (192.168.11.222:39133) with 1 cores
17/05/19 14:33:17 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/10 on hostPort 192.168.11.222:39133 with 1 cores, 1024.0 MB RAM
17/05/19 14:33:17 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/10 is now RUNNING
17/05/19 14:33:20 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/9 is now EXITED (Command exited with code 1)
17/05/19 14:33:20 INFO SparkDeploySchedulerBackend: Executor app-20170519143251-0005/9 removed: Command exited with code 1
17/05/19 14:33:20 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 9
17/05/19 14:33:20 INFO AppClient$ClientEndpoint: Executor added: app-20170519143251-0005/11 on worker-20170519135622-192.168.11.223-37116 (192.168.11.223:37116) with 1 cores
17/05/19 14:33:20 INFO SparkDeploySchedulerBackend: Granted executor ID app-20170519143251-0005/11 on hostPort 192.168.11.223:37116 with 1 cores, 1024.0 MB RAM
17/05/19 14:33:20 INFO AppClient$ClientEndpoint: Executor updated: app-20170519143251-0005/11 is now RUNNING



This line in particular stands out:


WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources



I searched around online a lot; suggested causes included insufficient memory and inter-node communication problems. In fact, my cluster configuration was fine, and there was plenty of memory available.
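One clue is actually in the log above: the driver started its SparkUI, BlockManager, and file server on 169.254.4.104, which is a Windows APIPA (link-local) address. The cluster nodes cannot route traffic back to such an address, which would explain why every executor exits with code 1 while the master keeps relaunching them. As an illustration only (Python here, not part of the original program), this is how to recognize that kind of address:

```python
import ipaddress

def is_link_local(addr: str) -> bool:
    """True if addr falls in 169.254.0.0/16, the APIPA/link-local range.

    If the Spark driver binds its services to such an address, remote
    executors cannot connect back to it, and the job never receives
    any resources.
    """
    return ipaddress.ip_address(addr).is_link_local

# The driver address from the log above vs. a routable cluster address:
print(is_link_local("169.254.4.104"))   # the address the driver bound to
print(is_link_local("192.168.11.221"))  # the master's routable address
```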


Here is the cluster information after the job was submitted:

[cluster UI screenshot not reproduced]
In the end, the solution I found was to add an environment variable named SPARK_LOCAL_IP to the Windows environment variables.

Run the ipconfig command in cmd to find your own IP address, assign that address to the environment variable, then restart IDEA and resubmit the job. That resolves the problem.
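A system-wide environment variable is not the only option: the variable only needs to be visible to the driver JVM when it starts. As a sketch (Python, hypothetical wrapper; the address 192.168.11.100 is a placeholder for whatever ipconfig reports on the adapter that reaches the cluster), the same effect can be had per-process:

```python
import os

def spark_driver_env(local_ip: str) -> dict:
    """Build an environment for the driver JVM with SPARK_LOCAL_IP pinned.

    Spark binds its driver-side services (BlockManager, SparkUI, file
    server) to this address, so it must be one the cluster can route to.
    """
    env = dict(os.environ)
    env["SPARK_LOCAL_IP"] = local_ip
    return env

# Placeholder address: substitute the one ipconfig reports for the
# network adapter that actually reaches the Spark master.
env = spark_driver_env("192.168.11.100")
# e.g. subprocess.Popen(["java", "-cp", "<classpath>", "HelloWorld"], env=env)
print(env["SPARK_LOCAL_IP"])
```

The advantage over a global variable is that different projects on the same machine can pin different addresses without restarting IDEA.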
