Spark example: running a Spark program with spark-submit


This article records the process of creating a project in IntelliJ IDEA, developing the application, packaging it, and running it with spark-submit.

The steps are as follows:

1. Create a new project: choose File -> New Project -> Scala -> Non-SBT, click Next, and fill in the project information, e.g. Project name: chx3515, as shown below:


2. Add the Spark jar to the project: with the project chx3515 selected, choose File -> Project Structure, select Libraries, then click Add (+) -> Java, locate spark-assembly-1.0.0-hadoop2.2.0.jar in Select Library Files (the version number may differ), and click OK.


3. In Project Structure, select Modules, create two nested folders under src (main -> scala), and mark the scala folder as "Sources", as shown below:


4. Under the scala folder, create a new Scala class named Join, choosing Object for Kind. The program is as follows:

package chx3515

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.SparkContext._

object Join {
  def main(args: Array[String]) = {
    // Two input files are required: the registration data and the click data.
    if (args.length < 2) {
      System.err.println("Usage: Join <file1> <file2>")
      System.exit(1)
    }

    val conf = new SparkConf().setAppName("Join")
    val sc = new SparkContext(conf)
    val format = new java.text.SimpleDateFormat("yyyy-MM-dd")

    case class Register(d: java.util.Date, uuid: String, cust_id: String, lat: Float, lng: Float)
    case class Click(d: java.util.Date, uuid: String, landing_page: Int)

    // Parse the tab-separated inputs and key both RDDs by uuid (column 1).
    val reg = sc.textFile(args(0)).map(_.split("\t")).map(r => (r(1), Register(format.parse(r(0)), r(1), r(2), r(3).toFloat, r(4).toFloat)))
    val clk = sc.textFile(args(1)).map(_.split("\t")).map(c => (c(1), Click(format.parse(c(0)), c(1), c(2).trim.toInt)))

    // Join on uuid and print the first two matched pairs.
    reg.join(clk).take(2).foreach(println)

    sc.stop()
  }
}
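To make the join semantics concrete, here is a minimal sketch that can be pasted into spark-shell (where sc is already defined). The two sample records are hypothetical but follow the tab-separated layout the parsers above expect (date, uuid, cust_id, lat, lng for registrations; date, uuid, landing_page for clicks):

// Minimal sketch of the join, runnable in spark-shell; the two records below are made up.
val regLines = sc.parallelize(Seq(
  "2014-03-04\t81da510acc4111e387f3600308919594\t2\t33.85701\t-117.85574"))
val clkLines = sc.parallelize(Seq(
  "2014-03-04\t81da510acc4111e387f3600308919594\t5"))

// Key both RDDs by the uuid column (index 1), as the program above does.
val regByUuid = regLines.map(_.split("\t")).map(r => (r(1), r.mkString("|")))
val clkByUuid = clkLines.map(_.split("\t")).map(c => (c(1), c.mkString("|")))

// join keeps only uuids present in both inputs and yields (uuid, (regRecord, clkRecord)).
regByUuid.join(clkByUuid).collect().foreach(println)

Because join operates on pair RDDs, only uuids that appear in both inputs survive, which is exactly the shape of the output shown in step 9.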

5. With the program finished, package it: in Project Structure, select Artifacts, click the Add button, choose JAR -> From modules with dependencies..., click the browse button to the right of Main Class, and select the main class you developed: Join. As shown below:


6. Change the artifact's name and remove the unnecessary dependency jars; since Spark and Scala are already installed on the cluster, those jars are all available there. As shown below:


7. After clicking OK, choose Build -> Build Artifacts... and IDEA starts building the jar.


8. When the build finishes, the jar can be found in the corresponding directory under the IDEA workspace, as shown below:



9. Submit the program to Spark: change into the /chx/idea_workspace/chx3515/out/artifacts/chx3515/ directory and run:

$SPARK_HOME/bin/spark-submit --master spark://namenode1:7077 --executor-memory 512m --class chx3515.Join chx3515.jar hdfs://namenode1:8000/dataguru/week2/join/reg.tsv hdfs://namenode1:8000/dataguru/week2/join/clk.tsv
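For a quick smoke test without the standalone cluster, the same jar can also be run in local mode; a sketch, assuming the same HDFS input paths as above (the console output shown below is from the cluster run):

$SPARK_HOME/bin/spark-submit --master local[2] --class chx3515.Join chx3515.jar hdfs://namenode1:8000/dataguru/week2/join/reg.tsv hdfs://namenode1:8000/dataguru/week2/join/clk.tsv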

Submitting to the cluster produces output like the following:

[hadoop@namenode1 chx3515]$ $SPARK_HOME/bin/spark-submit --master spark://namenode1:7077 --executor-memory 512m --class chx3515.Join chx3515.jar hdfs://namenode1:8000/dataguru/week2/join/reg.tsv hdfs://namenode1:8000/dataguru/week2/join/reg.tsv
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/12/16 23:32:11 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/12/16 23:32:11 INFO SecurityManager: Changing view acls to: hadoop
14/12/16 23:32:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/12/16 23:32:14 INFO Slf4jLogger: Slf4jLogger started
14/12/16 23:32:15 INFO Remoting: Starting remoting
14/12/16 23:32:24 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@namenode1:33244]
14/12/16 23:32:24 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@namenode1:33244]
14/12/16 23:32:24 INFO SparkEnv: Registering MapOutputTracker
14/12/16 23:32:24 INFO SparkEnv: Registering BlockManagerMaster
14/12/16 23:32:24 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20141216233224-8a70
14/12/16 23:32:24 INFO MemoryStore: MemoryStore started with capacity 294.9 MB.
14/12/16 23:32:24 INFO ConnectionManager: Bound socket to port 34119 with id = ConnectionManagerId(namenode1,34119)
14/12/16 23:32:24 INFO BlockManagerMaster: Trying to register BlockManager
14/12/16 23:32:24 INFO BlockManagerInfo: Registering block manager namenode1:34119 with 294.9 MB RAM
14/12/16 23:32:24 INFO BlockManagerMaster: Registered BlockManager
14/12/16 23:32:24 INFO HttpServer: Starting HTTP Server
14/12/16 23:32:24 INFO HttpBroadcast: Broadcast server started at http://192.168.11.120:47956
14/12/16 23:32:24 INFO HttpFileServer: HTTP File server directory is /tmp/spark-8abda079-ccb1-40f1-8e45-722b7803b1df
14/12/16 23:32:24 INFO HttpServer: Starting HTTP Server
14/12/16 23:32:25 INFO SparkUI: Started SparkUI at http://namenode1:4040
14/12/16 23:32:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/16 23:32:30 INFO SparkContext: Added JAR file:/chx/idea_workspace/chx3515/out/artifacts/chx3515/chx3515.jar at http://192.168.11.120:34693/jars/chx3515.jar with timestamp 1418743950057
14/12/16 23:32:30 INFO AppClient$ClientActor: Connecting to master spark://namenode1:7077...
14/12/16 23:32:32 INFO MemoryStore: ensureFreeSpace(138763) called with curMem=0, maxMem=309225062
14/12/16 23:32:32 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 135.5 KB, free 294.8 MB)
14/12/16 23:32:33 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20141216233233-0000
14/12/16 23:32:33 INFO AppClient$ClientActor: Executor added: app-20141216233233-0000/0 on worker-20141216233159-datanode2-46353 (datanode2:46353) with 1 cores
14/12/16 23:32:33 INFO SparkDeploySchedulerBackend: Granted executor ID app-20141216233233-0000/0 on hostPort datanode2:46353 with 1 cores, 512.0 MB RAM
14/12/16 23:32:33 INFO AppClient$ClientActor: Executor added: app-20141216233233-0000/1 on worker-20141216233139-namenode1-54411 (namenode1:54411) with 1 cores
14/12/16 23:32:33 INFO SparkDeploySchedulerBackend: Granted executor ID app-20141216233233-0000/1 on hostPort namenode1:54411 with 1 cores, 512.0 MB RAM
14/12/16 23:32:33 INFO AppClient$ClientActor: Executor added: app-20141216233233-0000/2 on worker-20141216233157-datanode1-33840 (datanode1:33840) with 1 cores
14/12/16 23:32:33 INFO SparkDeploySchedulerBackend: Granted executor ID app-20141216233233-0000/2 on hostPort datanode1:33840 with 1 cores, 512.0 MB RAM
14/12/16 23:32:33 INFO MemoryStore: ensureFreeSpace(138811) called with curMem=138763, maxMem=309225062
14/12/16 23:32:33 INFO MemoryStore: Block broadcast_1 stored as values to memory (estimated size 135.6 KB, free 294.6 MB)
14/12/16 23:32:34 INFO AppClient$ClientActor: Executor updated: app-20141216233233-0000/1 is now RUNNING
14/12/16 23:32:34 INFO AppClient$ClientActor: Executor updated: app-20141216233233-0000/2 is now RUNNING
14/12/16 23:32:34 INFO AppClient$ClientActor: Executor updated: app-20141216233233-0000/0 is now RUNNING
14/12/16 23:32:49 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@datanode1:51225/user/Executor#-141331840] with ID 2
14/12/16 23:32:49 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@datanode2:37973/user/Executor#-1495192232] with ID 0
14/12/16 23:32:51 INFO FileInputFormat: Total input paths to process : 1
14/12/16 23:32:54 INFO BlockManagerInfo: Registering block manager datanode2:56812 with 297.0 MB RAM
14/12/16 23:32:54 INFO BlockManagerInfo: Registering block manager datanode1:50930 with 297.0 MB RAM
14/12/16 23:33:03 INFO FileInputFormat: Total input paths to process : 1
14/12/16 23:33:03 INFO SparkContext: Starting job: take at Join.scala:22
14/12/16 23:33:03 INFO DAGScheduler: Registering RDD 3 (map at Join.scala:19)
14/12/16 23:33:03 INFO DAGScheduler: Registering RDD 7 (map at Join.scala:20)
14/12/16 23:33:03 INFO DAGScheduler: Got job 0 (take at Join.scala:22) with 1 output partitions (allowLocal=true)
14/12/16 23:33:03 INFO DAGScheduler: Final stage: Stage 0(take at Join.scala:22)
14/12/16 23:33:03 INFO DAGScheduler: Parents of final stage: List(Stage 1, Stage 2)
14/12/16 23:33:03 INFO DAGScheduler: Missing parents: List(Stage 1, Stage 2)
14/12/16 23:33:03 INFO DAGScheduler: Submitting Stage 1 (MappedRDD[3] at map at Join.scala:19), which has no missing parents
14/12/16 23:33:04 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (MappedRDD[3] at map at Join.scala:19)
14/12/16 23:33:04 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
14/12/16 23:33:04 INFO DAGScheduler: Submitting Stage 2 (MappedRDD[7] at map at Join.scala:20), which has no missing parents
14/12/16 23:33:04 INFO TaskSetManager: Starting task 1.0:0 as TID 0 on executor 0: datanode2 (PROCESS_LOCAL)
14/12/16 23:33:04 INFO TaskSetManager: Serialized task 1.0:0 as 12820 bytes in 29 ms
14/12/16 23:33:04 INFO TaskSetManager: Starting task 1.0:1 as TID 1 on executor 2: datanode1 (PROCESS_LOCAL)
14/12/16 23:33:04 INFO DAGScheduler: Submitting 2 missing tasks from Stage 2 (MappedRDD[7] at map at Join.scala:20)
14/12/16 23:33:04 INFO TaskSchedulerImpl: Adding task set 2.0 with 2 tasks
14/12/16 23:33:04 INFO TaskSetManager: Serialized task 1.0:1 as 12820 bytes in 1 ms
14/12/16 23:33:35 INFO TaskSetManager: Starting task 2.0:0 as TID 2 on executor 2: datanode1 (PROCESS_LOCAL)
14/12/16 23:33:35 INFO TaskSetManager: Serialized task 2.0:0 as 12823 bytes in 1 ms
14/12/16 23:33:35 INFO DAGScheduler: Completed ShuffleMapTask(1, 1)
14/12/16 23:33:35 INFO TaskSetManager: Finished TID 1 in 31185 ms on datanode1 (progress: 1/2)
14/12/16 23:33:35 INFO TaskSetManager: Starting task 2.0:1 as TID 3 on executor 2: datanode1 (PROCESS_LOCAL)
14/12/16 23:33:35 INFO TaskSetManager: Serialized task 2.0:1 as 12823 bytes in 1 ms
14/12/16 23:33:35 INFO DAGScheduler: Completed ShuffleMapTask(2, 0)
14/12/16 23:33:35 INFO TaskSetManager: Finished TID 2 in 383 ms on datanode1 (progress: 1/2)
14/12/16 23:33:36 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@namenode1:52428/user/Executor#-1303856878] with ID 1
14/12/16 23:33:36 INFO DAGScheduler: Completed ShuffleMapTask(2, 1)
14/12/16 23:33:36 INFO TaskSetManager: Finished TID 3 in 459 ms on datanode1 (progress: 2/2)
14/12/16 23:33:36 INFO DAGScheduler: Stage 2 (map at Join.scala:20) finished in 31.897 s
14/12/16 23:33:36 INFO DAGScheduler: looking for newly runnable stages
14/12/16 23:33:36 INFO DAGScheduler: running: Set(Stage 1)
14/12/16 23:33:36 INFO DAGScheduler: waiting: Set(Stage 0)
14/12/16 23:33:36 INFO DAGScheduler: failed: Set()
14/12/16 23:33:36 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
14/12/16 23:33:36 INFO DAGScheduler: Missing parents for Stage 0: List(Stage 1)
14/12/16 23:33:37 INFO DAGScheduler: Completed ShuffleMapTask(1, 0)
14/12/16 23:33:37 INFO DAGScheduler: Stage 1 (map at Join.scala:19) finished in 33.507 s
14/12/16 23:33:37 INFO DAGScheduler: looking for newly runnable stages
14/12/16 23:33:37 INFO DAGScheduler: running: Set()
14/12/16 23:33:37 INFO DAGScheduler: waiting: Set(Stage 0)
14/12/16 23:33:37 INFO DAGScheduler: failed: Set()
14/12/16 23:33:37 INFO TaskSetManager: Finished TID 0 in 33499 ms on datanode2 (progress: 2/2)
14/12/16 23:33:37 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
14/12/16 23:33:37 INFO DAGScheduler: Missing parents for Stage 0: List()
14/12/16 23:33:37 INFO DAGScheduler: Submitting Stage 0 (FlatMappedValuesRDD[10] at join at Join.scala:22), which is now runnable
14/12/16 23:33:38 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (FlatMappedValuesRDD[10] at join at Join.scala:22)
14/12/16 23:33:38 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/12/16 23:33:38 INFO TaskSetManager: Starting task 0.0:0 as TID 4 on executor 2: datanode1 (PROCESS_LOCAL)
14/12/16 23:33:38 INFO TaskSetManager: Serialized task 0.0:0 as 13257 bytes in 0 ms
14/12/16 23:33:38 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 0 to spark@datanode1:46333
14/12/16 23:33:42 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 146 bytes
14/12/16 23:33:42 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 1 to spark@datanode1:46333
14/12/16 23:33:42 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 1 is 136 bytes
14/12/16 23:33:42 INFO BlockManagerInfo: Registering block manager namenode1:51768 with 294.9 MB RAM
14/12/16 23:33:42 INFO DAGScheduler: Completed ResultTask(0, 0)
14/12/16 23:33:42 INFO TaskSetManager: Finished TID 4 in 4380 ms on datanode1 (progress: 1/1)
14/12/16 23:33:42 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/12/16 23:33:42 INFO DAGScheduler: Stage 0 (take at Join.scala:22) finished in 4.284 s
14/12/16 23:33:42 INFO SparkContext: Job finished: take at Join.scala:22, took 38.894699663 s
(81da510acc4111e387f3600308919594,(Register(Tue Mar 04 00:00:00 CST 2014,81da510acc4111e387f3600308919594,2,33.85701,-117.85574),Click(Tue Mar 04 00:00:00 CST 2014,81da510acc4111e387f3600308919594,2)))
(15dfb8e6cc4111e3a5bb600308919594,(Register(Sun Mar 02 00:00:00 CST 2014,15dfb8e6cc4111e3a5bb600308919594,1,33.659943,-117.95812),Click(Sun Mar 02 00:00:00 CST 2014,15dfb8e6cc4111e3a5bb600308919594,1)))
14/12/16 23:33:42 INFO SparkUI: Stopped Spark web UI at http://namenode1:4040
14/12/16 23:33:42 INFO DAGScheduler: Stopping DAGScheduler
14/12/16 23:33:42 INFO SparkDeploySchedulerBackend: Shutting down all executors
14/12/16 23:33:42 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
14/12/16 23:33:43 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
14/12/16 23:33:43 INFO ConnectionManager: Selector thread was interrupted!
14/12/16 23:33:43 INFO ConnectionManager: ConnectionManager stopped
14/12/16 23:33:43 INFO MemoryStore: MemoryStore cleared
14/12/16 23:33:43 INFO BlockManager: BlockManager stopped
14/12/16 23:33:43 INFO BlockManagerMasterActor: Stopping BlockManagerMaster
14/12/16 23:33:43 INFO BlockManagerMaster: BlockManagerMaster stopped
14/12/16 23:33:43 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
14/12/16 23:33:44 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
14/12/16 23:33:44 INFO SparkContext: Successfully stopped SparkContext
[hadoop@namenode1 chx3515]$

The test data files clk.tsv and reg.tsv can be downloaded from the following link: http://download.csdn.net/detail/chx3515/8268069


