Spark Sorting: A Deep Dive into Secondary Sort in Java with a Custom Key


Secondary sort in Spark, implemented in Java.

[Input data file]

2 3
4 1
3 2
4 3
8 7
2 1

[Output]

Rows are ordered by the first field; ties (the two rows starting with 2 and the two starting with 4) are broken by the second field.

2 1
2 3
3 2
4 1
4 3
8 7

[Source files] SecondaySortApp.java, SecondarySortKey.java

 

Walkthrough of the SecondaySortApp class:

1. Read the data file, one record per line:

JavaRDD<String> lines = sc.textFile("G://IMFBigDataSpark2016//tesdata//helloSpark.txt");
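As an aside beyond the original text: textFile also accepts a minPartitions argument, so a variant that asks for more read parallelism could look like this:

JavaRDD<String> lines = sc.textFile("G://IMFBigDataSpark2016//tesdata//helloSpark.txt", 2); // request at least 2 partitions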

2. Split each line into its words and return a JavaPairRDD of key-value pairs: the key is a SecondarySortKey (the line's first and second numbers go into its first and second fields, respectively), and the value is the whole line:

JavaPairRDD<SecondarySortKey, String> pairs = lines.mapToPair(new PairFunction<String, SecondarySortKey, String>()

 

3. Sort pairs by key: sortByKey orders the pairs using SecondarySortKey's compareTo method and returns a new JavaPairRDD:

JavaPairRDD<SecondarySortKey, String> sorted = pairs.sortByKey();
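sortByKey sorts in ascending order by default; it also has an overload taking a boolean flag, so a descending variant (not used in this article) would be:

JavaPairRDD<SecondarySortKey, String> sortedDesc = pairs.sortByKey(false); // false = descending order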

4. Return a JavaRDD of type String: after the secondary sort the key is no longer needed, so map each sorted pair back to just its original line:

JavaRDD<String> secondarySorted = sorted.map(new Function<Tuple2<SecondarySortKey, String>, String>()

Here sortedContent is a Tuple2: its first element is the SecondarySortKey and its second element is the line itself. After sorting, we only need to take the second element (the original record):

public String call(Tuple2<SecondarySortKey, String> sortedContent) throws Exception

5. Print the secondary-sorted result:

secondarySorted.foreach(new VoidFunction<String>() {
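One caveat worth adding (not in the original text): foreach runs on the executors, so in cluster mode the println output lands in executor logs rather than on the driver console. A sketch that prints on the driver instead would collect the (small) result first:

for (String line : secondarySorted.collect()) { // bring the sorted lines to the driver; only safe for small data sets
    System.out.println(line);
}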

 

[Custom key] SecondarySortKey.java

Define the fields of the key that drive the secondary sort: first and second.

Override the $greater, $greater$eq, $less, $less$eq, compare, and compareTo methods to implement the comparison (a sketch follows this list).

Define hashCode and equals.
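The article lists SecondarySortKey.java among the source files but never shows its contents. The following is a minimal sketch consistent with the description above: it implements scala.math.Ordered<SecondarySortKey> (which is why the $less/$greater family must be overridden when implementing the trait from Java) plus Serializable. The field names first and second come from the text; the constructor and the exact bodies of compare, hashCode, and equals are assumptions.

package com.dt.spark.SparkApps.cores;

import java.io.Serializable;

import scala.math.Ordered;

public class SecondarySortKey implements Ordered<SecondarySortKey>, Serializable {

 private static final long serialVersionUID = 1L;

 private int first;  // primary sort field (assumed, per the description above)
 private int second; // secondary sort field, used as the tie-breaker

 public SecondarySortKey(int first, int second) {
  this.first = first;
  this.second = second;
 }

 @Override
 public int compare(SecondarySortKey that) {
  // Order by first; break ties with second.
  if (this.first != that.first) {
   return Integer.compare(this.first, that.first);
  }
  return Integer.compare(this.second, that.second);
 }

 @Override
 public int compareTo(SecondarySortKey that) {
  return compare(that);
 }

 @Override
 public boolean $less(SecondarySortKey that) {
  return compare(that) < 0;
 }

 @Override
 public boolean $greater(SecondarySortKey that) {
  return compare(that) > 0;
 }

 @Override
 public boolean $less$eq(SecondarySortKey that) {
  return compare(that) <= 0;
 }

 @Override
 public boolean $greater$eq(SecondarySortKey that) {
  return compare(that) >= 0;
 }

 @Override
 public int hashCode() {
  final int prime = 31;
  int result = prime + first;
  result = prime * result + second;
  return result;
 }

 @Override
 public boolean equals(Object obj) {
  if (this == obj) return true;
  if (obj == null || getClass() != obj.getClass()) return false;
  SecondarySortKey other = (SecondarySortKey) obj;
  return first == other.first && second == other.second;
 }
}

With a key like this, the sortByKey call in step 3 can delegate entirely to compareTo.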

[Source file contents] SecondaySortApp.java

 

package com.dt.spark.SparkApps.cores;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;

import scala.Tuple2;

public class SecondaySortApp {

 public static void main(String[] args) {
  SparkConf conf = new SparkConf().setAppName("SecondaySortApp").setMaster("local");
  JavaSparkContext sc = new JavaSparkContext(conf); // under the hood, this wraps Scala's SparkContext
  JavaRDD<String> lines = sc.textFile("G://IMFBigDataSpark2016//tesdata//helloSpark.txt");
  
  JavaPairRDD<SecondarySortKey, String> pairs = lines.mapToPair(new PairFunction<String, SecondarySortKey, String>() {

   private static final long serialVersionUID = 1L;

   @Override
   public Tuple2<SecondarySortKey, String> call(String line) throws Exception {
    // Parse the line's two numbers into a composite key; keep the raw line as the value.
    String[] splited = line.split(" ");
    SecondarySortKey key = new SecondarySortKey(Integer.valueOf(splited[0]), Integer.valueOf(splited[1]));
    return new Tuple2<SecondarySortKey, String>(key, line);
   }
  });
  
  JavaPairRDD<SecondarySortKey, String> sorted = pairs.sortByKey();
  
  JavaRDD<String> secondarySorted = sorted.map(new Function<Tuple2<SecondarySortKey, String>, String>() {

   private static final long serialVersionUID = 1L;

   @Override
   public String call(Tuple2<SecondarySortKey, String> sortedContent) throws Exception {
    // Debug output: print the key and the original line for each sorted record.
    System.out.println("sortedContent._1    " + sortedContent._1.toString());
    System.out.println("sortedContent._2    " + sortedContent._2);
    
    return sortedContent._2;
   }
  });
  secondarySorted.foreach(new VoidFunction<String>() {

   @Override
   public void call(String sorted) throws Exception {
     System.out.println(sorted);
   }
  });
  
 }

}

[Console log from the run]

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/02/28 20:20:04 INFO SparkContext: Running Spark version 1.6.0
16/02/28 20:20:06 INFO SecurityManager: Changing view acls to: admin
16/02/28 20:20:06 INFO SecurityManager: Changing modify acls to: admin
16/02/28 20:20:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(admin); users with modify permissions: Set(admin)
16/02/28 20:20:07 INFO Utils: Successfully started service 'sparkDriver' on port 55740.
16/02/28 20:20:08 INFO Slf4jLogger: Slf4jLogger started
16/02/28 20:20:08 INFO Remoting: Starting remoting
16/02/28 20:20:08 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.3.6:55753]
16/02/28 20:20:08 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 55753.
16/02/28 20:20:08 INFO SparkEnv: Registering MapOutputTracker
16/02/28 20:20:08 INFO SparkEnv: Registering BlockManagerMaster
16/02/28 20:20:08 INFO DiskBlockManager: Created local directory at C:\Users\admin\AppData\Local\Temp\blockmgr-f01069fc-9772-4dd3-b744-7d3f3f7985e7
16/02/28 20:20:08 INFO MemoryStore: MemoryStore started with capacity 146.2 MB
16/02/28 20:20:08 INFO SparkEnv: Registering OutputCommitCoordinator
16/02/28 20:20:08 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/02/28 20:20:08 INFO SparkUI: Started SparkUI at http://192.168.3.6:4040
16/02/28 20:20:08 INFO Executor: Starting executor ID driver on host localhost
16/02/28 20:20:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55760.
16/02/28 20:20:08 INFO NettyBlockTransferService: Server created on 55760
16/02/28 20:20:08 INFO BlockManagerMaster: Trying to register BlockManager
16/02/28 20:20:08 INFO BlockManagerMasterEndpoint: Registering block manager localhost:55760 with 146.2 MB RAM, BlockManagerId(driver, localhost, 55760)
16/02/28 20:20:08 INFO BlockManagerMaster: Registered BlockManager
16/02/28 20:20:09 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
16/02/28 20:20:09 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 114.9 KB, free 114.9 KB)
16/02/28 20:20:09 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 128.8 KB)
16/02/28 20:20:09 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55760 (size: 13.9 KB, free: 146.2 MB)
16/02/28 20:20:09 INFO SparkContext: Created broadcast 0 from textFile at SecondaySortApp.java:18
16/02/28 20:20:11 WARN : Your hostname, pc resolves to a loopback/non-reachable address: fe80:0:0:0:bdb2:979:df5e:7337%eth19, but we couldn't find any external IP address!
16/02/28 20:20:14 INFO FileInputFormat: Total input paths to process : 1
16/02/28 20:20:14 INFO SparkContext: Starting job: foreach at SecondaySortApp.java:50
16/02/28 20:20:14 INFO DAGScheduler: Registering RDD 2 (mapToPair at SecondaySortApp.java:20)
16/02/28 20:20:14 INFO DAGScheduler: Got job 0 (foreach at SecondaySortApp.java:50) with 1 output partitions
16/02/28 20:20:14 INFO DAGScheduler: Final stage: ResultStage 1 (foreach at SecondaySortApp.java:50)
16/02/28 20:20:14 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/02/28 20:20:14 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/02/28 20:20:14 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[2] at mapToPair at SecondaySortApp.java:20), which has no missing parents
16/02/28 20:20:14 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.7 KB, free 133.6 KB)
16/02/28 20:20:14 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.7 KB, free 136.3 KB)
16/02/28 20:20:14 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:55760 (size: 2.7 KB, free: 146.2 MB)
16/02/28 20:20:14 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/02/28 20:20:14 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[2] at mapToPair at SecondaySortApp.java:20)
16/02/28 20:20:14 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/02/28 20:20:14 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2142 bytes)
16/02/28 20:20:14 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/02/28 20:20:14 INFO HadoopRDD: Input split: file:/G:/IMFBigDataSpark2016/tesdata/helloSpark.txt:0+28
16/02/28 20:20:14 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/02/28 20:20:14 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/02/28 20:20:14 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/02/28 20:20:14 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/02/28 20:20:14 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/02/28 20:20:14 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2253 bytes result sent to driver
16/02/28 20:20:14 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 139 ms on localhost (1/1)
16/02/28 20:20:14 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/02/28 20:20:14 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at SecondaySortApp.java:20) finished in 0.161 s
16/02/28 20:20:14 INFO DAGScheduler: looking for newly runnable stages
16/02/28 20:20:14 INFO DAGScheduler: running: Set()
16/02/28 20:20:14 INFO DAGScheduler: waiting: Set(ResultStage 1)
16/02/28 20:20:14 INFO DAGScheduler: failed: Set()
16/02/28 20:20:14 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[4] at map at SecondaySortApp.java:34), which has no missing parents
16/02/28 20:20:14 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.8 KB, free 140.1 KB)
16/02/28 20:20:14 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.2 KB, free 142.3 KB)
16/02/28 20:20:14 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:55760 (size: 2.2 KB, free: 146.2 MB)
16/02/28 20:20:14 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1006
16/02/28 20:20:14 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at map at SecondaySortApp.java:34)
16/02/28 20:20:14 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
16/02/28 20:20:14 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,NODE_LOCAL, 1894 bytes)
16/02/28 20:20:14 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
16/02/28 20:20:14 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
16/02/28 20:20:14 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 6 ms
sortedContent._1    com.dt.spark.SparkApps.cores.SecondarySortKey@400
sortedContent._2    2 1
2 1
sortedContent._1    com.dt.spark.SparkApps.cores.SecondarySortKey@402
sortedContent._2    2 3
2 3
sortedContent._1    com.dt.spark.SparkApps.cores.SecondarySortKey@420
sortedContent._2    3 2
3 2
sortedContent._1    com.dt.spark.SparkApps.cores.SecondarySortKey@43e
sortedContent._2    4 1
4 1
sortedContent._1    com.dt.spark.SparkApps.cores.SecondarySortKey@440
sortedContent._2    4 3
4 3
sortedContent._1    com.dt.spark.SparkApps.cores.SecondarySortKey@4c0
sortedContent._2    8 7
8 7
16/02/28 20:20:14 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1165 bytes result sent to driver
16/02/28 20:20:14 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 60 ms on localhost (1/1)
16/02/28 20:20:14 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
16/02/28 20:20:14 INFO DAGScheduler: ResultStage 1 (foreach at SecondaySortApp.java:50) finished in 0.061 s
16/02/28 20:20:14 INFO DAGScheduler: Job 0 finished: foreach at SecondaySortApp.java:50, took 0.368809 s
16/02/28 20:20:14 INFO SparkContext: Invoking stop() from shutdown hook
16/02/28 20:20:15 INFO SparkUI: Stopped Spark web UI at http://192.168.3.6:4040
16/02/28 20:20:15 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/02/28 20:20:15 INFO MemoryStore: MemoryStore cleared
16/02/28 20:20:15 INFO BlockManager: BlockManager stopped
16/02/28 20:20:15 INFO BlockManagerMaster: BlockManagerMaster stopped
16/02/28 20:20:15 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/02/28 20:20:15 INFO SparkContext: Successfully stopped SparkContext
16/02/28 20:20:15 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/02/28 20:20:15 INFO ShutdownHookManager: Shutdown hook called
16/02/28 20:20:15 INFO ShutdownHookManager: Deleting directory C:\Users\admin\AppData\Local\Temp\spark-2c790e4e-d35a-4151-9b71-226f1c96dbf3
16/02/28 20:20:15 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/02/28 20:20:15 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.

 

 
