Spark Streaming in practice: multi-dimensional analysis of forum/website behavior (PV, UV, registrations, bounce rate)

The forum data is generated automatically by code. The generated records are sent to Kafka by a producer; the Spark Streaming program then pulls the forum/website users' online behavior records from Kafka and performs multi-dimensional online analysis.
The data format is as follows:
date: the date, formatted as yyyy-MM-dd
timestamp: the timestamp
userID: the user ID (null for unregistered users)
pageID: the page ID
channelID: the channel (board) ID
action: the user action, either View or Register


The generated simulated click data looks like this:

product:2017-06-20      1497948113817   1397    91      ML      View
product:2017-06-20      1497948113819   149     1941    ML      Register
product:2017-06-20      1497948113820   null    335     Spark   Register
product:2017-06-20      1497948113821   1724    1038    ML      View
product:2017-06-20      1497948113822   282     494     Flink   View
product:2017-06-20      1497948113823   null    1619    ML      View
product:2017-06-20      1497948113823   991     1950    ML      View
product:2017-06-20      1497948113824   686     1347    Kafka   Register
product:2017-06-20      1497948113825   1982    1145    Hive    View
product:2017-06-20      1497948113826   211     1097    Storm   View
product:2017-06-20      1497948113827   633     1345    Hive    View
product:2017-06-20      1497948113828   957     1381    Hadoop  Register
product:2017-06-20      1497948113831   300     1781    Spark   View
product:2017-06-20      1497948113832   1244    1076    Hadoop  Register
product:2017-06-20      1497948113833   1958    634     ML      View
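
Each Kafka message is a single tab-separated line in the field order listed above; the leading "product:" shown here is only a print prefix added by the generator's console output, not part of the message. A minimal parsing sketch, assuming that field order (the class name LogRecordParseSketch is made up for illustration):

public class LogRecordParseSketch {
    public static void main(String[] args) {
        // One message value as it arrives from Kafka (tab-separated, no "product:" prefix)
        String message = "2017-06-20\t1497948113817\t1397\t91\tML\tView";
        String[] fields = message.split("\t");
        String date      = fields[0];                      // yyyy-MM-dd
        long   timestamp = Long.parseLong(fields[1]);      // epoch millis
        String userID    = fields[2];                      // the literal string "null" for unregistered users
        long   pageID    = Long.parseLong(fields[3]);
        String channelID = fields[4];                      // e.g. Spark, Kafka, ML ...
        String action    = fields[5];                      // "View" or "Register"
        System.out.println(date + " " + timestamp + " " + userID + " "
                + pageID + " " + channelID + " " + action);
    }
}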

Code that generates the simulated data:

package org.apache.spark.examples.streaming;

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Properties;
import java.util.Random;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

/**
 * This class generates data and sends it to Kafka. If you start a Kafka console consumer it will
 * receive the data; that step is only to verify that producing and consuming work. Once verified,
 * shut that consumer down and start OnlineBBSUserLogss.java as the consumer, which processes the
 * data for PV, UV, and so on. Because the topic is supposed to have only one consumer, the console
 * consumer should be shut down before starting the program (in my case it still worked fine
 * without shutting it down).
 */
public class SparkStreamingDataManuallyProducerForKafkas extends Thread {

    // The forum channels
    static String[] channelNames = new String[]{
        "Spark", "Scala", "Kafka", "Flink", "Hadoop", "Storm", "Hive", "Impala", "HBase", "ML"};
    // The two user actions
    static String[] actionNames = new String[]{"View", "Register"};

    private static Producer<String, String> producerForKafka;
    private static String dateToday;
    private static Random random;

    // As a Thread we override run(): business logic first, then flow control
    @Override
    public void run() {
        int counter = 0; // send in batches of 500
        while (true) { // simulate the real situation: an endless, asynchronous stream
            counter++;
            String userLog = userlogs();
            System.out.println("product:" + userLog);
            // "test" is the topic
            producerForKafka.send(new KeyedMessage<String, String>("test", userLog));
            if (0 == counter % 500) {
                counter = 0;
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private static String userlogs() {
        StringBuffer userLogBuffer = new StringBuffer("");
        int[] unregisteredUsers = new int[]{1, 2, 3, 4, 5, 6, 7, 8};
        long timestamp = new Date().getTime();
        Long userID = 0L;
        long pageID = 0L;
        // Randomly generated user ID (1 in 8 chance of an unregistered user, i.e. null)
        if (unregisteredUsers[random.nextInt(8)] == 1) {
            userID = null;
        } else {
            userID = (long) random.nextInt((int) 2000);
        }
        // Randomly generated page ID
        pageID = random.nextInt((int) 2000);
        // Randomly chosen channel
        String channel = channelNames[random.nextInt(10)];
        // Randomly chosen action
        String action = actionNames[random.nextInt(2)];
        userLogBuffer.append(dateToday)
                     .append("\t").append(timestamp)
                     .append("\t").append(userID)
                     .append("\t").append(pageID)
                     .append("\t").append(channel)
                     .append("\t").append(action);
        // Do not append "\n" here: Kafka delimits messages by itself, and an extra newline
        // would break parsing on the consumer side.
        return userLogBuffer.toString();
    }

    public static void main(String[] args) throws Exception {
        dateToday = new SimpleDateFormat("yyyy-MM-dd").format(new Date());
        random = new Random();
        Properties props = new Properties();
        props.put("zk.connect", "h71:2181,h72:2181,h73:2181");
        props.put("metadata.broker.list", "h71:9092,h72:9092,h73:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ProducerConfig config = new ProducerConfig(props);
        producerForKafka = new Producer<String, String>(config);
        new SparkStreamingDataManuallyProducerForKafkas().start();
    }
}
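
The class comment suggests first verifying with a plain Kafka consumer that messages actually arrive on the "test" topic. If you want that quick check, the stock console consumer should do (command sketch, assuming the same hosts as above):

[hadoop@h71 kafka_2.10-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper h71:2181 --topic test --from-beginning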

Code for the multi-dimensional analysis of PV, UV, registrations, and bounce rate:

package org.apache.spark.examples.streaming;

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

/*
 * Consumes the data produced by SparkStreamingDataManuallyProducerForKafkas and computes
 * PV, UV, the number of registrations, and the bounce rate.
 */
public class OnlineBBSUserLogss {

    public static void main(String[] args) {
        /**
         * Step 1: configure SparkConf.
         * 1. Use at least 2 threads: a Spark Streaming application needs at least one thread to
         *    keep receiving data and at least one thread to process it (otherwise nothing is left
         *    to process the data, and memory and disk eventually become overloaded).
         * 2. On a cluster each Executor usually has more than one thread. For Spark Streaming,
         *    around 5 cores per Executor has worked best in our experience (an odd number of
         *    cores, e.g. 3, 5 or 7, is said to perform best).
         */
//      SparkConf conf = new SparkConf().setMaster("spark://h71:7077").setAppName("OnlineBBSUserLogs");
        SparkConf conf = new SparkConf().setAppName("wordcount").setMaster("local[2]");

        /**
         * Step 2: create the JavaStreamingContext.
         * 1. This is the starting point and scheduling core of every Spark Streaming application.
         *    It can be built from a SparkConf, or restored from a persisted checkpoint (the typical
         *    case is the Driver restarting after a crash: since Spark Streaming runs 24/7, the
         *    Driver must resume its previous state, which requires a checkpoint).
         * 2. An application may create several StreamingContexts, but the one currently running
         *    must be stopped before the next one is used. The takeaway: Spark Streaming is just an
         *    application on top of Spark Core; the framework only asks the engineer to supply the
         *    business logic.
         */
        JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(5));

        /**
         * Step 3: create the input DStream.
         * 1. The input source can be File, HDFS, Flume, Kafka, Socket, etc.; here it is Kafka.
         * 2. Whether or not data arrives in a batch interval, the processing flow is the same.
         * 3. If many 5-second intervals carry no data, launching empty Jobs wastes scheduling
         *    resources; production code typically checks whether a batch has data before
         *    submitting a Job.
         */
        Map<String, String> kafkaParameters = new HashMap<String, String>();
        kafkaParameters.put("metadata.broker.list", "h71:9092,h72:9092,h73:9092");
        Set<String> topics = new HashSet<String>();
        topics.add("test");

        JavaPairInputDStream<String, String> lines = KafkaUtils.createDirectStream(jsc,
                String.class, String.class, StringDecoder.class, StringDecoder.class,
                kafkaParameters, topics);

        // Online PV
        onlinePagePV(lines);
        // Online UV
//      onlineUV(lines);
        // Online registration count
//      onlineRegistered(lines);
        // Online bounce rate
//      onlineJumped(lines);
        // Online PV per channel
//      onlineChannelPV(lines);

        /*
         * The Spark Streaming execution engine, i.e. the Driver, starts running. The Driver runs
         * in a new thread with an internal message loop that receives messages from the
         * application itself or from the Executors.
         */
        jsc.start();
        jsc.awaitTermination();
        jsc.close();
    }

    private static void onlineChannelPV(JavaPairInputDStream<String, String> lines) {
        lines.mapToPair(new PairFunction<Tuple2<String, String>, String, Long>() {
            @Override
            public Tuple2<String, Long> call(Tuple2<String, String> t) throws Exception {
                String[] logs = t._2.split("\t");
                String channelID = logs[4];
                return new Tuple2<String, Long>(channelID, 1L);
            }
        }).reduceByKey(new Function2<Long, Long, Long>() { // accumulate values for the same key (map-side and reduce-side)
            @Override
            public Long call(Long v1, Long v2) throws Exception {
                return v1 + v2;
            }
        }).print();
    }

    private static void onlineJumped(JavaPairInputDStream<String, String> lines) {
        lines.filter(new Function<Tuple2<String, String>, Boolean>() {
            @Override
            public Boolean call(Tuple2<String, String> v1) throws Exception {
                String[] logs = v1._2.split("\t");
                String action = logs[5];
                return "View".equals(action);
            }
        }).mapToPair(new PairFunction<Tuple2<String, String>, Long, Long>() {
            @Override
            public Tuple2<Long, Long> call(Tuple2<String, String> t) throws Exception {
                String[] logs = t._2.split("\t");
                // Long usrID = Long.valueOf(logs[2] != null ? logs[2] : "-1"); // wrong: logs[2] is the string "null"
                Long usrID = Long.valueOf("null".equals(logs[2]) ? "-1" : logs[2]);
                return new Tuple2<Long, Long>(usrID, 1L);
            }
        }).reduceByKey(new Function2<Long, Long, Long>() { // accumulate values for the same key
            @Override
            public Long call(Long v1, Long v2) throws Exception {
                return v1 + v2;
            }
        }).filter(new Function<Tuple2<Long, Long>, Boolean>() {
            @Override
            public Boolean call(Tuple2<Long, Long> v1) throws Exception {
                // users with exactly one page view in the batch count as "bounced"
                return 1 == v1._2;
            }
        }).count().print();
    }

    private static void onlineRegistered(JavaPairInputDStream<String, String> lines) {
        lines.filter(new Function<Tuple2<String, String>, Boolean>() {
            @Override
            public Boolean call(Tuple2<String, String> v1) throws Exception {
                String[] logs = v1._2.split("\t");
                String action = logs[5];
                return "Register".equals(action);
            }
        }).count().print();
    }

    /**
     * To compute UV we need the distinct users of each page, which requires deduplication.
     * Does DStream have a distinct()? No (not as of Spark 1.6.1). So we turn to DStream's
     * transform method and call distinct() on the underlying RDD; that deduplicates the
     * userIDs and lets us compute UV.
     */
    private static void onlineUV(JavaPairInputDStream<String, String> lines) {
        /*
         * Step 4: program against the DStream just as you would against an RDD. A DStream is the
         * template from which RDDs are produced; before computation actually happens, the DStream
         * operations of each batch are translated into RDD operations. Apply Transformation-level
         * processing (map, filter and other higher-order functions) to do the actual computation.
         */
        JavaPairDStream<String, String> logsDStream = lines.filter(new Function<Tuple2<String, String>, Boolean>() {
            @Override
            public Boolean call(Tuple2<String, String> v1) throws Exception {
                String[] logs = v1._2.split("\t");
                String action = logs[5];
                return "View".equals(action);
            }
        });

        // Build "pageID_userID" strings so that distinct() can deduplicate users per page
        logsDStream.map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> v1) throws Exception {
                String[] logs = v1._2.split("\t");
                String usrID = String.valueOf(logs[2] != null ? logs[2] : "-1");
                // The original was: Long usrID = Long.valueOf(logs[2] != null ? logs[2] : "-1");
                // which throws java.lang.NumberFormatException: For input string: "null"
                Long pageID = Long.valueOf(logs[3]);
                return pageID + "_" + usrID;
            }
        }).transform(new Function<JavaRDD<String>, JavaRDD<String>>() {
            @Override
            public JavaRDD<String> call(JavaRDD<String> v1) throws Exception {
                return v1.distinct();
            }
        }).mapToPair(new PairFunction<String, Long, Long>() {
            @Override
            public Tuple2<Long, Long> call(String t) throws Exception {
                String[] logs = t.split("_");
                Long pageId = Long.valueOf(logs[0]);
                return new Tuple2<Long, Long>(pageId, 1L);
            }
        }).reduceByKey(new Function2<Long, Long, Long>() { // accumulate values for the same key
            @Override
            public Long call(Long v1, Long v2) throws Exception {
                return v1 + v2;
            }
        }).print();
    }

    private static void onlinePagePV(JavaPairInputDStream<String, String> lines) {
        /*
         * Step 4 (same idea as in onlineUV): program against the DStream just as you would
         * against an RDD.
         */
        JavaPairDStream<String, String> logsDStream = lines.filter(new Function<Tuple2<String, String>, Boolean>() {
            @Override
            public Boolean call(Tuple2<String, String> v1) throws Exception {
                String[] logs = v1._2.split("\t");
                String action = logs[5];
                return "View".equals(action);
            }
        });

        // Map each "View" record to (pageID, 1)
        JavaPairDStream<Long, Long> pairs = logsDStream.mapToPair(new PairFunction<Tuple2<String, String>, Long, Long>() {
            @Override
            public Tuple2<Long, Long> call(Tuple2<String, String> t) throws Exception {
                String[] logs = t._2.split("\t");
                Long pageId = Long.valueOf(logs[3]);
                return new Tuple2<Long, Long>(pageId, 1L);
            }
        });

        // Sum the counts per pageID
        JavaPairDStream<Long, Long> wordsCount = pairs.reduceByKey(new Function2<Long, Long, Long>() { // accumulate values for the same key (map-side and reduce-side)
            @Override
            public Long call(Long v1, Long v2) throws Exception {
                return v1 + v2;
            }
        });

        /*
         * print() does not directly trigger Job execution here; everything is under Spark
         * Streaming's control, and whether a Job actually runs is driven by the configured batch
         * Duration.
         *
         * Note that for a Spark Streaming application to run a Job, the DStream must have an
         * output operation, such as print, saveAsTextFile, saveAsHadoopFiles, etc. The most
         * important one is foreachRDD, because Spark Streaming results are usually written to
         * Redis, a DB, a dashboard and so on, and foreachRDD lets you decide exactly where the
         * data goes.
         *
         * In production, results are typically written to Redis or a DB and rendered with a J2EE
         * stack for trend charts and online monitoring, much like a live stock ticker.
         */
        wordsCount.print();
    }
}
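
The comments in onlinePagePV note that in production the per-batch results usually go to Redis or a DB through foreachRDD instead of print. A minimal sketch of that pattern, as a drop-in replacement for wordsCount.print() (it uses the Function-based foreachRDD signature available in Spark 1.3 and needs an extra import of org.apache.spark.api.java.JavaPairRDD; writeToStore is a hypothetical placeholder for your Redis/DB client):

wordsCount.foreachRDD(new Function<JavaPairRDD<Long, Long>, Void>() {
    @Override
    public Void call(JavaPairRDD<Long, Long> rdd) throws Exception {
        // collect() is acceptable here because the per-batch aggregate is small
        for (Tuple2<Long, Long> pagePV : rdd.collect()) {
            // writeToStore(pagePV._1, pagePV._2);   // hypothetical Redis/DB write
            System.out.println("pageID=" + pagePV._1 + " pv=" + pagePV._2);
        }
        return null;
    }
});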

Start the Hadoop, Spark, ZooKeeper and Kafka clusters (I won't go over the startup steps here). The versions I used:

hadoop        hadoop-2.6.0-cdh5.5.2
kafka         kafka_2.10-0.8.2.0
spark         spark-1.3.1-bin-hadoop2.6 (I later also installed spark-1.6.0-bin-hadoop2.6, which works too)
zookeeper     zookeeper-3.4.5-cdh5.5.2
java          jdk1.7.0_25


Create the project in MyEclipse:


(A gripe here: MyEclipse 8.5 and 10.7.1 could only recognize the spark-1.3.1-bin-hadoop2.6 jars and not the spark-1.6.0-bin-hadoop2.6 jars. Using the spark-1.3.1 jars still works fine, but it bothered me, so I downloaded MyEclipse pro-2014-GA (a newer version should presumably work too), which recognizes the jars of both Spark versions.)

Package the project as streaming.jar and upload it from the local machine to the VM; I put it in the /home/hadoop/spark-1.3.1-bin-hadoop2.6 directory.


Step 1: create the Kafka topic

[hadoop@h71 kafka_2.10-0.8.2.0]$ bin/kafka-topics.sh --create --zookeeper h71:2181 --replication-factor 2 --partitions 2 --topic test

(If you don't create the topic it's not a big problem: running SparkStreamingDataManuallyProducerForKafkas.java first creates it automatically. If you run OnlineBBSUserLogss.java first, the first run fails with Exception in thread "main" org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't find leader offsets for Set(), but the topic has been created for you, so subsequent runs succeed. In both cases the auto-created topic uses the defaults of 1 partition and 1 replica.)
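
You can double-check the partition and replication settings of the topic with the kafka-topics tool, for example:

[hadoop@h71 kafka_2.10-0.8.2.0]$ bin/kafka-topics.sh --describe --zookeeper h71:2181 --topic test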


Step 2: run SparkStreamingDataManuallyProducerForKafkas

[hadoop@h71 spark-1.3.1-bin-hadoop2.6]$ bin/spark-submit --master spark://h71:7077 --name JavaWordCountByHQ --class org.apache.spark.examples.streaming.SparkStreamingDataManuallyProducerForKafkas --executor-memory 500m --total-executor-cores 2 streaming.jar

This fails with:

Exception in thread "main" java.lang.NoClassDefFoundError: kafka/producer/ProducerConfig
        at com.spark.study.streaming.SparkStreamingDataManuallyProducerForKafkas.main(SparkStreamingDataManuallyProducerForKafkas.java:102)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: kafka.producer.ProducerConfig
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 10 more

Solution:

Method 1:

Add the following to spark-env.sh:

[hadoop@h71 spark-1.3.1-bin-hadoop2.6]$ vi conf/spark-env.sh
export SPARK_HOME=/home/hadoop/spark-1.3.1-bin-hadoop2.6
export SPARK_CLASSPATH=$SPARK_HOME/lib/*

Then run SparkStreamingDataManuallyProducerForKafkas again:

[hadoop@h71 spark-1.3.1-bin-hadoop2.6]$ bin/spark-submit --master spark://h71:7077 --name JavaWordCountByHQ --class org.apache.spark.examples.streaming.SparkStreamingDataManuallyProducerForKafkas --executor-memory 500m --total-executor-cores 2 streaming.jar


This approach is not ideal, though: when you later run OnlineBBSUserLogss it prints the following warning (it does not affect the run):

[hadoop@h71 spark-1.3.1-bin-hadoop2.6]$ bin/spark-submit --master spark://h71:7077 --name JavaWordCountByHQ --class org.apache.spark.examples.streaming.OnlineBBSUserLogss --executor-memory 500m --total-executor-cores 2 streaming.jar

17/06/21 22:49:46 WARN spark.SparkConf: SPARK_CLASSPATH was detected (set to '/home/hadoop/spark-1.3.1-bin-hadoop2.6/lib/*').
This is deprecated in Spark 1.0+.
Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

Method 2 (recommended):

The warning above already tells you what to do: "Please instead use ./spark-submit with --driver-class-path to augment the driver classpath".

So run the following command instead:

[hadoop@h71 spark-1.3.1-bin-hadoop2.6]$ bin/spark-submit --master spark://h71:7077 --name JavaWordCountByHQ --class org.apache.spark.examples.streaming.SparkStreamingDataManuallyProducerForKafkas --executor-memory 500m --total-executor-cores 2 streaming.jar --driver-class-path /home/hadoop/spark-1.3.1-bin-hadoop2.6/lib/spark-examples-1.3.1-hadoop2.6.0.jar

This command generates data and writes it into Kafka; then run:

[hadoop@h71 spark-1.3.1-bin-hadoop2.6]$ bin/spark-submit --master spark://h71:7077 --name JavaWordCountByHQ --class org.apache.spark.examples.streaming.OnlineBBSUserLogss --executor-memory 500m --total-executor-cores 2 streaming.jar --driver-class-path /home/hadoop/spark-1.3.1-bin-hadoop2.6/lib/spark-examples-1.3.1-hadoop2.6.0.jar

Note: with spark-1.6.0-bin-hadoop2.6, --driver-class-path must not be placed at the end of the command or it will not be recognized; the command is:

[hadoop@h71 spark-1.6.0-bin-hadoop2.6]$ bin/spark-submit --master spark://h71:7077 --name JavaWordCountByHQ --driver-class-path /home/hadoop/spark-1.6.0-bin-hadoop2.6/lib/spark-examples-1.6.0-hadoop2.6.0.jar --class org.apache.spark.examples.streaming.OnlineBBSUserLogss --executor-memory 500m --total-executor-cores 2 streaming.jar


OnlineBBSUserLogs consumes the data successfully and produces the counts; the experiment works:

.......
16/05/08 19:00:33 INFO scheduler.DAGScheduler: Job 2 finished: print at OnlineBBSUserLogs.java:113, took 0.385315 s
-------------------------------------------
Time: 1462705200000 ms
-------------------------------------------
(Flink,89)
(Storm,99)
(Scala,97)
(HBase,107)
(Spark,91)
(Hadoop,108)
(Hive,129)
(Impala,82)
(Kafka,101)
(ML,97)
...

Key points:
1. Creating the Kafka direct stream: createDirectStream returns the lines value of type JavaPairInputDStream.
Source code of org.apache.spark.streaming.kafka.KafkaUtils#createDirectStream:

package org.apache.spark.streaming.kafka

/**
 * Create an input stream that directly pulls messages from Kafka Brokers
 * without using any receiver. This stream can guarantee that each message
 * from Kafka is included in transformations exactly once (see points below).
 *
 * Points to note:
 *  - No receivers: This stream does not use any receiver. It directly queries Kafka
 *  - Offsets: This does not use Zookeeper to store offsets. The consumed offsets are tracked
 *    by the stream itself. For interoperability with Kafka monitoring tools that depend on
 *    Zookeeper, you have to update Kafka/Zookeeper yourself from the streaming application.
 *    You can access the offsets used in each batch from the generated RDDs (see
 *    [[org.apache.spark.streaming.kafka.HasOffsetRanges]]).
 *  - Failure Recovery: To recover from driver failures, you have to enable checkpointing
 *    in the [[StreamingContext]]. The information on consumed offset can be
 *    recovered from the checkpoint. See the programming guide for details (constraints, etc.).
 *  - End-to-end semantics: This stream ensures that every records is effectively received and
 *    transformed exactly once, but gives no guarantees on whether the transformed data are
 *    outputted exactly once. For end-to-end exactly-once semantics, you have to either ensure
 *    that the output operation is idempotent, or use transactions to output records atomically.
 *    See the programming guide for more details.
 *
 * @param jssc JavaStreamingContext object
 * @param keyClass Class of the keys in the Kafka records
 * @param valueClass Class of the values in the Kafka records
 * @param keyDecoderClass Class of the key decoder
 * @param valueDecoderClass Class type of the value decoder
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 *   configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers"
 *   to be set with Kafka broker(s) (NOT zookeeper servers), specified in
 *   host1:port1,host2:port2 form.
 *   If not starting from a checkpoint, "auto.offset.reset" may be set to "largest" or "smallest"
 *   to determine where the stream starts (defaults to "largest")
 * @param topics Names of the topics to consume
 * @tparam K type of Kafka message key
 * @tparam V type of Kafka message value
 * @tparam KD type of Kafka message key decoder
 * @tparam VD type of Kafka message value decoder
 * @return DStream of (Kafka message key, Kafka message value)
 */
def createDirectStream[K, V, KD <: Decoder[K], VD <: Decoder[V]](
    jssc: JavaStreamingContext,
    keyClass: Class[K],
    valueClass: Class[V],
    keyDecoderClass: Class[KD],
    valueDecoderClass: Class[VD],
    kafkaParams: JMap[String, String],
    topics: JSet[String]
  ): JavaPairInputDStream[K, V] = {
  implicit val keyCmt: ClassTag[K] = ClassTag(keyClass)
  implicit val valueCmt: ClassTag[V] = ClassTag(valueClass)
  implicit val keyDecoderCmt: ClassTag[KD] = ClassTag(keyDecoderClass)
  implicit val valueDecoderCmt: ClassTag[VD] = ClassTag(valueDecoderClass)
  createDirectStream[K, V, KD, VD](
    jssc.ssc,
    Map(kafkaParams.asScala.toSeq: _*),
    Set(topics.asScala.toSeq: _*)
  )
}
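
The doc comment above points out that the direct stream tracks offsets itself and that the offsets of each batch can be read from the generated RDDs via HasOffsetRanges. A minimal sketch of that against the lines stream created earlier (Spark 1.3-style Function-based foreachRDD; it needs imports of org.apache.spark.api.java.JavaPairRDD, org.apache.spark.streaming.kafka.HasOffsetRanges and org.apache.spark.streaming.kafka.OffsetRange; in a real job you would persist these offsets yourself if you need them):

lines.foreachRDD(new Function<JavaPairRDD<String, String>, Void>() {
    @Override
    public Void call(JavaPairRDD<String, String> rdd) throws Exception {
        // The cast works because the RDDs produced directly by the direct stream carry offset ranges
        OffsetRange[] offsetRanges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();
        for (OffsetRange o : offsetRanges) {
            System.out.println(o.topic() + " partition " + o.partition()
                    + " offsets [" + o.fromOffset() + ", " + o.untilOffset() + ")");
        }
        return null;
    }
});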

2. After reading the values from the Kafka data stream, the mapToPair and reduceByKey operations are applied.
Source code for mapToPair, reduceByKey, PairFunction and Function2:

// org.apache.spark.api.java.function.PairFunction
/**
 * A function that returns key-value pairs (Tuple2<K, V>), and can be used to
 * construct PairRDDs.
 */
public interface PairFunction<T, K, V> extends Serializable {
  public Tuple2<K, V> call(T t) throws Exception;
}

// org.apache.spark.api.java.function.Function2
/**
 * A two-argument function that takes arguments of type T1 and T2 and returns an R.
 */
public interface Function2<T1, T2, R> extends Serializable {
  public R call(T1 v1, T2 v2) throws Exception;
}

// org.apache.spark.streaming.api.java.JavaPairDStream#reduceByKey
/**
 * Return a new DStream by applying `reduceByKey` to each RDD. The values for each key are
 * merged using the associative reduce function. Hash partitioning is used to generate the RDDs
 * with Spark's default number of partitions.
 */
def reduceByKey(func: JFunction2[V, V, V]): JavaPairDStream[K, V] =
  dstream.reduceByKey(func)

// org.apache.spark.streaming.api.java.JavaDStreamLike#mapToPair
/** Return a new DStream by applying a function to all elements of this DStream. */
def mapToPair[K2, V2](f: PairFunction[T, K2, V2]): JavaPairDStream[K2, V2] = {
  def cm: ClassTag[(K2, V2)] = fakeClassTag
  new JavaPairDStream(dstream.map[(K2, V2)](f)(cm))(fakeClassTag[K2], fakeClassTag[V2])
}
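
The same two functional interfaces are used on plain RDDs, so their behaviour is easy to try out locally. A self-contained sketch (the class name PairFunctionQuickCheck is made up for illustration; local[2] master, no cluster needed):

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

public class PairFunctionQuickCheck {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("quickcheck").setMaster("local[2]"));
        JavaPairRDD<String, Long> counts = sc.parallelize(Arrays.asList("Spark", "Kafka", "Spark"))
            .mapToPair(new PairFunction<String, String, Long>() {   // word => (word, 1)
                @Override
                public Tuple2<String, Long> call(String word) throws Exception {
                    return new Tuple2<String, Long>(word, 1L);
                }
            })
            .reduceByKey(new Function2<Long, Long, Long>() {        // sum the counts per key
                @Override
                public Long call(Long v1, Long v2) throws Exception {
                    return v1 + v2;
                }
            });
        System.out.println(counts.collect());   // e.g. [(Spark,2), (Kafka,1)] (order may vary)
        sc.stop();
    }
}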


References:
http://www.thinksaas.cn/topics/0/514/514343.html
http://blog.csdn.net/qq_21234493/article/details/51450648
