Spark: using Spark Streaming to monitor a port in real time and write the data to HDFS


Spark Streaming exists to process data as it is generated, in (near) real time.
The example below monitors port 9999 on a virtual machine: whenever words are typed on that end, this end performs a real-time word count and writes the results to HDFS.
Part 1: Process analysis
Maven dependency:

      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>1.6.1</version>
      </dependency>

Note: first run nc -lk 9999 in the virtual machine to open the listening port.

1. Create a JavaStreamingContext.
2. Create a JavaReceiverInputDStream by opening a socket text stream on the target host and port; each line typed on that end arrives as one record.
3. Use flatMap to split each line into individual words.
4. Use mapToPair to turn each word into a key-value pair, giving every word an initial count of 1.
5. Use reduceByKey to merge the counts of identical words.
6. Print the results to the console.
7. Save the counts to HDFS.
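Steps 3 through 5 are the heart of the job. Stripped of the streaming machinery, the same split → (word, 1) → reduce-by-key logic can be sketched with plain Java collections (this is an illustrative sketch, not part of the Spark program; the class name WordCountCore is made up for the example):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCountCore {

    // Count the words in one batch of lines:
    // split each line (step 3), pair each word with 1 (step 4),
    // and sum the counts per word (step 5).
    static Map<String, Integer> countWords(List<String> lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {      // step 3: flatMap
                counts.merge(word, 1, Integer::sum);   // steps 4-5: (word, 1) + reduceByKey
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                countWords(Arrays.asList("hello spark", "hello hdfs"));
        System.out.println(counts.get("hello")); // prints 2
    }
}
```

In the streaming job, Spark applies exactly this kind of reduction to every 10-second batch of lines arriving on the socket.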

Part 2: The code

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class wordcount02 {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local[4]")
                .setAppName("wordCountSparkStream")
                .set("spark.testing.memory", "2147480000");

        // Step 1: create the streaming context with a 10-second batch interval
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));
        System.out.println("JavaStreamingContext created: " + jssc);

        // Step 2: listen on the VM's port 9999; each line is one record
        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("192.168.61.128", 9999);

        // Step 3: split each line into words
        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String line) throws Exception {
                return Arrays.asList(line.split(" "));
            }
        });

        // Step 4: map each word to a (word, 1) pair
        JavaPairDStream<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String word) throws Exception {
                return new Tuple2<String, Integer>(word, 1);
            }
        });

        // Step 5: sum the counts for each word
        JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) throws Exception {
                return a + b;
            }
        });

        // Step 6: print each batch's counts to the console
        wordCounts.print();

        // Step 7: save each batch to HDFS
        wordCounts.dstream().saveAsTextFiles("hdfs://192.168.61.128:9000/sparkStream001/wordCount/", "spark");

        jssc.start();            // start the computation
        jssc.awaitTermination(); // wait for it to terminate
    }
}

Note that saveAsTextFiles writes a separate output directory for every batch, named with the given prefix, the batch timestamp, and the given suffix, so a new directory appears under /sparkStream001/ every 10 seconds.