Testing and Releasing Spark Applications


1. Testing: run locally by setting the master to local

def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("NewsTopNByDayAndHour").setMaster("local[3]")
    conf.set("spark.streaming.blockInterval", "50ms")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(25))
    // Remember 36 hours of data
    ssc.remember(Minutes(2160))
    val sqlContext = new HiveContext(sc)
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.ERROR)

    // 1. Register UDFs
    val udf = UDFUtils()
    udf.registerUdf(sqlContext)

    // 2. Kafka data processing
    val kafkaService = KakfaService()
    val urlClickLogPairsDStream = kafkaService.kafkaDStream(ssc)

    // 3. Cache data from Hive
    val cacheUtils = CacheUtils()
    cacheUtils.cacheHiveData(sqlContext)

    // 4. Windowed aggregation (24-hour window, 250-second slide)
    val urlClickCountDaysDStream = urlClickLogPairsDStream.reduceByKeyAndWindow(
      (v1: Int, v2: Int) => {
        v1 + v2
      },
      Seconds(3600 * 24),
      Seconds(250))

    // 5. Business logic for the 24-hour window
    urlClickCntDay(urlClickCountDaysDStream, sqlContext)

    // Second consumption: windowed aggregation (1-hour window, 175-second slide)
    val urlClickCountsHourDStream = urlClickLogPairsDStream.reduceByKeyAndWindow(
      (v1: Int, v2: Int) => {
        v1 + v2
      },
      Minutes(60),
      Seconds(175))

    // Second consumption: business logic for the 1-hour window
    urlClickCntHour(urlClickCountsHourDStream, sqlContext)

    // 6. Start the streaming job
    ssc.start()
    ssc.awaitTermination()
  }
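The snippet above omits its imports. A minimal set for Spark 1.x would look roughly like the following; UDFUtils, KakfaService, and CacheUtils are project-specific helpers, so their imports (assumed to live in the application's own packages) are not shown.

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}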


Here local[3] means the application runs locally with 3 threads.

Submission script:

#!/bin/bash
source /etc/profile
nohup /opt/modules/spark/bin/spark-submit \
--conf "spark.executor.extraJavaOptions=-XX:PermSize=8m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
--driver-memory 3g \
--executor-memory 3g \
--total-executor-cores 32 \
--conf spark.ui.port=5666 \
--jars /opt/bin/sparkJars/kafka_2.10-0.8.2.1.jar,/opt/bin/sparkJars/spark-streaming-kafka_2.10-1.4.1.jar,/opt/bin/sparkJars/metrics-core-2.2.0.jar,/opt/bin/sparkJars/mysql-connector-java-5.1.26-bin.jar \
--class com.hexun.streaming.StockerRealRank \
StockerRealRank.jar \
> stocker.log 2>&1 &


2. Releasing to production: run on the cluster

Change this line in the code:

val conf = new SparkConf().setAppName("NewsTopNByDayAndHour").setMaster("local[3]")

to:

val conf = new SparkConf().setAppName("NewsTopNByDayAndHour")
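If you would rather keep a single code path for both environments, a common pattern is to hard-code a master only when none has been supplied, so that spark-submit's --master takes effect in production while IDE runs still default to local mode. This is a minimal sketch, not something from the original post:

// Fall back to local[3] only when no master was supplied (e.g. when
// launched from the IDE); spark-submit --master ... sets spark.master
// before this code runs, so production is unaffected.
val conf = new SparkConf().setAppName("NewsTopNByDayAndHour")
if (!conf.contains("spark.master")) {
  conf.setMaster("local[3]")
}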

Submission script:

#!/bin/bash
source /etc/profile
nohup /opt/modules/spark/bin/spark-submit \
--master spark://10.130.2.20:7077 \
--conf "spark.executor.extraJavaOptions=-XX:PermSize=8m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
--driver-memory 3g \
--executor-memory 3g \
--num-executors 3 \
--total-executor-cores 32 \
--conf spark.ui.port=5666 \
--jars /opt/bin/sparkJars/kafka_2.10-0.8.2.1.jar,/opt/bin/sparkJars/spark-streaming-kafka_2.10-1.4.1.jar,/opt/bin/sparkJars/metrics-core-2.2.0.jar,/opt/bin/sparkJars/mysql-connector-java-5.1.26-bin.jar \
--class com.hexun.streaming.StockerRealRank \
StockerRealRank.jar \
> stocker.log 2>&1 &

Compared with the local-mode script, the only additions are the master URL and the number of executors.





3. Typical questions and answers
1) Choice of development language and tools
Scala + Eclipse
In practice, IntelliJ IDEA has more complete Scala support, so switching to IDEA is recommended going forward.

2) The remember (cache) duration must be set according to the window size

// Remember 2 days of data
ssc.remember(Minutes(60 * 48))
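The rule of thumb here is that the remember duration should be at least as long as the largest window the job computes, otherwise the underlying RDDs may be cleaned up before they are queried. A rough sketch; the extra hour of slack is an assumption, not from the original post:

// Keep the data slightly longer than the largest window (2 days here).
// Duration values support arithmetic, so the slack can be added directly.
val largestWindow = Minutes(60 * 48)
ssc.remember(largestWindow + Minutes(60))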

3) Kafka data can be consumed multiple times; each reduceByKeyAndWindow corresponds to one consumption
// First consumption
val urlClickCountsDStream = urlClickLogPairsDStream.reduceByKeyAndWindow(
  (v1: Int, v2: Int) => {
    v1 + v2
  },
  Minutes(60 * 2),
  Seconds(25))

// Second consumption of the Kafka data
val urlClickCountsDStreamByDay = urlClickLogPairsDStream.reduceByKeyAndWindow(
  (v1: Int, v2: Int) => {
    v1 + v2
  },
  Minutes(60 * 48),
  Seconds(35))
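When picking these numbers, note a general Spark Streaming constraint (not spelled out in the original post): both the window duration and the slide duration must be integer multiples of the batch interval, otherwise the job fails at startup. A small sketch with a hypothetical 5-second batch interval:

import org.apache.spark.streaming.{Minutes, Seconds}

// With a 5-second batch, a 48-hour window and a 35-second slide are both
// valid because each is an exact multiple of the batch interval.
val batch  = Seconds(5)
val window = Minutes(60 * 48)
val slide  = Seconds(35)
require(window.milliseconds % batch.milliseconds == 0, "window must be a multiple of the batch interval")
require(slide.milliseconds  % batch.milliseconds == 0, "slide must be a multiple of the batch interval")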



