Spark: updating state with updateStateByKey


I've only recently started working with big data, and I ran into a few problems while developing with Spark, so I'm writing them down as notes. The full code is below (an example found online).

About updateStateByKey:
1. Key point: the data in each batch of the DStream is first reduced by key, and the per-key results are then accumulated across batches.
2. The updateFunc argument to updateStateByKey is a function: Seq[V] holds all of the current key's values in this batch, Option[S] is the key's historical state, and the return value is the new state.
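For instance, if the word "spark" shows up three times in one batch and its saved state is 5, the function is called with Seq(1, 1, 1) and Some(5) and should return Some(8). A minimal standalone sketch of such an update function (the values are made up, just for illustration):

val updateFunc: (Seq[Int], Option[Int]) => Option[Int] =
  (currValues, preValue) => Some(currValues.sum + preValue.getOrElse(0))

// Simulating two consecutive batches for one key:
val afterBatch1 = updateFunc(Seq(1, 1, 1), None)     // Some(3)
val afterBatch2 = updateFunc(Seq(1, 1), afterBatch1) // Some(5)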

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object UpdateStateByKeyDemo {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("UpdateStateByKeyDemo")
    val ssc = new StreamingContext(conf, Seconds(20))
    // updateStateByKey requires a checkpoint directory to be set.
    ssc.checkpoint("/checkpoint/")
    val socketLines = ssc.socketTextStream("localhost", 9999)
    socketLines.flatMap(_.split(","))
      .map(word => (word, 1))
      .updateStateByKey((currValues: Seq[Int], preValue: Option[Int]) => {
        // Sum the values for this key in the current batch.
        var currValueSum = 0
        for (currValue <- currValues) {
          currValueSum += currValue
        }
        // The loop is equivalent to: val currValueSum = currValues.sum;
        // it is written out to make the mechanics more obvious.
        // The Int types here can be replaced with any other type.
        Some(currValueSum + preValue.getOrElse(0)) // batch sum plus historical state
      })
      .print()
    ssc.start()
    ssc.awaitTermination()
  }
}
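To try it out, start a socket source with netcat before submitting the job (port 9999 matches the code above):

nc -lk 9999

Then type comma-separated words into the netcat session. Every 20-second batch prints the accumulated count per word, so entering spark,spark in one batch and spark in the next should print (spark,2) followed by (spark,3).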

Source code:

/**
 * Return a new "state" DStream where the state for each key is updated by applying
 * the given function on the previous state of the key and the new values of each key.
 * Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
 * @param updateFunc State update function. If `this` function returns None, then
 *                   corresponding state key-value pair will be eliminated.
 * @tparam S State type
 */
def updateStateByKey[S: ClassTag](
    updateFunc: (Seq[V], Option[S]) => Option[S]
  ): DStream[(K, S)] = ssc.withScope {
  updateStateByKey(updateFunc, defaultPartitioner())
}

This ultimately calls the following:

/**
 * Return a new "state" DStream where the state for each key is updated by applying
 * the given function on the previous state of the key and the new values of the key.
 * org.apache.spark.Partitioner is used to control the partitioning of each RDD.
 * @param updateFunc State update function. If `this` function returns None, then
 *                   corresponding state key-value pair will be eliminated.
 * @param partitioner Partitioner for controlling the partitioning of each RDD in the new
 *                    DStream.
 * @tparam S State type
 */
def updateStateByKey[S: ClassTag](
    updateFunc: (Seq[V], Option[S]) => Option[S],
    partitioner: Partitioner
  ): DStream[(K, S)] = ssc.withScope {
  val cleanedUpdateF = sparkContext.clean(updateFunc)
  val newUpdateFunc = (iterator: Iterator[(K, Seq[V], Option[S])]) => {
    iterator.flatMap(t => cleanedUpdateF(t._2, t._3).map(s => (t._1, s)))
  }
  updateStateByKey(newUpdateFunc, partitioner, true)
}
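Note the flatMap over cleanedUpdateF: the per-key update function is lifted into a function over an iterator of (key, new values, previous state) triples for each partition, and any key whose function returns None is simply dropped, which is how state elimination works. A rough standalone illustration of that lifting (the names and data here are mine, not Spark's):

val perKey: (Seq[Int], Option[Int]) => Option[Int] = (vs, s) => {
  val n = vs.sum + s.getOrElse(0)
  if (n == 0) None else Some(n) // returning None removes the key's state
}

val perPartition = (it: Iterator[(String, Seq[Int], Option[Int])]) =>
  it.flatMap { case (k, vs, s) => perKey(vs, s).map(v => (k, v)) }

perPartition(Iterator(("a", Seq(1, 2), Some(3)), ("b", Seq.empty[Int], Some(0)))).toList
// List((a,6)) -- "b" is eliminated because perKey returned None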

where defaultPartitioner() is:

private[streaming] def defaultPartitioner(
    numPartitions: Int = self.ssc.sc.defaultParallelism) = {
  new HashPartitioner(numPartitions)
}
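So by default the state RDDs are hash-partitioned into sc.defaultParallelism partitions. To control this yourself, call the overload that takes a Partitioner directly; a sketch (the partition count of 100 is arbitrary):

socketLines.flatMap(_.split(",")).map(word => (word, 1))
  .updateStateByKey(
    (currValues: Seq[Int], preValue: Option[Int]) =>
      Some(currValues.sum + preValue.getOrElse(0)),
    new org.apache.spark.HashPartitioner(100))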

The Spark version in my current project is 1.5. mapWithState, introduced in 1.6, reportedly performs up to 10x better than updateStateByKey; I'll look into it when I get the chance.
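For reference, a minimal sketch of the same word count using the 1.6 mapWithState API (I haven't run this against 1.6; treat it as a sketch based on the documented signature):

import org.apache.spark.streaming.{State, StateSpec}

// Unlike updateStateByKey, the mapping function is called once per record,
// receiving a single Option value rather than a Seq of the batch's values.
val mappingFunc = (word: String, one: Option[Int], state: State[Int]) => {
  val sum = one.getOrElse(0) + state.getOption.getOrElse(0)
  state.update(sum)
  (word, sum)
}

socketLines.flatMap(_.split(","))
  .map(word => (word, 1))
  .mapWithState(StateSpec.function(mappingFunc))
  .print()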
