Spark MLlib FPGrowth


MLlib’s FP-growth implementation takes the following (hyper-)parameters:

  • minSupport: the minimum support for an itemset to be identified as frequent. For example, if an item appears in 3 out of 5 transactions, it has a support of 3/5 = 0.6.
  • numPartitions: the number of partitions used to distribute the work.
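Because minSupport is given as a fraction, it has to be converted into an absolute occurrence count before itemsets can be filtered. A minimal plain-Scala sketch of that conversion (no Spark required; the threshold formula is an assumption about the internal behavior, stated here only to make the parameter concrete):

```scala
// minSupport is relative; an itemset counts as frequent when its
// absolute count reaches ceil(minSupport * numTransactions).
val minSupport = 0.5
val numTransactions = 6L  // the sample file used later has 6 lines
val minCount = math.ceil(minSupport * numTransactions).toLong
println(minCount)         // 3: an itemset must occur in at least 3 transactions
```

With the example's setMinSupport(0.5) and six transactions, only itemsets occurring three or more times are reported.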

The official Spark MLlib FPGrowth example fails at runtime.
The error is likely caused by serialization: the Kryo serializer Spark uses is faster than JavaSerializer, but on Spark 1.4 it triggers an error with this example. The fix is to switch to JavaSerializer, either in spark-defaults.conf or directly in the code.

Adding the line below solves the problem:
val conf = new SparkConf().setAppName("SimpleFPGrowth").set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
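Alternatively, the serializer can be set cluster-wide rather than in code. A sketch of the corresponding spark-defaults.conf entry (assuming the standard conf/ directory layout of a Spark installation):

```
# conf/spark-defaults.conf
spark.serializer    org.apache.spark.serializer.JavaSerializer
```

With this in place, the .set("spark.serializer", ...) call in the code is no longer needed.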



import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.fpm._
import org.apache.spark.rdd.RDD
// $example off$

object FPGrowth {
  def main(args: Array[String]) {
    // Suppress unnecessary log output in the terminal
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

    // Set up the runtime environment
    val conf = new SparkConf().setAppName("SimpleFPGrowth").set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
    val sc = new SparkContext(conf)

    // $example on$
    val data = sc.textFile("xrli/sample_fpgrowth.txt")

    val transactions: RDD[Array[String]] = data.map(s => s.trim.split(' '))

    val fpg = new FPGrowth()
      .setMinSupport(0.5)
      .setNumPartitions(10)
    val model = fpg.run(transactions)

    model.freqItemsets.collect().foreach { itemset =>
      println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
    }

    val minConfidence = 0.8
    model.generateAssociationRules(minConfidence).collect().foreach { rule =>
      println(
        rule.antecedent.mkString("[", ",", "]")
          + " => " + rule.consequent.mkString("[", ",", "]")
          + ", " + rule.confidence)
    }
    // $example off$
  }
}
// scalastyle:on println





//sample_fpgrowth.txt

//r z h k p
//z y x w v u t s
//s x o n r
//x z y m t s q e
//z
//x z y r q t p

Each line of the dataset is one transaction. Take z as an example: it appears in 5 of the 6 transactions, so its support is 5/6, and itemset.freq prints the frequency 5.
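The support of z can be checked in plain Scala against the six sample transactions above (a standalone sketch, no Spark needed):

```scala
// The six sample transactions from sample_fpgrowth.txt
val transactions = Seq(
  "r z h k p", "z y x w v u t s", "s x o n r",
  "x z y m t s q e", "z", "x z y r q t p"
).map(_.split(' ').toSet)

// z occurs in 5 of the 6 transactions
val freqZ = transactions.count(_.contains("z"))
println(freqZ)                               // 5, matching itemset.freq
println(freqZ.toDouble / transactions.size)  // support = 5/6 ≈ 0.833
```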

model.generateAssociationRules(minConfidence).collect().foreach
This generates the association rules. If the dataset is large, it is recommended not to collect() everything to the driver; avoiding that improves running speed.

rule.antecedent — the antecedent (the "if" part of the rule)
rule.consequent — the consequent (the "then" part)
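How rule.confidence relates to the antecedent and consequent can be sketched in plain Scala: confidence(X => Y) = freq(X ∪ Y) / freq(X). The freq helper below is illustrative, not MLlib API; the data is the sample file above:

```scala
val transactions: Seq[Set[String]] = Seq(
  Set("r", "z", "h", "k", "p"),
  Set("z", "y", "x", "w", "v", "u", "t", "s"),
  Set("s", "x", "o", "n", "r"),
  Set("x", "z", "y", "m", "t", "s", "q", "e"),
  Set("z"),
  Set("x", "z", "y", "r", "q", "t", "p")
)

// Count the transactions containing every item of the given itemset
def freq(items: Set[String]): Int =
  transactions.count(t => items.subsetOf(t))

// Rule [t] => [z]: t and z co-occur in 3 transactions, t alone in 3,
// so the rule's confidence is 3/3 = 1.0 and it passes minConfidence = 0.8
val confidenceTZ = freq(Set("t", "z")).toDouble / freq(Set("t"))
println(confidenceTZ)  // 1.0
```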

