【Data Algorithms: Recipes for Scaling Up with Hadoop and Spark】Chapter 3: Top 10 Non-Unique List
Scala version:
package com.bbw5.dataalgorithms.spark

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

/**
 * This object implements the Top-N design pattern for N > 0.
 * The main assumption is that for all input (K, V) pairs, the K's
 * are non-unique, i.e. the input may contain entries like
 * (A, 2), ..., (A, 5), ...
 *
 * This is a general top-N algorithm which works for both unique
 * and non-unique keys.
 *
 * It may also be used to find the bottom-N, by keeping the
 * N smallest elements instead.
 *
 * Top-10 Design Pattern: "Top Ten" structure
 *
 * 1. map(input) => (K, V)
 *
 * 2. reduce(K, List<V1, V2, ..., Vn>) => (K, V),
 *    where V = V1 + V2 + ... + Vn;
 *    after this step all K's are unique
 *
 * 3. Find the Top-N using the following high-level Spark API:
 *    java.util.List<T> takeOrdered(int N, java.util.Comparator<T> comp)
 *    (in Scala: takeOrdered(n)(ordering)), which returns the first N
 *    elements of this RDD as defined by the ordering and maintains
 *    that order.
 *
 * Sample input (cat id,cat name,cat weight):
 * 1,cat1,13
 * 2,cat2,10
 * 3,cat3,14
 * 4,cat4,13
 * 5,cat5,20
 * 6,cat6,24
 * 7,cat7,13
 * 8,cat8,10
 * 9,cat9,24
 * 10,cat10,13
 * 11,cat11,30
 * 12,cat12,14
 *
 * @author babaiw5
 */
object SparkTop10UsingTakeOrdered {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("SparkTop10UsingTakeOrdered")
    val sc = new SparkContext(sparkConf)

    val filename = "G:/temp/data/top1.txt"
    val textFile = sc.textFile(filename)
    val topN = 5

    // Parse each line into ((id, name), weight), sum the weights per key,
    // then take the topN entries with the largest sums. Negating the weight
    // in the ordering makes takeOrdered return the largest values first.
    // Note: takeOrdered returns an Array on the driver, not an RDD.
    val topEntries = textFile.map(_.split(","))
      .map(d => ((d(0), d(1)), d(2).toInt))
      .reduceByKey(_ + _)
      .takeOrdered(topN)(Ordering.by[((String, String), Int), Int](-_._2))

    topEntries.foreach(println)
  }
}
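With the sample data above, each (id, name) key appears only once, so the summed weights equal the input weights, and the run prints the five heaviest cats: cat11 (30), cat6 and cat9 (24 each), cat5 (20), and one of cat3/cat12 (14); the ordering compares only the summed weight, so ties are broken arbitrarily. To test locally rather than on a cluster, add .setMaster("local[*]") to the SparkConf. Taking the negated value with takeOrdered(n) is equivalent to calling Spark's top(n) with the natural ascending ordering on the weight.

The class comment notes that the same pipeline can produce a bottom-N. A minimal sketch of that variant, reusing the textFile and topN values defined above: only the ordering changes, since takeOrdered keeps the N smallest elements under the given ordering.

    // Bottom-N sketch: ascending order on the summed weight keeps the
    // topN smallest entries instead of the largest.
    val bottomEntries = textFile.map(_.split(","))
      .map(d => ((d(0), d(1)), d(2).toInt))
      .reduceByKey(_ + _)
      .takeOrdered(topN)(Ordering.by[((String, String), Int), Int](_._2))

    bottomEntries.foreach(println)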