Spark WordCount


Accessing local data

Create a file under /usr/local/spark/mycode/:
vi /usr/local/spark/mycode/word.txt
Enter some words to serve as the data to count (the commands below read word.txt, so keep that name).

Launch spark-shell

val textFile = sc.textFile("file:///usr/local/spark/mycode/word.txt")textFile.first(); //显示文本的第一行textFile.saveAsTextFile("file:///usr/local/spark/mycode/wordcopy") //把文件数据写到wordcopy目录下, 目录下会有part-00000为数据内容

Accessing data on HDFS

Upload word.txt to HDFS.
Create a directory:

hadoop fs -mkdir /input

Upload the file:
hadoop fs -put word.txt /input

Launch spark-shell

val textFile = sc.textFile("hdfs://localhost:9000/input/word.txt")
The path can also be shortened, since a path without a scheme resolves against fs.defaultFS from Hadoop's core-site.xml:
val textFile = sc.textFile("/input/word.txt")
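A quick sanity check that the HDFS file loaded correctly, run in the same shell (a sketch reusing the RDD methods shown earlier):

textFile.first()  // first line read from HDFS
textFile.count()  // number of lines read from HDFS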

Word count

There are two ways to run the word count: interactively in spark-shell, or as a standalone program via spark-submit.

Via spark-shell

val wordCount = textFile.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey((a,b) => a+b)

wordCount.collect()

Output:

Array[(String, Int)] = Array((python,1), (hello,3), (java,1), (spark,1))
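To see what each stage of the chain contributes, you can run it one transformation at a time (same shell session; a sketch, the intermediate names are just for illustration):

val words = textFile.flatMap(line => line.split(" "))  // one element per word
val pairs = words.map(word => (word, 1))               // pair each word with a count of 1
val counts = pairs.reduceByKey((a, b) => a + b)        // sum the counts per word
counts.collect()                                       // same Array[(String, Int)] as above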

Via spark-submit

Create the word file:
mkdir -p /usr/local/spark/mycode
vi /usr/local/spark/mycode/word.txt

hello python
hello java
hello spark

Create the multi-level directory tree for the Scala project; wordcount here acts as the project root:
mkdir -p /usr/local/spark/mycode/wordcount/src/main/scala

Write test.scala in the scala directory:
vi /usr/local/spark/mycode/wordcount/src/main/scala/test.scala
Contents:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]) {
    val inputFile = "file:///usr/local/spark/mycode/word.txt"
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val textFile = sc.textFile(inputFile)
    val wordCount = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
    wordCount.foreach(println)
  }
}
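If you also want the job to persist its result and shut down cleanly, two extra lines at the end of main do it (a sketch; the output path is hypothetical, and the target directory must not already exist):

wordCount.saveAsTextFile("file:///usr/local/spark/mycode/wcoutput")  // hypothetical output directory
sc.stop()  // release the SparkContext when the job is done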

Create simple.sbt in the project root wordcount.
Contents:

name := "Simple Project"version := "1.0"scalaVersion := "2.11.8"libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
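Note that scalaVersion must match the Scala version your Spark build was compiled against. You can check it from inside spark-shell (a sketch; versionString comes from the Scala standard library):

scala.util.Properties.versionString  // e.g. "version 2.11.8"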

Package the project by running this command from the project root wordcount:
/usr/local/sbt/sbt package

When it finishes, you should see output like:

[info] Done packaging.
[success] Total time: 101 s, completed Nov 2, 2017 9:21:56 AM

Go into the target directory to find the generated jar (simple-project_2.11-1.0.jar):
cd /usr/local/spark/mycode/wordcount/target/scala-2.11

Then submit the job with spark-submit:
/usr/local/spark/bin/spark-submit --class "WordCount" /usr/local/spark/mycode/wordcount/target/scala-2.11/simple-project_2.11-1.0.jar

Among the (fairly verbose) log output you should find the result:

(spark,1)
(python,1)
(hello,3)
(java,1)