Trying a GraphX example on Windows


1. Install Java 8 and configure its environment variables;

2. Install Scala and configure its environment variables;

3. Install Spark and configure its environment variables;
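
A quick sanity check after steps 1-3 (a sketch; the exact version output will vary with your installs, and it assumes the respective bin directories are on PATH) is to run the version commands from a fresh command prompt:

java -version
scala -version
spark-shell --version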

4. Install IntelliJ IDEA, along with its Scala plugin;

5. Download Hadoop's winutils.exe (https://github.com/srccodes/hadoop-common-2.2.0-bin) and put it somewhere you'll remember;
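
One gotcha worth flagging: Hadoop looks for winutils.exe under a bin subdirectory of whatever hadoop.home.dir points to, so unpack it as, e.g., D:\hadoop-common-2.2.0-bin-master\bin\winutils.exe (this path just mirrors the one used in the program below; yours may differ). The property can be set in code before the SparkContext is created:

// hadoop.home.dir must be the folder that CONTAINS bin\winutils.exe, not bin itself
System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.2.0-bin-master")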

6. Create an sbt project and add the following line to build.sbt:

libraryDependencies += "org.apache.spark" % "spark-graphx_2.11" % "2.2.0"
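
Since the artifact name carries the _2.11 suffix, the project's scalaVersion has to be a 2.11.x release, or nothing will resolve against your compiled code. A minimal build.sbt sketch (project name and version are placeholders):

name := "graphx-test"      // placeholder
version := "0.1"           // placeholder
scalaVersion := "2.11.8"   // must match the _2.11 suffix of spark-graphx_2.11

libraryDependencies += "org.apache.spark" % "spark-graphx_2.11" % "2.2.0"

spark-graphx pulls in spark-core transitively, so no extra Spark dependency is needed for this example.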
7. Create an object with a main method, as follows:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

object GraphXTest1 {
  def main(args: Array[String]): Unit = {
    println("Hello World, this is graphx")

    // Point Hadoop at the folder containing bin\winutils.exe (see step 5)
    System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.2.0-bin-master")

    val conf = new SparkConf()
    val sc = new SparkContext("local", "GraphXTest", conf)

    /**
      * vertices.txt looks like:
      * 1,zyf,111
      * 2,yfz,222
      * 3,fzy,333
      */
    val vertexLines: RDD[String] = sc.textFile("file:///D:/graphxTest/vertices.txt")
    val v: RDD[(VertexId, (String, Long))] = vertexLines.map { line =>
      val cols = line.split(",")
      (cols(0).toLong, (cols(1), cols(2).toLong))
    }
    v.collect.foreach(println(_))

    /**
      * edges.txt looks like:
      * 1,2,100,2017-11-11
      * 2,3,200,2017-12-12
      * 3,1,300,2017-10-10
      */
    val format = new java.text.SimpleDateFormat("yyyy-MM-dd")
    val edgeLines: RDD[String] = sc.textFile("file:///D:/graphxTest/edges.txt")
    val e: RDD[Edge[(Long, java.util.Date)]] = edgeLines.map { line =>
      val cols = line.split(",")
      Edge(cols(0).toLong, cols(1).toLong, (cols(2).toLong, format.parse(cols(3))))
    }
    e.collect().foreach(println(_))
  }
}
Turns out it's really just an example of reading files into RDDs... oh well, I can't spin this story out any further.
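
Since the program stops right before the interesting part, here is a minimal continuation sketch, assuming the v and e RDDs built in the code above, that actually assembles a GraphX property graph and queries it (all standard GraphX API):

// Assemble the property graph from the vertex and edge RDDs built above
val graph: Graph[(String, Long), (Long, java.util.Date)] = Graph(v, e)

// Basic structural queries
println(s"vertices: ${graph.numVertices}, edges: ${graph.numEdges}")

// Each triplet pairs an edge with its source and destination vertex attributes
graph.triplets.collect.foreach { t =>
  println(s"${t.srcAttr._1} -> ${t.dstAttr._1} (amount: ${t.attr._1})")
}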


