Spark HelloWorld development in IntelliJ IDEA

  1. Development environment
    IntelliJ IDEA 14
    JDK: 1.7.71
    Spark: 1.1.0
    Hadoop: 2.4.0
    Scala: 2.11.1
    Maven: 3.2.5
  2. Create a Maven project
    Under src, create a main/java source directory (in File -> Project Structure... -> Modules -> Sources, right-click to add the directory and mark it as a source root)

In File -> Project Structure... -> Libraries, add the spark-assembly-1.1.0-hadoop2.4.0 dependency jar
  3. Write the WordCount example program under the java directory

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class JavaWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
        String srcPath = null;
        String desPath = "/apps/ca/yanh/output";
        if (args.length == 1) {
            srcPath = args[0];
        } else if (args.length == 2) {
            srcPath = args[0];
            desPath = args[1];
        } else {
            System.out.println("Usage: java -jar jarName <src> [des]");
            System.exit(1);
        }

        JavaSparkContext jsc = new JavaSparkContext(sparkConf);
        JavaRDD<String> lines = jsc.textFile(srcPath, 1);

        System.out.println("Begin to split!");
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String s) throws Exception {
                // Split each input line on spaces (note: splitting the line s, not the literal " ")
                return Arrays.asList(SPACE.split(s));
            }
        });

        System.out.println("Begin to map!");
        JavaPairRDD<String, Integer> ones = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) throws Exception {
                // Emit (word, 1) for every word
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        System.out.println("Begin to reduce!");
        JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) throws Exception {
                // Sum the counts for each word
                return i1 + i2;
            }
        });

        System.out.println("Begin to save!");
        /*
        List<Tuple2<String, Integer>> output = counts.collect();
        for (Tuple2<?, ?> tuple : output) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        */
        counts.saveAsTextFile(desPath);
        jsc.stop();
    }
}
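To sanity-check the job inside IDEA before packaging, it can also be run with a local master instead of being submitted to the cluster. The following is only a minimal sketch under assumptions not in the original steps: the local[*] master and the README.md input path are examples, not part of the tutorial's setup.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalSmokeTest {
    public static void main(String[] args) {
        // local[*] runs Spark inside this JVM with all available cores,
        // so no cluster or spark-submit is needed for a quick check.
        SparkConf conf = new SparkConf().setAppName("LocalSmokeTest").setMaster("local[*]");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // "README.md" is a hypothetical local input file used only for this check.
        JavaRDD<String> lines = jsc.textFile("README.md", 1);
        System.out.println("Line count: " + lines.count());

        jsc.stop();
    }
}

Remove the setMaster call (or override it with --master when using spark-submit) before packaging the jar for the cluster.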
  4. Package into a jar
    In File -> Project Structure... -> Artifacts, click the green "+", then Add -> JAR -> From Modules with Dependencies

Enter the name of the main class entry point, delete all the jars under Output Layout (the Spark runtime already provides these packages), then click Apply


Build the program: Build -> Build Artifacts..., then choose the artifact to build


The resulting jar can be found in the out directory generated under the current project

  5. Run the program
    Upload the jar to the Spark cluster and submit it with spark-submit (check the spark-submit documentation for the full set of options)
    Submit command: spark-submit --class JavaWordCount ~/JavaWordCount.jar /apps/ca/yanh/data/README.md


An error may appear at this point; it is caused by the missing native library dependency and the compression (LZO) jar.
The package is provided here: http://pan.baidu.com/s/1rqkQa
spark-submit command: spark-submit --driver-library-path /usr/lib/hadoop/lib/native/ --jars /usr/lib/hadoop/lib/hadoop-lzo-0.6.0.jar --class JavaWordCount ~/JavaWordCount.jar /apps/ca/yanh/data/README.md
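To inspect the result without opening the part files by hand, the output directory can be read back with a short Spark job. This is only a sketch under assumptions: the local[*] master and the output path are examples (point it at wherever saveAsTextFile actually wrote, and make sure that path is reachable from where this runs). Each output line is the Tuple2's string form, e.g. (word,1).

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.List;

public class PrintWordCountOutput {
    public static void main(String[] args) {
        // local[*] is an assumption; adjust the master and path to your environment.
        SparkConf conf = new SparkConf().setAppName("PrintWordCountOutput").setMaster("local[*]");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // Each line has the form "(word,count)" because saveAsTextFile writes Tuple2.toString().
        JavaRDD<String> lines = jsc.textFile("/apps/ca/yanh/output");
        List<String> sample = lines.take(10);
        for (String line : sample) {
            System.out.println(line);
        }

        jsc.stop();
    }
}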

From yhao2014
