Hadoop Application Development: Building a Recommendation System with MapReduce

1. Recommendation Systems Overview
E-commerce is one of the most important application areas for personalized recommendation. Amazon has been an active adopter and promoter of recommender systems: its recommendations reach into every product category on the site and are reported to drive at least 30% of Amazon's sales.
Recommendation is not limited to e-commerce; it is everywhere: friend suggestions on QQ and Renren, "people you may be interested in" on Sina Weibo, movie recommendations on Youku and Tudou, book recommendations on Douban, restaurant recommendations on Dianping, match recommendations on Jiayuan, and job recommendations on Tianji.

2. Categories of Recommendation Algorithms
By the data they use:
– Collaborative filtering: UserCF, ItemCF, ModelCF
– Content-based recommendation: user content attributes and item content attributes
– Social filtering: based on the user's social network
By model:
– Nearest-neighbor models: distance-based collaborative filtering
– Latent Factor Model (SVD): matrix-factorization models
– Graph models: social network graph models


2.1 User-Based Collaborative Filtering (UserCF)
User-based collaborative filtering measures the similarity between users from the ratings they give to items, and makes recommendations based on that user-to-user similarity.
Put simply: recommend to a user the items liked by other users with similar tastes.
The idea behind UserCF is straightforward: from users' preferences for items, find each user's neighbor users, then recommend to the current user the items those neighbors like.

Computationally, each user's preferences over all items are treated as a vector and used to compute user-to-user similarity. After the K nearest neighbors are found, their similarity weights and their item preferences are used to predict scores for the items the current user has not yet rated, and a ranked item list is produced as the recommendation.

The figure in the original article gives an example: for user A, only one neighbor, user C, is found from the historical preferences, so item D, which user C likes, is recommended to user A.
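To make the vector computation concrete, here is a minimal sketch (not part of the original article's code) of the user-to-user similarity step. It assumes each user's preferences are held in a Map<Integer, Double> from item ID to rating and uses cosine similarity, one common choice for the distance measure:

import java.util.Map;

public class UserSimilarity {
    // Cosine similarity between two sparse preference vectors (itemID -> rating).
    public static double cosine(Map<Integer, Double> u, Map<Integer, Double> v) {
        double dot = 0.0, normU = 0.0, normV = 0.0;
        for (Map.Entry<Integer, Double> e : u.entrySet()) {
            normU += e.getValue() * e.getValue();
            Double r = v.get(e.getKey());
            if (r != null) {
                dot += e.getValue() * r;   // only items rated by both users contribute
            }
        }
        for (double r : v.values()) {
            normV += r * r;
        }
        return (normU == 0 || normV == 0) ? 0.0 : dot / (Math.sqrt(normU) * Math.sqrt(normV));
    }
}

Ranking all other users by this score and keeping the top K gives the neighborhood used for prediction.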
2.2 Item-Based Collaborative Filtering (ItemCF)
Item-based collaborative filtering measures the similarity between items from the ratings different users give them, and makes recommendations based on that item-to-item similarity.
Put simply: recommend to a user items similar to the items they have liked before.
ItemCF works on the same principle as UserCF, except that the neighborhood is computed over items rather than users: from users' preferences, find items that are similar to each other, then recommend items similar to the ones in the user's history.

From a computational point of view, all users' preferences for a given item form a vector that is used to compute item-to-item similarity. Once each item's similar items are known, the user's historical preferences are used to predict scores for the items the user has not rated yet, and a ranked item list is produced as the recommendation.

The figure in the original article gives an example: every user who likes item A also likes item C, so items A and C are judged to be similar; since user C likes item A, we can infer that user C probably also likes item C.
2.3 Implementing Item-Based Collaborative Filtering
The implementation has two steps:
1. Compute the similarity between items.
2. Generate a recommendation list for each user from the item similarities and the user's historical behavior.
Note: item-based collaborative filtering is currently the most widely used recommendation algorithm in commercial systems.
3. Case Study
3.1 Background
Consider a movie review website whose main features are movie profiles, movie rankings, user ratings, user reviews, showtimes & ticketing, lists of the movies a user is watching / wants to watch / has watched, and "guess what you like" (recommendations).
After registering, a user can browse the movie profiles, check the rankings, pick favorite genres, mark the movies they want to see as "want to watch", and write reviews and ratings for the movies they have already seen.
Even from this short description it is clear that the site offers personalized movie recommendation, and that movie recommendation will be one of its core features.
Key points:
– The site provides information on all movies to attract users to browse.
– The site collects user behavior, including browsing, rating, and reviewing, and infers each user's tastes from it.
– The site helps users find movies they have not yet seen that match their interests.
– By accumulating massive amounts of data, the site can predict the market impact and box office of upcoming releases.
3.2 Recommendation System Design Criteria
In a real environment, the design has to weigh data volume, algorithm performance, result accuracy, and similar criteria.
Algorithm choice: item-based collaborative filtering (ItemCF), implemented in parallel.
Data volume: built on the Hadoop stack, scaling to GB, TB, and PB datasets.
Evaluation: precision, recall, coverage, popularity, and similar metrics.
Interpreting the results: explain the output in a way consistent with how ItemCF is defined.
Test dataset:
Each line has three fields: user ID, movie ID, and the user's rating of the movie (0 to 5, in steps of 0.5).
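The article shows the dataset in a screenshot that is not reproduced here; purely as an illustration of the three-field format, the lines would look like this (hypothetical user and movie IDs):

1,101,5.0
1,102,3.0
1,103,2.5
2,101,2.0
2,103,2.5

These hypothetical five lines are reused in the worked examples below.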
3.3 Algorithm Model and Approach
1. Build the item co-occurrence matrix.
2. Build the user-item rating matrix.
3. Multiply the matrices to obtain the recommendation results.
3.4 Building the Item Co-occurrence Matrix
Group the data by user, collect the items each user has rated, and count the items singly and in pairs, as illustrated below.
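As a small worked example on the hypothetical data above: user 1 rated items {101, 102, 103} and user 2 rated items {101, 103}. Counting every pair each user contributes, including an item paired with itself (which is what Step2 of the implementation below does), gives this co-occurrence matrix:

        101  102  103
101      2    1    2
102      1    1    1
103      2    1    2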
3.5 Building the User-Item Rating Matrix
Group the data by user and record the items each user has rated together with the ratings.
3.6 Computing Recommendations from the Matrices
Co-occurrence matrix × rating matrix = recommendation results.
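Continuing the hypothetical example: user 2's rating vector over (101, 102, 103) is (2.0, 0, 2.5). Multiplying the co-occurrence matrix by this vector gives a score for every item:

score(101) = 2*2.0 + 1*0 + 2*2.5 = 9.0
score(102) = 1*2.0 + 1*0 + 1*2.5 = 4.5
score(103) = 2*2.0 + 1*0 + 2*2.5 = 9.0

In a finished recommender, items the user has already rated (101 and 103 here) would usually be filtered out before presenting the list; the MapReduce implementation below simply emits all of the scores.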
4. Recommendation System Architecture
In the original architecture diagram, the left side is the application (business) system and the right side is Hadoop: HDFS and MapReduce.
The business system records user behavior and item ratings.
A system timer (cron) incrementally imports the data (userid, itemid, value, time) into HDFS every xx hours.
After the import finishes, another timer launches the MapReduce program that runs the recommendation algorithm.
After the computation finishes, a third timer exports the recommendation results from HDFS into a database so they can be queried quickly later. A sketch of such a schedule follows.
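As a sketch only (the paths, jar name, and schedule are hypothetical), the three timers could be ordinary crontab entries:

# every 4 hours: incremental import into HDFS, then the MapReduce job chain, then export to the DB
0  */4 * * *  /opt/recommend/bin/import_to_hdfs.sh
30 */4 * * *  hadoop jar /opt/recommend/recommend.jar Recommend
55 */4 * * *  /opt/recommend/bin/export_to_db.sh

In practice the three stages would more likely be chained by a workflow scheduler so that each stage starts only after the previous one succeeds.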
5. MapReduce Job Design
Step 1: group by user and build each user's list of item combinations, producing the user-item rating matrix.
Step 2: count the item combinations to build the item co-occurrence matrix.
Step 3: reshape the rating matrix and the co-occurrence matrix so they can be joined.
Step 4: merge the matrices and compute the recommendation list.
6. MapReduce Implementation
Recommend.java: the main driver that launches the job chain.
Step1.java: group by user and build each user's item list, producing the user-item rating matrix.
Step2.java: count the item combinations to build the item co-occurrence matrix.
Step3.java: reshape the co-occurrence matrix and the rating matrix.
Step4.java: merge the matrices and compute the recommendation list.
HdfsDAO.java: a utility class for HDFS operations.
6.1 Recommend.java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

import org.apache.hadoop.mapred.JobConf;

public class Recommend {

    public static final String HDFS = "hdfs://192.168.1.210:9000";
    public static final Pattern DELIMITER = Pattern.compile("[\t,]");

    public static void main(String[] args) throws Exception {
        Map<String, String> path = new HashMap<String, String>();
        path.put("data", "logfile/small.csv");                       // raw input data
        path.put("Step1Input", HDFS + "/user/hdfs/recommend");
        path.put("Step1Output", path.get("Step1Input") + "/step1");
        path.put("Step2Input", path.get("Step1Output"));
        path.put("Step2Output", path.get("Step1Input") + "/step2");
        path.put("Step3Input1", path.get("Step1Output"));
        path.put("Step3Output1", path.get("Step1Input") + "/step3_1");
        path.put("Step3Input2", path.get("Step2Output"));
        path.put("Step3Output2", path.get("Step1Input") + "/step3_2");
        path.put("Step4Input1", path.get("Step3Output1"));
        path.put("Step4Input2", path.get("Step3Output2"));
        path.put("Step4Output", path.get("Step1Input") + "/step4");

        Step1.run(path);
        Step2.run(path);
        Step3.run1(path);
        Step3.run2(path);
        Step4.run(path);
        System.exit(0);
    }

    public static JobConf config() {
        JobConf conf = new JobConf(Recommend.class);
        conf.setJobName("Recommend");
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        conf.set("io.sort.mb", "1024");
        return conf;
    }
}

6.2 Step1.java
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

import org.conan.myhadoop.hdfs.HdfsDAO;

public class Step1 {

    // Mapper: parse "userID,itemID,pref" lines and emit <userID, "itemID:pref">.
    public static class Step1_ToItemPreMapper extends MapReduceBase
            implements Mapper<Object, Text, IntWritable, Text> {
        private final static IntWritable k = new IntWritable();
        private final static Text v = new Text();

        @Override
        public void map(Object key, Text value, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {
            String[] tokens = Recommend.DELIMITER.split(value.toString());
            int userID = Integer.parseInt(tokens[0]);
            String itemID = tokens[1];
            String pref = tokens[2];
            k.set(userID);
            v.set(itemID + ":" + pref);
            output.collect(k, v);
        }
    }

    // Reducer: concatenate all "itemID:pref" pairs of one user into a single user vector.
    public static class Step1_ToUserVectorReducer extends MapReduceBase
            implements Reducer<IntWritable, Text, IntWritable, Text> {
        private final static Text v = new Text();

        @Override
        public void reduce(IntWritable key, Iterator<Text> values, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {
            StringBuilder sb = new StringBuilder();
            while (values.hasNext()) {
                sb.append("," + values.next());
            }
            v.set(sb.toString().replaceFirst(",", ""));
            output.collect(key, v);
        }
    }

    public static void run(Map<String, String> path) throws IOException {
        JobConf conf = Recommend.config();

        String input = path.get("Step1Input");
        String output = path.get("Step1Output");

        HdfsDAO hdfs = new HdfsDAO(Recommend.HDFS, conf);
        // hdfs.rmr(output);
        hdfs.rmr(input);
        hdfs.mkdirs(input);
        hdfs.copyFile(path.get("data"), input);

        conf.setMapOutputKeyClass(IntWritable.class);
        conf.setMapOutputValueClass(Text.class);
        conf.setOutputKeyClass(IntWritable.class);
        conf.setOutputValueClass(Text.class);

        conf.setMapperClass(Step1_ToItemPreMapper.class);
        conf.setCombinerClass(Step1_ToUserVectorReducer.class);
        conf.setReducerClass(Step1_ToUserVectorReducer.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(input));
        FileOutputFormat.setOutputPath(conf, new Path(output));

        RunningJob job = JobClient.runJob(conf);
        while (!job.isComplete()) {
            job.waitForCompletion();
        }
    }
}

Results after running Step1.run(path):
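The original post shows the output as a screenshot. Based on the Step1 reducer, each line holds a user ID followed by that user's comma-separated item:rating pairs; with the hypothetical sample data it would look like this (the order of the pairs is not guaranteed):

1    101:5.0,102:3.0,103:2.5
2    101:2.0,103:2.5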


6.3 Step2.java
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

import org.conan.myhadoop.hdfs.HdfsDAO;

public class Step2 {

    // Mapper: for each user vector, emit every ordered item pair <"itemA:itemB", 1>.
    public static class Step2_UserVectorToCooccurrenceMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static Text k = new Text();
        private final static IntWritable v = new IntWritable(1);

        @Override
        public void map(LongWritable key, Text values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String[] tokens = Recommend.DELIMITER.split(values.toString());
            for (int i = 1; i < tokens.length; i++) {
                String itemID = tokens[i].split(":")[0];
                for (int j = 1; j < tokens.length; j++) {
                    String itemID2 = tokens[j].split(":")[0];
                    k.set(itemID + ":" + itemID2);
                    output.collect(k, v);
                }
            }
        }
    }

    // Reducer: sum the counts of each item pair to build the co-occurrence matrix.
    public static class Step2_UserVectorToConoccurrenceReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            result.set(sum);
            output.collect(key, result);
        }
    }

    public static void run(Map<String, String> path) throws IOException {
        JobConf conf = Recommend.config();

        String input = path.get("Step2Input");
        String output = path.get("Step2Output");

        HdfsDAO hdfs = new HdfsDAO(Recommend.HDFS, conf);
        hdfs.rmr(output);

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Step2_UserVectorToCooccurrenceMapper.class);
        // The combiner/reducer must be set so that the pair counts are summed into the co-occurrence matrix.
        conf.setCombinerClass(Step2_UserVectorToConoccurrenceReducer.class);
        conf.setReducerClass(Step2_UserVectorToConoccurrenceReducer.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(input));
        FileOutputFormat.setOutputPath(conf, new Path(output));

        RunningJob job = JobClient.runJob(conf);
        while (!job.isComplete()) {
            job.waitForCompletion();
        }
    }
}

Results after running Step2.run(path):
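Again for the hypothetical sample data, each output line is an item pair and its co-occurrence count:

101:101    2
101:102    1
101:103    2
102:101    1
102:102    1
102:103    1
103:101    2
103:102    1
103:103    2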


6.4 Step3.java
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

import org.conan.myhadoop.hdfs.HdfsDAO;

public class Step3 {

    // Mapper for run1: invert the user vectors from Step1 into <itemID, "userID:pref"> records.
    public static class Step31_UserVectorSplitterMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, IntWritable, Text> {
        private final static IntWritable k = new IntWritable();
        private final static Text v = new Text();

        @Override
        public void map(LongWritable key, Text values, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {
            String[] tokens = Recommend.DELIMITER.split(values.toString());
            for (int i = 1; i < tokens.length; i++) {
                String[] vector = tokens[i].split(":");
                int itemID = Integer.parseInt(vector[0]);
                String pref = vector[1];
                k.set(itemID);
                v.set(tokens[0] + ":" + pref);
                output.collect(k, v);
            }
        }
    }

    public static void run1(Map<String, String> path) throws IOException {
        JobConf conf = Recommend.config();

        String input = path.get("Step3Input1");
        String output = path.get("Step3Output1");

        HdfsDAO hdfs = new HdfsDAO(Recommend.HDFS, conf);
        hdfs.rmr(output);

        conf.setOutputKeyClass(IntWritable.class);
        conf.setOutputValueClass(Text.class);

        conf.setMapperClass(Step31_UserVectorSplitterMapper.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(input));
        FileOutputFormat.setOutputPath(conf, new Path(output));

        RunningJob job = JobClient.runJob(conf);
        while (!job.isComplete()) {
            job.waitForCompletion();
        }
    }

    // Mapper for run2: re-emit the co-occurrence counts from Step2 as <"itemA:itemB", count>.
    public static class Step32_CooccurrenceColumnWrapperMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static Text k = new Text();
        private final static IntWritable v = new IntWritable();

        @Override
        public void map(LongWritable key, Text values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String[] tokens = Recommend.DELIMITER.split(values.toString());
            k.set(tokens[0]);
            v.set(Integer.parseInt(tokens[1]));
            output.collect(k, v);
        }
    }

    public static void run2(Map<String, String> path) throws IOException {
        JobConf conf = Recommend.config();

        String input = path.get("Step3Input2");
        String output = path.get("Step3Output2");

        HdfsDAO hdfs = new HdfsDAO(Recommend.HDFS, conf);
        hdfs.rmr(output);

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Step32_CooccurrenceColumnWrapperMapper.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(input));
        FileOutputFormat.setOutputPath(conf, new Path(output));

        RunningJob job = JobClient.runJob(conf);
        while (!job.isComplete()) {
            job.waitForCompletion();
        }
    }
}

Results after Step3.run1(path):
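Step3.run1 inverts the user vectors: each output line is an item ID followed by a userID:rating pair. With the hypothetical sample data:

101    1:5.0
101    2:2.0
102    1:3.0
103    1:2.5
103    2:2.5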


Results after Step3.run2(path):
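Step3.run2 only re-reads the co-occurrence counts and re-emits them with an IntWritable value, so its text output keeps the same item-pair/count format as Step2 (for example, 101:102 followed by 1); its purpose is to stage the data under step3_2 so that Step4 can read both matrices from its two inputs.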


6.5 Step4.java
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

import org.conan.myhadoop.hdfs.HdfsDAO;

public class Step4 {

    // Mapper: cache the co-occurrence rows in memory, then for each <itemID, userID:pref>
    // record emit the partial products <userID, "itemID2,pref*count">.
    public static class Step4_PartialMultiplyMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, IntWritable, Text> {
        private final static IntWritable k = new IntWritable();
        private final static Text v = new Text();
        private final static Map<Integer, List<Cooccurrence>> cooccurrenceMatrix = new HashMap<Integer, List<Cooccurrence>>();

        @Override
        public void map(LongWritable key, Text values, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {
            String[] tokens = Recommend.DELIMITER.split(values.toString());

            String[] v1 = tokens[0].split(":");
            String[] v2 = tokens[1].split(":");

            if (v1.length > 1) { // co-occurrence record: "itemID1:itemID2  count"
                int itemID1 = Integer.parseInt(v1[0]);
                int itemID2 = Integer.parseInt(v1[1]);
                int num = Integer.parseInt(tokens[1]);

                List<Cooccurrence> list = null;
                if (!cooccurrenceMatrix.containsKey(itemID1)) {
                    list = new ArrayList<Cooccurrence>();
                } else {
                    list = cooccurrenceMatrix.get(itemID1);
                }
                list.add(new Cooccurrence(itemID1, itemID2, num));
                cooccurrenceMatrix.put(itemID1, list);
            }

            if (v2.length > 1) { // user vector record: "itemID  userID:pref"
                int itemID = Integer.parseInt(tokens[0]);
                int userID = Integer.parseInt(v2[0]);
                double pref = Double.parseDouble(v2[1]);
                k.set(userID);
                for (Cooccurrence co : cooccurrenceMatrix.get(itemID)) {
                    v.set(co.getItemID2() + "," + pref * co.getNum());
                    output.collect(k, v);
                }
            }
        }
    }

    // Reducer: sum the partial products per (user, item) and emit <userID, "itemID,score">.
    public static class Step4_AggregateAndRecommendReducer extends MapReduceBase
            implements Reducer<IntWritable, Text, IntWritable, Text> {
        private final static Text v = new Text();

        @Override
        public void reduce(IntWritable key, Iterator<Text> values, OutputCollector<IntWritable, Text> output, Reporter reporter) throws IOException {
            Map<String, Double> result = new HashMap<String, Double>();
            while (values.hasNext()) {
                String[] str = values.next().toString().split(",");
                if (result.containsKey(str[0])) {
                    result.put(str[0], result.get(str[0]) + Double.parseDouble(str[1]));
                } else {
                    result.put(str[0], Double.parseDouble(str[1]));
                }
            }
            Iterator<String> iter = result.keySet().iterator();
            while (iter.hasNext()) {
                String itemID = iter.next();
                double score = result.get(itemID);
                v.set(itemID + "," + score);
                output.collect(key, v);
            }
        }
    }

    public static void run(Map<String, String> path) throws IOException {
        JobConf conf = Recommend.config();

        String input1 = path.get("Step4Input1");
        String input2 = path.get("Step4Input2");
        String output = path.get("Step4Output");

        HdfsDAO hdfs = new HdfsDAO(Recommend.HDFS, conf);
        hdfs.rmr(output);

        conf.setOutputKeyClass(IntWritable.class);
        conf.setOutputValueClass(Text.class);

        conf.setMapperClass(Step4_PartialMultiplyMapper.class);
        conf.setCombinerClass(Step4_AggregateAndRecommendReducer.class);
        conf.setReducerClass(Step4_AggregateAndRecommendReducer.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(input1), new Path(input2));
        FileOutputFormat.setOutputPath(conf, new Path(output));

        RunningJob job = JobClient.runJob(conf);
        while (!job.isComplete()) {
            job.waitForCompletion();
        }
    }
}

// Simple value object for one entry of the co-occurrence matrix.
class Cooccurrence {
    private int itemID1;
    private int itemID2;
    private int num;

    public Cooccurrence(int itemID1, int itemID2, int num) {
        super();
        this.itemID1 = itemID1;
        this.itemID2 = itemID2;
        this.num = num;
    }

    public int getItemID1() {
        return itemID1;
    }

    public void setItemID1(int itemID1) {
        this.itemID1 = itemID1;
    }

    public int getItemID2() {
        return itemID2;
    }

    public void setItemID2(int itemID2) {
        this.itemID2 = itemID2;
    }

    public int getNum() {
        return num;
    }

    public void setNum(int num) {
        this.num = num;
    }
}

The map output:

The reduce output:

This is the final result: the first field is the user ID, and the second is the recommended movie ID together with its recommendation score.
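Tracing the hypothetical sample data through Step4: the mapper emits partial products of the form "userID" followed by "itemID,pref*count", and the reducer sums them per item, so user 2's final lines would be (the HashMap iteration order is not guaranteed):

2    101,9.0
2    102,4.5
2    103,9.0

Items 101 and 103, which user 2 has already rated, would normally be dropped before showing the list, leaving item 102 with score 4.5 as the new recommendation.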

6.6 HdfsDAO.java

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.mapred.JobConf;

public class HdfsDAO {

    private static final String HDFS = "hdfs://192.168.1.210:9000/";

    private String hdfsPath;
    private Configuration conf;

    public HdfsDAO(Configuration conf) {
        this(HDFS, conf);
    }

    public HdfsDAO(String hdfs, Configuration conf) {
        this.hdfsPath = hdfs;
        this.conf = conf;
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = config();
        HdfsDAO hdfs = new HdfsDAO(conf);
        hdfs.copyFile("datafile/item.csv", "/tmp/new");
        hdfs.ls("/tmp/new");
    }

    public static JobConf config() {
        JobConf conf = new JobConf(HdfsDAO.class);
        conf.setJobName("HdfsDAO");
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        return conf;
    }

    public void mkdirs(String folder) throws IOException {
        Path path = new Path(folder);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        if (!fs.exists(path)) {
            fs.mkdirs(path);
            System.out.println("Create: " + folder);
        }
        fs.close();
    }

    public void rmr(String folder) throws IOException {
        Path path = new Path(folder);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        fs.deleteOnExit(path);
        System.out.println("Delete: " + folder);
        fs.close();
    }

    public void ls(String folder) throws IOException {
        Path path = new Path(folder);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        FileStatus[] list = fs.listStatus(path);
        System.out.println("ls: " + folder);
        System.out.println("==========================================================");
        for (FileStatus f : list) {
            System.out.printf("name: %s, folder: %s, size: %d\n", f.getPath(), f.isDir(), f.getLen());
        }
        System.out.println("==========================================================");
        fs.close();
    }

    public void createFile(String file, String content) throws IOException {
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        byte[] buff = content.getBytes();
        FSDataOutputStream os = null;
        try {
            os = fs.create(new Path(file));
            os.write(buff, 0, buff.length);
            System.out.println("Create: " + file);
        } finally {
            if (os != null)
                os.close();
        }
        fs.close();
    }

    public void copyFile(String local, String remote) throws IOException {
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        fs.copyFromLocalFile(new Path(local), new Path(remote));
        System.out.println("copy from: " + local + " to " + remote);
        fs.close();
    }

    public void download(String remote, String local) throws IOException {
        Path path = new Path(remote);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        fs.copyToLocalFile(path, new Path(local));
        System.out.println("download: from " + remote + " to " + local);
        fs.close();
    }

    public void cat(String remoteFile) throws IOException {
        Path path = new Path(remoteFile);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        FSDataInputStream fsdis = null;
        System.out.println("cat: " + remoteFile);
        try {
            fsdis = fs.open(path);
            IOUtils.copyBytes(fsdis, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(fsdis);
            fs.close();
        }
    }

    public void location() throws IOException {
        // String folder = hdfsPath + "create/";
        // String file = "t2.txt";
        // FileSystem fs = FileSystem.get(URI.create(hdfsPath), new Configuration());
        // FileStatus f = fs.getFileStatus(new Path(folder + file));
        // BlockLocation[] list = fs.getFileBlockLocations(f, 0, f.getLen());
        //
        // System.out.println("File Location: " + folder + file);
        // for (BlockLocation bl : list) {
        //     String[] hosts = bl.getHosts();
        //     for (String host : hosts) {
        //         System.out.println("host:" + host);
        //     }
        // }
        // fs.close();
    }
}

