Hadoop: A First Hadoop Program


First, make sure Eclipse can already connect to the remote Hadoop cluster.

If you don't know how to set that up, see this guide: http://blog.csdn.net/sunyx1130/article/details/50864454

Remember to create a dedicated user for this; it seems root cannot be used for remote operations.

Then, logged in as that custom user, create the /input path in HDFS.
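A minimal sketch of the commands (assuming your custom user is named sunyx, as in the error messages later in this post; /path/to/file1.txt is a placeholder for any local text file):

hadoop fs -mkdir /input
hadoop fs -put /path/to/file1.txt /input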


1. Create a new Map/Reduce project


2. Write the code

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums up the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        /*
        conf.set("fs.default.name", "hdfs://master:9000");
        conf.set("hadoop.job.user", "sunyx");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("mapred.job.tracker", "master:9001"); // If unset, this defaults to "local",
                                                       // meaning the job runs in-process
                                                       // instead of on the cluster.
        conf.set("yarn.resourcemanager.hostname", "master");
        conf.set("yarn.resourcemanager.scheduler.address", "master:8030");
        conf.set("yarn.resourcemanager.resource-tracker.address", "master:8031");
        conf.set("yarn.resourcemanager.address", "master:8032");
        // conf.set("yarn.resourcemanager.admin.address", "master:8033");
        */

        // Input and output paths on HDFS.
        Path inpath = new Path("hdfs://master:9000/input");
        Path outpath = new Path("hdfs://master:9000/output");

        Job job = Job.getInstance(conf, "WordCount");
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, inpath);
        FileOutputFormat.setOutputPath(job, outpath);

        System.out.println("Starting job...");
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
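One caveat worth knowing about this code: FileOutputFormat refuses to write into a directory that already exists, so the job fails with a FileAlreadyExistsException if /output is left over from a previous run. Clear it before re-running:

hadoop fs -rm -r /output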
4. Run the program
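Once the job completes, the word counts are written to /output; with the default TextOutputFormat and a single reducer they end up in a file named part-r-00000, which you can inspect with:

hadoop fs -cat /output/part-r-00000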

5. Check the job's running status
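Besides the Eclipse console output, two standard ways to watch the job (assuming default ports): open the YARN ResourceManager web UI at http://master:8088, or list running applications from the shell:

yarn application -list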


6. Some problems you may hit

Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist:hdfs://master:9000/sunyx/input

The input path has not been created. Create the corresponding directories in HDFS first, then upload a file:

hadoop fs -mkdir /sunyx

hadoop fs -mkdir /sunyx/input

hadoop fs -put /xxx/file1.txt /sunyx/input
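Also note the mismatch: the exception mentions /sunyx/input while the code above hard-codes hdfs://master:9000/input, so make sure the path in the program and the path you create actually agree. You can verify the file landed where you expect with:

hadoop fs -ls /sunyx/input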

Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=sunyx, access=WRITE,inode="/tmp/hadoop-yarn/staging/sunyx/.staging":root:supergroup:drwxr-xr-x

Configure hdfs-site.xml:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
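Two notes on this: the property belongs in hdfs-site.xml on the cluster, and the NameNode needs a restart for it to take effect. Also, in Hadoop 2.x the canonical key is dfs.permissions.enabled (dfs.permissions is kept as a deprecated alias), so the equivalent entry is:

<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>

Keep in mind this disables HDFS permission checking cluster-wide, which is fine for a test setup but not for anything shared.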

If it still fails after that, create a matching user on the Linux host for the connection, then grant that user the appropriate permissions on the relevant HDFS paths.
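For example, to hand the staging directory from the exception above over to the sunyx user (a sketch; the chown must be run as the HDFS superuser, i.e. the user that started the NameNode):

hadoop fs -chown -R sunyx:supergroup /tmp/hadoop-yarn/staging/sunyx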
