Hadoop Study Notes



1. Install the JDK and Hadoop, and configure the environment variables as shown below


JDK download:

wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.tar.gz

Hadoop download: wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz

Extract: tar -zxvf hadoop-1.2.1.tar.gz

export JAVA_HOME=/usr/java/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export HADOOP_HOME=/opt/hadoop-1.2.1
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$PATH



2. Edit the four files under Hadoop's conf directory

2.1 hadoop-env.sh

Set the JDK path: export JAVA_HOME=/usr/java/jdk1.8.0_144

2.2 core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/name</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ray123:9000</value>
  </property>
</configuration>
2.3 hdfs-site.xml


<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop/data</value>
  </property>
</configuration>

2.4 mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>ray123:9001</value>
  </property>
</configuration>
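These files are what the Hadoop client and daemons load at startup. As a quick sanity check, the resolved values can be printed from Java. The sketch below is illustrative (the class name PrintConf is hypothetical) and assumes $HADOOP_HOME/conf is on the classpath, e.g. when launched through the hadoop command; JobConf is used instead of plain Configuration because in Hadoop 1.x it also loads mapred-site.xml:

import org.apache.hadoop.mapred.JobConf;

public class PrintConf {
    public static void main(String[] args) {
        // JobConf loads core-site.xml and mapred-site.xml from the classpath
        JobConf conf = new JobConf();
        System.out.println("fs.default.name    = " + conf.get("fs.default.name"));
        System.out.println("hadoop.tmp.dir     = " + conf.get("hadoop.tmp.dir"));
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
    }
}

Compile it the same way as WordCount below and run it with hadoop PrintConf; each line should echo the value configured above.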


3. Start Hadoop

3.1 Run hadoop namenode -format

3.2 Change to the bin directory and run start-all.sh

3.3 Run jps; if the following processes appear, the installation succeeded:

2480 JobTracker
2404 SecondaryNameNode
3958 Jps
2279 DataNode
2154 NameNode
2878 TaskTracker

4. Write, compile, and package the Java program

The example WordCount.java is as follows:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every token in each input line.
    public static class WordCountMap extends
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer token = new StringTokenizer(line);
            while (token.hasMoreTokens()) {
                word.set(token.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts collected for each word.
    public static class WordCountReduce extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf);
        job.setJarByClass(WordCount.class);
        job.setJobName("wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(WordCountMap.class);
        job.setReducerClass(WordCountReduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // args[0] = input path in HDFS, args[1] = output path (must not exist yet)
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
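Between the map and reduce phases the framework sorts and groups the map output by key, so each reduce() call receives one word together with all of its 1s. For example, with hypothetical contents "hello world" in file1 and "hello hadoop" in file2, the job would emit tab-separated word/count pairs:

hadoop	1
hello	2
world	1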

Compile (create the output directory first with mkdir word_count_class; both jars ship with Hadoop 1.2.1):

javac -classpath /opt/hadoop-1.2.1/hadoop-core-1.2.1.jar:/opt/hadoop-1.2.1/lib/commons-cli-1.2.jar -d word_count_class/ WordCount.java

Package (run this inside word_count_class/ so all generated .class files, including the inner classes, end up in the jar):

jar -cvf wordcount.jar *.class

Create an input directory and put two input files, file1 and file2, in it.

Upload the files to input_wordcount in HDFS (create the directory first with hadoop fs -mkdir input_wordcount): hadoop fs -put input/* input_wordcount/

List the files in HDFS: hadoop fs -ls

Run the jar:

hadoop jar word_count_class/wordcount.jar WordCount input_wordcount output_wordcount

Here WordCount is the main class, input_wordcount is the input path, and output_wordcount is the output path; the output path must not exist before the job runs.

The results are written to part-r-00000 under output_wordcount and can be viewed with hadoop fs -cat output_wordcount/part-r-00000.
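The output can also be read through the HDFS Java API rather than the shell. Below is a minimal sketch under that assumption; the class name ReadOutput and the hard-coded result path are illustrative, not part of the original walkthrough:

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadOutput {
    public static void main(String[] args) throws Exception {
        // Picks up fs.default.name from core-site.xml on the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path result = new Path("output_wordcount/part-r-00000");
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(result)));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // each line is "word<TAB>count"
            }
        } finally {
            reader.close();
            fs.close();
        }
    }
}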


