Quick Hadoop Environment Setup


I. Environment Setup

1. Install the JDK, Scala, and Hadoop (all installed by simply extracting the archives).
2. Configure /etc/profile:

    export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk
    export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=${JAVA_HOME}/bin:$PATH
    export SCALA_HOME=/home/fjj/bigdata/scale/scala-2.11.8
    export PATH=${SCALA_HOME}/bin:$PATH
    export HADOOP_HOME=/home/fjj/bigdata/hadoop/hadoop
    export HADOOP_INSTALL=$HADOOP_HOME
    export HADOOP_MAPRED_HOME=$HADOOP_HOME
    export HADOOP_COMMON_HOME=$HADOOP_HOME
    export HADOOP_HDFS_HOME=$HADOOP_HOME
    export YARN_HOME=$HADOOP_HOME
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
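
After saving /etc/profile, reload it in the current shell and make sure the three tools resolve; a quick sanity check (the exact version strings will vary with your installs):

    source /etc/profile
    java -version      # the 1.7.0 OpenJDK configured above
    hadoop version     # should report 2.6.4 if HADOOP_HOME is correct
    scala -version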


II. Editing the Configuration Files

1. In hadoop-env.sh, set the Java path: export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk
2. core-site.xml

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
      </property>
    </configuration>

3. mapred-site.xml
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>
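
To confirm that Hadoop actually picks these files up, you can query individual keys; hdfs getconf reads the same XML resources the daemons do (a quick check, assuming $HADOOP_HOME/bin is on PATH from section I):

    hdfs getconf -confKey fs.default.name   # expect hdfs://localhost:9000
    hdfs getconf -confKey hadoop.tmp.dir    # expect /home/hadoop/tmp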


III. Notes

1. The clusterID of the DataNode must match that of the NameNode. The relevant directories are set by dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml; inspect the VERSION file under each directory's current subdirectory, as shown below.
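
A quick way to compare the two IDs, assuming the default directory layout under the hadoop.tmp.dir configured above (adjust the paths if hdfs-site.xml overrides dfs.namenode.name.dir or dfs.datanode.data.dir):

    grep clusterID /home/hadoop/tmp/dfs/name/current/VERSION
    grep clusterID /home/hadoop/tmp/dfs/data/current/VERSION
    # If they differ (typically after re-formatting the NameNode), copy the
    # NameNode's clusterID into the DataNode's VERSION file, or wipe the
    # data directory and format again.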


IV. Common Commands

1. Format the NameNode: ./bin/hadoop namenode -format
2. Start and stop Hadoop: ./sbin/start-all.sh and ./sbin/stop-all.sh
3. Create a directory: ./bin/hdfs dfs -mkdir -p /user/root/output
4. Copy a file into HDFS: hadoop fs -copyFromLocal /usr/local/hadoop/test.txt input
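
Put together, a first run from $HADOOP_HOME might look like this sketch (the test.txt path is taken from item 4 above; jps is only a sanity check that the daemons came up):

    cd $HADOOP_HOME
    ./bin/hadoop namenode -format     # first run only: initializes the name directory
    ./sbin/start-all.sh
    jps                               # expect NameNode, DataNode, ResourceManager, NodeManager, ...
    ./bin/hdfs dfs -mkdir -p /user/root/input
    ./bin/hadoop fs -copyFromLocal /usr/local/hadoop/test.txt /user/root/input
    ./sbin/stop-all.sh                # shut everything down when finished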


V. Configuring Hadoop in Eclipse

1. Copy hadoop-eclipse-plugin-2.6.4.jar into eclipse/plugins and restart Eclipse.
2. Open Window -> Preferences; a Hadoop Map/Reduce entry appears in the left pane. Select it and set the Hadoop installation path on the right.
3. Configure the Map/Reduce Locations: open Window -> Open Perspective -> Other, select Map/Reduce, and click OK.
4. In the Map/Reduce Locations tab, click the small elephant icon on the right to open the Hadoop Location configuration window.
5. Enter any Location Name, then configure Map/Reduce Master and DFS Master: Host and Port must match the settings in mapred-site.xml and core-site.xml respectively.


VI. Creating a WordCount Project

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class WordCount {

      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // Emit (token, 1) for every whitespace-separated token in the line.
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }

      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        // Sum the counts for each word; this class also serves as the combiner.
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
          System.err.println("Usage: wordcount <in> <out>");
          System.exit(2);
        }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // map-side partial sums
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }
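
Note that IntSumReducer is registered both as the combiner and as the reducer: since summing is associative and commutative, partial sums can already be computed on the map side, which reduces the amount of data shuffled to the reducers.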

1. Copy the configuration files: copy Hadoop's core-site.xml, hdfs-site.xml, and log4j.properties into the project's src directory.
2. Click Run As -> Run Configurations and set the program arguments, i.e. the input and output directories:
    hdfs://localhost:9000/user/hadoop/input
    hdfs://localhost:9000/user/hadoop/output
3. Run the project.



4. Run the bundled example: /home/fjj/bigdata/hadoop/hadoop/bin/hadoop jar hadoop-mapreduce-examples-2.6.4.jar wordcount /user/root/input /user/root/output
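
One caveat when re-running either job: MapReduce refuses to start if the output directory already exists, so remove it first; once the job finishes, the counts can be read straight from HDFS:

    ./bin/hdfs dfs -rm -r /user/root/output               # required before a re-run
    ./bin/hdfs dfs -cat /user/root/output/part-r-00000    # view the word counts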
