Hadoop 2.2.0 single-node installation (notes)

Source: Internet | Editor: 程序博客网 | Time: 2024/05/16 04:04

Note: the new Hadoop release differs considerably from earlier versions and I don't yet understand many of its processes. These are just notes from a first install; some parts are still fuzzy.

Install Java

Add a hadoop user

SSH configuration

Not covered in detail here.
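The post skips the SSH details, but a pseudo-distributed Hadoop needs passwordless login to localhost before the start scripts will work. A minimal sketch, assuming an RSA key and the default OpenSSH file locations:

```shell
# Create the key pair (skipped if one already exists) and authorize it locally.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# "ssh localhost" should now log in without a password prompt.
```

Run this as the hadoop user, since the daemons will SSH as whoever starts them.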

Environment variables

$ vi .bash_profile

export JAVA_HOME=/usr/java/jdk1.7.0_45

export HADOOP_HOME=/app/hadoop/hadoop-2.2.0

export JRE_HOME=$JAVA_HOME/jre

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export CLASSPATH=./:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
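A quick way to check that the PATH line took effect is to count the Hadoop entries after exporting. The paths below are the post's own; substitute your install location:

```shell
# Re-create the exports from .bash_profile and verify PATH picked them up.
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HADOOP_HOME=/app/hadoop/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Both $HADOOP_HOME/bin and $HADOOP_HOME/sbin should now be on PATH.
echo "$PATH" | tr ':' '\n' | grep -c "$HADOOP_HOME"   # prints 2
```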

 

Download Hadoop:

http://apache.fayea.com/apache-mirror/hadoop/common/hadoop-2.2.0/

 

Upload and extract

[hadoop@hadoop01 ~]$ tar -xvf hadoop-2.2.0.tar.gz   (the official binaries are 32-bit only)

(Extract it wherever you like; any directory works. Don't copy these paths verbatim — create your own, including the directories used in the configuration below. Just be sure to create them as the hadoop user, or you will hit permission problems.)

Move the extracted files to the software directory

/opt/hadoop-2.2.0 (note: later sections use /app/hadoop/hadoop-2.2.0; keep the path consistent with HADOOP_HOME)

 

Edit the Hadoop configuration files

Installation reference:

http://wenku.baidu.com/view/1681511a52d380eb62946df6.html

 

All of the configuration edits below are made in the following directory:

[hadoop@hadoop01 hadoop]$ pwd

/app/hadoop/hadoop-2.2.0/etc/hadoop

core-site.xml

[hadoop@hadoop01 hadoop]$ vi core-site.xml

<configuration>

<property>

 <name>fs.default.name</name>

 <value>hdfs://localhost:8020</value>

 <description>The name of the defaultfile system. Either the literal string "local" or a host:port forNDFS.

 </description>

 <final>true</final>

</property>

</configuration>

hdfs-site.xml

[hadoop@hadoop01 hadoop]$ vi hdfs-site.xml

 

 

<configuration>

         <property>

                   <name>dfs.namenode.name.dir</name>

                   <value>/home/hadoop/dfs/name</value>

                   <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>

                   <final>true</final>

         </property>

         <property>

                   <name>dfs.datanode.data.dir</name>

                   <value>/home/hadoop/dfs/data</value>

                   <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.

                   </description>

                   <final>true</final>

         </property>

         <property>

                   <name>dfs.replication</name>

                   <value>1</value>

         </property>

         <property>

         <name>dfs.permissions</name>

         <value>false</value>

         </property>

          

</configuration>
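The name and data directories above (and the mapred directories used later) must exist and be owned by the hadoop user before formatting. A sketch, with the post's /home/hadoop base factored into a variable so you can substitute your own:

```shell
# Base directory for HDFS/MapReduce local storage; the post uses /home/hadoop.
BASE="${BASE:-$HOME}"
mkdir -p "$BASE/dfs/name" "$BASE/dfs/data" \
         "$BASE/mapred/system" "$BASE/mapred/local"
chmod 755 "$BASE/dfs" "$BASE/mapred"
ls -d "$BASE"/dfs/* "$BASE"/mapred/*   # all four directories should be listed
```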

mapred-site.xml

[hadoop@hadoop01 hadoop]$ cp mapred-site.xml.template mapred-site.xml  (2.2.0 ships only the template)
[hadoop@hadoop01 hadoop]$ vi mapred-site.xml

 

<configuration>

         <property>

         <name>mapreduce.framework.name</name>

         <value>yarn</value>

         </property>

         <property>

                   <name>mapred.system.dir</name>

                   <value>/home/hadoop/mapred/system</value>

                   <final>true</final>

         </property>

         <property>

         <name>mapred.local.dir</name>

         <value>/home/hadoop/mapred/local</value>

         <final>true</final>

         </property>

</configuration>

yarn-site.xml

The defaults are enough to start the daemons.
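The post leaves yarn-site.xml at its defaults, which does start the daemons; in my experience, though, MapReduce jobs on YARN also need the shuffle auxiliary service enabled. A hedged addition to yarn-site.xml:

```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```

Without it, jobs typically fail at the shuffle phase even though all processes show up in jps.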

hadoop-env.sh

[hadoop@hadoop01 hadoop]$ vi hadoop-env.sh

Add:

export JAVA_HOME=/usr/java/jdk1.7.0_45

 

 

Start Hadoop

Format the NameNode

[hadoop@hadoop01 ~]$ hdfs namenode -format   (unlike earlier versions, the hdfs command now lives in bin)

Start the processes. (The four daemon commands below can be replaced by a single command; I don't yet understand what each individual command does, so I am recording them here. Feel free to skip them and just run sbin/start-all.sh.)

[hadoop@hadoop01 ~]$ hadoop-daemon.sh start namenode

[hadoop@hadoop01 ~]$ hadoop-daemon.sh start datanode

Start the YARN daemons

[hadoop@hadoop01 ~]$ yarn-daemon.sh start resourcemanager

[hadoop@hadoop01 ~]$ yarn-daemon.sh start nodemanager

[hadoop@hadoop01 ~]$ start-yarn.sh

Check that the processes are running

[hadoop@hadoop01 ~]$ jps

2912 NameNode

5499 ResourceManager

2981 DataNode

6671 Jps

6641 NodeManager

6473 SecondaryNameNode

 

If you see the processes above, everything has started.

View the Hadoop resource-manager web page

http://localhost:8088

 

Note: once the page loads, the metrics should not all be zero; if they are, something is wrong.