Upgrading a Hadoop 1.2.1 Pseudo-Distributed Cluster to 2.6.0 Pseudo-Distributed (Part 8)

Setting up Hadoop 1.2.1 pseudo-distributed

1. Create the directories
mkdir -p /hadoop/hadoop/data/dfs/name
mkdir -p /hadoop/hadoop/data/dfs/data
mkdir -p /hadoop/tmp
chown -R hadoop:root /hadoop
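The layout above can be scripted and sanity-checked in one go; a minimal sketch (BASE is a hypothetical variable, defaulting to /tmp/hadoop-demo here so it can be tried without root — on the real host it would be /hadoop):

```shell
# Sketch: create the same directory layout under a configurable base and verify it.
# BASE is an assumption for illustration; on the real host it would be /hadoop.
BASE="${BASE:-/tmp/hadoop-demo}"
mkdir -p "$BASE/hadoop/data/dfs/name" \
         "$BASE/hadoop/data/dfs/data" \
         "$BASE/tmp"
# chown only succeeds as root (and needs the hadoop user to exist); skip otherwise.
if [ "$(id -u)" -eq 0 ]; then
    chown -R hadoop:root "$BASE" 2>/dev/null || true
fi
# Verify all three directories were created.
for d in "$BASE/hadoop/data/dfs/name" "$BASE/hadoop/data/dfs/data" "$BASE/tmp"; do
    [ -d "$d" ] || echo "missing: $d"
done
echo "layout ready under $BASE"
```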


2. Extract Hadoop 1.2.1 into /hadoop
[hadoop@hadoop04 hadoop]$ cd /hadoop/
[hadoop@hadoop04 hadoop]$ tar -zxvf hadoop-1.2.1.tar.gz


3. Create a symlink
[hadoop@hadoop04 hadoop]$ ln -s hadoop-1.2.1 hadoop

4. Configure core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop04:9000</value>
 </property>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>


5. Configure hdfs-site.xml

  <property>
   <name>dfs.name.dir</name>
   <value>/hadoop/hadoop/data/dfs/name</value>
 </property>

 <property>
  <name>dfs.data.dir</name>
  <value>/hadoop/hadoop/data/dfs/data</value>
  </property>

 <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>


6. Configure mapred-site.xml

<property>
   <name>mapred.job.tracker</name>
   <value>hadoop04:9001</value>
 </property>
 <property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1000m</value>
 </property>


7. Configure hadoop-env.sh

export JAVA_HOME=/usr/java/default

8. Configure the masters and slaves files
[hadoop@hadoop04 conf]$ cat masters 
hadoop04
[hadoop@hadoop04 conf]$ cat slaves 
hadoop04
[hadoop@hadoop04 conf]$


9. Format the NameNode
[hadoop@hadoop04 conf]$ hadoop namenode -format

10. Start all daemons
[hadoop@hadoop04 conf]$ start-all.sh

11. Check daemon status
[hadoop@hadoop04 conf]$ jps
3831 JobTracker
3952 TaskTracker
4342 Jps
3637 DataNode
3755 SecondaryNameNode
3518 NameNode
[hadoop@hadoop04 conf]$
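All five 1.x daemons should appear in the jps listing above; a quick scripted check, where check_daemons is a hypothetical helper that takes the output of `jps`:

```shell
# Sketch: report which of the expected Hadoop 1.x daemons appear in jps output.
# check_daemons is a hypothetical helper; pass it the captured output of `jps`.
check_daemons() {
    for p in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        # -w avoids "NameNode" matching inside "SecondaryNameNode"
        if printf '%s\n' "$1" | grep -qw "$p"; then
            echo "up: $p"
        else
            echo "DOWN: $p"
        fi
    done
}
# On the real host: check_daemons "$(jps)"
```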


12. Upload and extract Hadoop 2.6.0
[hadoop@hadoop04 hadoop]$ tar -zxvf hadoop-2.6.0.tar.gz

13. Create a symlink for Hadoop 2.6.0 (the /opt/hadoop path that HADOOP2_HOME points at; creating a link under /opt may require root)
[hadoop@hadoop04 hadoop]$ ln -s /hadoop/hadoop-2.6.0 /opt/hadoop

14. Stop all services
[hadoop@hadoop04 hadoop]$ stop-all.sh

15. Back up the metadata and configuration files

[hadoop@hadoop04 hadoop]$ mkdir -p /hadoop/backup/hd121
[hadoop@hadoop04 hadoop]$ cp -r /hadoop/hadoop/conf /hadoop/backup/hd121/
[hadoop@hadoop04 hadoop]$ cp -r /hadoop/hadoop/data/dfs/name/ /hadoop/backup/hd121/
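Before upgrading it is worth confirming the backup actually matches the original; a sketch, where check_backup is a hypothetical helper built on a recursive diff:

```shell
# Sketch: verify that a backup tree matches its source byte-for-byte.
# check_backup is a hypothetical helper; diff -r walks both trees recursively
# and succeeds (exit 0) only when every file matches.
check_backup() {
    diff -r "$1" "$2" >/dev/null 2>&1
}
# On the real host (paths from the cp commands above):
# check_backup /hadoop/hadoop/data/dfs/name /hadoop/backup/hd121/name \
#     && echo "name dir backup matches"
```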


16. Update the environment variables


[hadoop@hadoop04 ~]$ vi .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs


PATH=$PATH:$HOME/bin

export PATH
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

#Hadoop1.0           <=== comment out the old Hadoop 1.2.1 entries
#export HADOOP1_HOME=/hadoop/hadoop
#export PATH=$HADOOP1_HOME/bin:$PATH
#export HADOOP_CONF_DIR=${HADOOP1_HOME}/conf

#Hadoop2.0
export HADOOP2_HOME=/opt/hadoop
export HADOOP_CONF_DIR=${HADOOP2_HOME}/etc/hadoop 
export HADOOP_MAPRED_HOME=${HADOOP2_HOME}
export YARN_CONF_DIR=${HADOOP2_HOME}/etc/hadoop
export HADOOP_YARN_HOME=${HADOOP2_HOME}
export HADOOP_COMMON_HOME=${HADOOP2_HOME}
export HADOOP_HDFS_HOME=${HADOOP2_HOME}
export HDFS_CONF_DIR=${HADOOP2_HOME}/etc/hadoop
export PATH=$HADOOP2_HOME/bin:$HADOOP2_HOME/sbin:$PATH
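After sourcing the new profile, the 2.6.0 bin directory must come before any old 1.2.1 entry on PATH, or the wrong `hadoop` binary will run. A small check for that ordering (path_prefers is a hypothetical helper):

```shell
# Sketch: succeed only if WANTED appears on PATH before UNWANTED
# (or UNWANTED is absent entirely). path_prefers is a hypothetical helper.
path_prefers() {
    case ":$PATH:" in
        *":$2:"*":$1:"*) return 1 ;;   # unwanted dir comes first
        *":$1:"*)        return 0 ;;   # wanted dir present, not after unwanted
        *)               return 1 ;;   # wanted dir missing
    esac
}
# On the real host:
# path_prefers /opt/hadoop/bin /hadoop/hadoop/bin && echo "2.6.0 binaries win"
```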


17. Edit Hadoop 2.6.0's core-site.xml

<property>
  <name>fs.defaultFS</name>   <!-- renamed from fs.default.name in 1.2.1; use this new name -->
  <value>hdfs://hadoop04:9000</value>
 </property>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>


18. Edit Hadoop 2.6.0's hdfs-site.xml

  <property>
   <name>dfs.namenode.name.dir</name>
   <value>/hadoop/hadoop/data/dfs/name</value>
 </property>

 <property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoop/hadoop/data/dfs/data</value>
  </property>

 <property>
  <name>dfs.replication</name>
  <value>1</value>
 </property>

19. Edit the slaves file
hadoop04

20. Upgrade HDFS
[hadoop@hadoop04 hadoop]$ hadoop-daemon.sh start namenode -upgrade
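Whether the upgrade actually ran can be seen in the NameNode log; a sketch, assuming the stock hadoop-daemon.sh log naming convention (hadoop-&lt;user&gt;-namenode-&lt;host&gt;.log), with show_upgrade_msgs as a hypothetical helper:

```shell
# Sketch: scan the NameNode log for upgrade-related messages.
# show_upgrade_msgs is a hypothetical helper; the log path below assumes the
# stock hadoop-daemon.sh naming: hadoop-<user>-namenode-<host>.log.
show_upgrade_msgs() {
    if [ -f "$1" ]; then
        grep -i "upgrade" "$1" | tail -5
    else
        echo "log not found: $1"
    fi
}
show_upgrade_msgs "${HADOOP2_HOME:-/opt/hadoop}/logs/hadoop-hadoop-namenode-hadoop04.log"
```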

21. Start the DataNode
[hadoop@hadoop04 hadoop]$ hadoop-daemon.sh start datanode

22. Verify the blocks
[hadoop@hadoop04 logs]$ hdfs fsck / -blocks
17/03/24 04:11:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop04:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.123.13 for path / at Fri Mar 24 04:11:21 EDT 2017
..Status: HEALTHY
 Total size:    63851634 B
 Total dirs:    6
 Total files:   2
 Total symlinks:                0
 Total blocks (validated):      2 (avg. block size 31925817 B)
 Minimally replicated blocks:   2 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    1
 Average block replication:     1.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)

 Number of data-nodes:          1
 Number of racks:               1
FSCK ended at Fri Mar 24 04:11:21 EDT 2017 in 11 milliseconds



The filesystem under path '/' is HEALTHY
[hadoop@hadoop04 logs]$



23. Finalize the upgrade

[hadoop@hadoop04 logs]$ hdfs dfsadmin -finalizeUpgrade
17/03/24 04:13:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Finalize upgrade successful
[hadoop@hadoop04 logs]$


24. Upgrading MapReduce

After upgrading from Hadoop 1.2.1 to 2.6.0, MapReduce is managed by, and runs on top of, YARN, which did not exist in 1.2.1. This step can therefore be treated as a fresh configuration of YARN in 2.6.0.


Configure mapred-site.xml

  <property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>

Configure yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
 </property>

25. Start the YARN services
[hadoop@hadoop04 hadoop]$ start-yarn.sh
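A simple smoke test for the new YARN setup is the bundled pi example job; a sketch assuming the stock 2.6.0 jar layout under HADOOP2_HOME (run_pi_example is a hypothetical helper):

```shell
# Sketch: submit the bundled pi example to YARN as a smoke test.
# run_pi_example is a hypothetical helper; the jar path assumes the stock
# Hadoop 2.6.0 distribution layout under the given install directory.
run_pi_example() {
    jar="$1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar"
    if [ -f "$jar" ]; then
        # 2 map tasks, 5 samples each: small enough for a pseudo-distributed node
        hadoop jar "$jar" pi 2 5
    else
        echo "examples jar not found: $jar"
    fi
}
run_pi_example "${HADOOP2_HOME:-/opt/hadoop}"
```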
The basic upgrade is complete.