Upgrading a Hadoop 1.2.1 Pseudo-Distributed Cluster to 2.6.0 Pseudo-Distributed (Part 8)
Hadoop 1.2.1 pseudo-distributed setup
1. Create the directories
mkdir -p /hadoop/hadoop/data/dfs/name
mkdir -p /hadoop/hadoop/data/dfs/data
mkdir -p /hadoop/tmp
chown -R hadoop:root /hadoop
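The four commands above can be sketched as a small script. Here a throwaway prefix under /tmp stands in for /hadoop so the sketch runs without root (the chown step is skipped for the same reason):

```shell
# Demo prefix standing in for /hadoop (no root needed).
PREFIX=/tmp/hd-demo-dirs
# Create the NameNode, DataNode, and temp directories in one loop.
for d in "$PREFIX/hadoop/data/dfs/name" \
         "$PREFIX/hadoop/data/dfs/data" \
         "$PREFIX/tmp"; do
  mkdir -p "$d"
done
# Confirm all three exist.
ls -d "$PREFIX/hadoop/data/dfs/name" "$PREFIX/hadoop/data/dfs/data" "$PREFIX/tmp"
```

On a real host you would run the same loop against /hadoop as root and finish with `chown -R hadoop:root /hadoop`.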
2. Extract Hadoop 1.2.1 under /hadoop
[hadoop@hadoop04 hadoop]$ cd /hadoop/
[hadoop@hadoop04 hadoop]$ tar -zxvf hadoop-1.2.1.tar.gz
3. Create a symlink
[hadoop@hadoop04 hadoop]$ ln -s hadoop-1.2.1 hadoop
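Note that the symlink should point at the extracted release directory, not the tarball. A quick sketch in a throwaway directory:

```shell
# Throwaway workspace simulating /hadoop.
WORK=/tmp/hd-demo-link
mkdir -p "$WORK/hadoop-1.2.1"
cd "$WORK"
# -sfn: symbolic, replace any existing link, don't follow it.
ln -sfn hadoop-1.2.1 hadoop
readlink hadoop   # -> hadoop-1.2.1
```

A version-independent `hadoop` link means configs and PATH entries keep working across upgrades: only the link target changes.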
4. Configure core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop04:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
5. Configure hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/hadoop/hadoop/data/dfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/hadoop/hadoop/data/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
6. Configure mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>hadoop04:9001</value>
</property>
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx1000m</value>
</property>
7. Configure hadoop-env.sh
export JAVA_HOME=/usr/java/default
8. Configure the masters and slaves files
[hadoop@hadoop04 conf]$ cat masters
hadoop04
[hadoop@hadoop04 conf]$ cat slaves
hadoop04
[hadoop@hadoop04 conf]$
9. Format the NameNode
[hadoop@hadoop04 conf]$ hadoop namenode -format
10. Start all daemons
[hadoop@hadoop04 conf]$ start-all.sh
11. Check the daemon status
[hadoop@hadoop04 conf]$ jps
3831 JobTracker
3952 TaskTracker
4342 Jps
3637 DataNode
3755 SecondaryNameNode
3518 NameNode
[hadoop@hadoop04 conf]$
12. Upload and extract Hadoop 2.6.0
[hadoop@hadoop04 hadoop]$ tar -zxvf hadoop-2.6.0.tar.gz
13. Create a symlink
[hadoop@hadoop04 hadoop]$ ln -s /hadoop/hadoop-2.6.0 /opt/hadoop
14. Stop all services
[hadoop@hadoop04 hadoop]$ stop-all.sh
15. Back up the metadata and configuration files
[hadoop@hadoop04 hadoop]$ mkdir -p /hadoop/backup/hd121
[hadoop@hadoop04 hadoop]$ cp -r /hadoop/hadoop/conf /hadoop/backup/hd121/
[hadoop@hadoop04 hadoop]$ cp -r /hadoop/hadoop/data/dfs/name/ /hadoop/backup/hd121/
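The backup step can be verified with `diff -r` after copying. A sketch using stand-in paths and dummy files (on the real host, substitute /hadoop/hadoop and /hadoop/backup/hd121):

```shell
# Stand-in paths for the 1.x install and the backup target.
SRC=/tmp/hd-demo-bak/hadoop
BACKUP=/tmp/hd-demo-bak/backup/hd121
# Fake a conf directory and some NameNode metadata to copy.
mkdir -p "$SRC/conf" "$SRC/data/dfs/name/current"
echo 'dummy-conf' > "$SRC/conf/core-site.xml"
echo 'dummy-meta' > "$SRC/data/dfs/name/current/VERSION"
# Back up, then confirm the copies match the originals byte for byte.
mkdir -p "$BACKUP"
cp -r "$SRC/conf" "$BACKUP/"
cp -r "$SRC/data/dfs/name" "$BACKUP/"
diff -r "$SRC/conf" "$BACKUP/conf" \
  && diff -r "$SRC/data/dfs/name" "$BACKUP/name" \
  && echo backup-ok
```

Verifying the copy before touching the 2.6.0 layout matters here: the name directory backup is the only way back if the upgrade goes wrong before it is finalized.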
16. Update the environment variables
[hadoop@hadoop04 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export JAVA_HOME=/usr/java/default
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
#Hadoop1.0  <=== comment out the original Hadoop 1.2.1 entries
#export HADOOP1_HOME=/hadoop/hadoop
#export PATH=$HADOOP1_HOME/bin:$PATH
#export HADOOP_CONF_DIR=${HADOOP1_HOME}/conf
#Hadoop2.0
export HADOOP2_HOME=/opt/hadoop
export HADOOP_CONF_DIR=${HADOOP2_HOME}/etc/hadoop
export HADOOP_MAPRED_HOME=${HADOOP2_HOME}
export YARN_CONF_DIR=${HADOOP2_HOME}/etc/hadoop
export HADOOP_YARN_HOME=${HADOOP2_HOME}
export HADOOP_COMMON_HOME=${HADOOP2_HOME}
export HADOOP_HDFS_HOME=${HADOOP2_HOME}
export HDFS_CONF_DIR=${HADOOP2_HOME}/etc/hadoop
export PATH=$HADOOP2_HOME/bin:$HADOOP2_HOME/sbin:$PATH
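An easy mistake in this step is PATH order: the 2.6.0 bin directories must come before any leftover 1.x entry. A sketch with two stub `hadoop` scripts (the paths are hypothetical) shows that the first PATH match wins, which is why the profile above prepends $HADOOP2_HOME/bin:

```shell
# Two stub "hadoop" executables standing in for the 1.x and 2.x installs.
NEW=/tmp/hd-demo-path/new/bin
OLD=/tmp/hd-demo-path/old/bin
mkdir -p "$NEW" "$OLD"
printf '#!/bin/sh\necho 2.6.0\n' > "$NEW/hadoop"
printf '#!/bin/sh\necho 1.2.1\n' > "$OLD/hadoop"
chmod +x "$NEW/hadoop" "$OLD/hadoop"
# Prepend the new bin dir, as the profile does with $HADOOP2_HOME/bin.
PATH="$NEW:$OLD:$PATH"
hadoop   # prints 2.6.0: the shell stops at the first PATH hit
```

After sourcing the new profile, `which hadoop` is a quick way to confirm the 2.6.0 binary is the one being resolved.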
17. Edit Hadoop 2.6.0's core-site.xml
<property>
<name>fs.defaultFS</name> <=== this property name differs from Hadoop 1.2.1's fs.default.name; use the new one
<value>hdfs://hadoop04:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
18. Edit Hadoop 2.6.0's hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>/hadoop/hadoop/data/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hadoop/hadoop/data/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
19. Edit slaves
hadoop04
20. Upgrade HDFS
[hadoop@hadoop04 hadoop]$ hadoop-daemon.sh start namenode -upgrade
21. Start the DataNode
[hadoop@hadoop04 hadoop]$ hadoop-daemon.sh start datanode
22. Verify the blocks
[hadoop@hadoop04 logs]$ hdfs fsck / -blocks
17/03/24 04:11:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop04:50070
FSCK started by hadoop (auth:SIMPLE) from /192.168.123.13 for path / at Fri Mar 24 04:11:21 EDT 2017
..Status: HEALTHY
Total size: 63851634 B
Total dirs: 6
Total files: 2
Total symlinks: 0
Total blocks (validated): 2 (avg. block size 31925817 B)
Minimally replicated blocks: 2 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 1
Average block replication: 1.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 1
Number of racks: 1
FSCK ended at Fri Mar 24 04:11:21 EDT 2017 in 11 milliseconds
[hadoop@hadoop04 logs]$
23. Finalize the upgrade
[hadoop@hadoop04 logs]$ hdfs dfsadmin -finalizeUpgrade
17/03/24 04:13:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Finalize upgrade successful
[hadoop@hadoop04 logs]$
24. Upgrading MapReduce
After upgrading from Hadoop 1.2.1 to 2.6.0, MapReduce is managed by YARN: jobs run on top of YARN's resource management, which did not exist in Hadoop 1.2.1. So this step amounts to configuring YARN in Hadoop 2.6.0 from scratch.
Configure mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Configure yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
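The two snippets above can be written out with heredocs and sanity-checked with grep. The config directory below is a stand-in for $HADOOP2_HOME/etc/hadoop:

```shell
CONF=/tmp/hd-demo-yarn   # stand-in for $HADOOP2_HOME/etc/hadoop
mkdir -p "$CONF"
# mapred-site.xml: hand MapReduce jobs to YARN.
cat > "$CONF/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
# yarn-site.xml: enable the shuffle service NodeManagers run for MapReduce.
cat > "$CONF/yarn-site.xml" <<'EOF'
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF
# Sanity check: each file carries the value we expect.
grep -c '<value>yarn</value>' "$CONF/mapred-site.xml"   # prints 1
```

Quoting the heredoc delimiter (`<<'EOF'`) keeps the XML from being subjected to shell expansion, so the snippets land in the files exactly as written.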
25. Start the YARN services
[hadoop@hadoop04 hadoop]$ start-yarn.sh
The basic upgrade is complete.