Hadoop 2: Setting Up HA with Manual Failover
-----------------------------
1. Setting up HA with manual failover (compared with a Hadoop 1 cluster, this adds a JournalNode cluster)
-----------------------------
namenode: hadoop0 and hadoop1
datanode: hadoop2, hadoop3, hadoop4
journalnode: hadoop0, hadoop1, hadoop2 (must be an odd number of nodes)
resourcemanager: hadoop0
nodemanager: hadoop2, hadoop3, hadoop4
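All five hostnames must resolve from every node. A minimal sketch of the needed /etc/hosts entries; the IP addresses below are placeholder assumptions, adjust them to your network:

```shell
# Placeholder /etc/hosts entries for the five nodes (IPs are assumptions).
# Append these lines to /etc/hosts on every machine in the cluster.
HOSTS_ENTRIES='192.168.80.100 hadoop0
192.168.80.101 hadoop1
192.168.80.102 hadoop2
192.168.80.103 hadoop3
192.168.80.104 hadoop4'
echo "$HOSTS_ENTRIES"
```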
1.1 Configuration files (hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, slaves)
1.1.1 hadoop-env.sh
export JAVA_HOME=/usr/local/jdk
1.1.2 core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://cluster1</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
1.1.3 hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>cluster1</value>
</property>
<property>
<name>dfs.ha.namenodes.cluster1</name>
<value>hadoop101,hadoop102</value>
</property>
<property>
<name>dfs.namenode.rpc-address.cluster1.hadoop101</name>
<value>hadoop0:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster1.hadoop101</name>
<value>hadoop0:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address.cluster1.hadoop102</name>
<value>hadoop1:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster1.hadoop102</name>
<value>hadoop1:50070</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.cluster1</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop0:8485;hadoop1:8485;hadoop2:8485/cluster1</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/hadoop/tmp/journal</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.cluster1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
1.1.4 yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop0</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
1.1.5 mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
1.1.6 slaves
hadoop2
hadoop3
hadoop4
1.1.7 Copy the hadoop directory from hadoop0 to hadoop1, hadoop2, hadoop3 and hadoop4
scp -rq hadoop hadoop1:/usr/local
scp -rq hadoop hadoop2:/usr/local
scp -rq hadoop hadoop3:/usr/local
scp -rq hadoop hadoop4:/usr/local
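The four copies can also be scripted with a loop. A minimal sketch that prints one scp command per target node; pipe it to sh to actually run, assuming passwordless SSH from hadoop0 and the install under /usr/local/hadoop:

```shell
# Print the scp command for each worker node; pipe the output to sh to execute.
copy_hadoop() {
  for node in hadoop1 hadoop2 hadoop3 hadoop4; do
    echo "scp -rq /usr/local/hadoop $node:/usr/local"
  done
}
copy_hadoop            # dry run; use `copy_hadoop | sh` to copy for real
```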
1.2 Start the JournalNode cluster
On hadoop0, hadoop1 and hadoop2, run hadoop/sbin/hadoop-daemon.sh start journalnode
1.3 Format and start the NameNodes
On hadoop0, run hadoop/bin/hdfs namenode -format
On hadoop0, run hadoop/sbin/hadoop-daemon.sh start namenode
On hadoop1, run hadoop/bin/hdfs namenode -bootstrapStandby
On hadoop1, run hadoop/sbin/hadoop-daemon.sh start namenode
On hadoop0, run hadoop/bin/hdfs haadmin -failover --forceactive hadoop101 hadoop102
(The failover mechanism guarantees that exactly one NameNode is active and the other is standby; the command above makes hadoop102 active, and the roles can be switched manually later.)
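The ordering in 1.2 and 1.3 matters: the JournalNodes must be up before the format, and the format must happen before bootstrapStandby. A minimal sketch that emits the whole bring-up as a script, assuming passwordless SSH from hadoop0 and the same relative hadoop path on every node:

```shell
# Emit the HDFS HA bring-up sequence in the required order.
# Pipe the output to sh to execute (assumes passwordless SSH from hadoop0).
hdfs_ha_bringup() {
  for jn in hadoop0 hadoop1 hadoop2; do                 # JournalNodes first
    echo "ssh $jn hadoop/sbin/hadoop-daemon.sh start journalnode"
  done
  echo "ssh hadoop0 hadoop/bin/hdfs namenode -format"   # format only once
  echo "ssh hadoop0 hadoop/sbin/hadoop-daemon.sh start namenode"
  echo "ssh hadoop1 hadoop/bin/hdfs namenode -bootstrapStandby"
  echo "ssh hadoop1 hadoop/sbin/hadoop-daemon.sh start namenode"
  echo "ssh hadoop0 hadoop/bin/hdfs haadmin -failover --forceactive hadoop101 hadoop102"
}
hdfs_ha_bringup        # prints the plan; run `hdfs_ha_bringup | sh` for real
```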
1.4 Start the DataNodes
On hadoop0, run hadoop/sbin/hadoop-daemons.sh start datanode (note the plural hadoop-daemons.sh: it starts the daemon on every node listed in slaves)
1.5 Start the ResourceManager and NodeManagers
On hadoop0, run hadoop/sbin/start-yarn.sh (it starts the ResourceManager on the local node and a NodeManager on every node listed in slaves; it takes no extra arguments)
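Once everything is up, jps on each node should show Java processes matching its role in the layout above. A small helper encoding that expectation (the process names are the standard Hadoop 2 daemon names):

```shell
# Expected Java daemons per node for the cluster layout in this article.
expected_procs() {
  case "$1" in
    hadoop0)         echo "NameNode JournalNode ResourceManager" ;;
    hadoop1)         echo "NameNode JournalNode" ;;
    hadoop2)         echo "DataNode JournalNode NodeManager" ;;
    hadoop3|hadoop4) echo "DataNode NodeManager" ;;
    *) echo "unknown node: $1" >&2; return 1 ;;
  esac
}
# On each node, run `jps` and compare against expected_procs <hostname>.
expected_procs hadoop0
```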
Reference: http://www.superwu.cn/2014/02/12/1094/