CDH 4.2 HA Installation Preparation


Role key: NN = NameNode, JN = JournalNode, ZK = ZooKeeper, JT = JobTracker, DN = DataNode, TT = TaskTracker, HDFSZKFC/MRZKFC = ZooKeeper failover controllers for HDFS and MapReduce.

10.31.72.18   master1.jnhadoop.com  : NN  JN  ZK  HDFSZKFC             NN starts as active by default
10.31.72.19   master2.jnhadoop.com  : JT  JN  ZK  MRZKFC               JT starts as active by default
10.31.72.20   master3.jnhadoop.com  : NN  JN  JT  ZK  HDFSZKFC  MRZKFC NN and JT start as standby by default
10.31.72.21   72.21.jnhadoop.com    : TT  DN
10.31.72.22   72.22.jnhadoop.com    : TT  DN
10.31.72.23   72.23.jnhadoop.com    : TT  DN
10.31.72.24   72.24.jnhadoop.com    : TT  DN
10.31.72.25   72.25.jnhadoop.com    : TT  DN
10.31.72.26   72.26.jnhadoop.com    : TT  DN
10.31.72.27   72.27.jnhadoop.com    : TT  DN
10.31.72.28   72.28.jnhadoop.com    : TT  DN
10.31.72.29   72.29.jnhadoop.com    : TT  DN
10.31.72.31   72.31.jnhadoop.com    : TT  DN
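
Every node must be able to resolve the hostnames above before the Puppet and Hadoop steps below. A minimal sketch for pushing these host entries to all nodes over SSH, assuming a local hosts.cluster file holding the IP/hostname pairs from the table and the same "ip" node list used by the Puppet script below (both file names are assumptions):

#!/bin/bash
# Append the cluster host entries to /etc/hosts on every node.
# Assumes passwordless SSH as root to each node.
for i in `cat ip`;
do
echo $i
cat hosts.cluster | ssh $i "cat >> /etc/hosts"
done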

 

Install Puppet

 

#!/bin/bash
# Install and bootstrap the Puppet agent on every node listed in the "ip" file.
for i in `cat ip`;
do
echo $i
# Install the Puppet agent package.
ssh $i "yum install -y puppet"
# Make the Puppet master resolvable on the node.
ssh $i "echo '10.10.124.196 zw-124-196 puppetserver.mtpc.sohu.com' >>/etc/hosts"
# Point Puppet's yum package provider at the python2.4 interpreter.
ssh $i 'sed -i s/\"python\"/\"python2.4\"/ /usr/lib/ruby/site_ruby/1.8/puppet/provider/package/yum.rb'
# Run the agent once against the Puppet master to apply the catalog.
ssh $i "puppetd --test --server puppetserver.mtpc.sohu.com"
done
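
The script reads the node list from a plain file named ip in the working directory; it is assumed to hold one IP address per line for every node in the table above, e.g.:

10.31.72.18
10.31.72.19
10.31.72.20
(one line per node, down to 10.31.72.31)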

Packages to install (managed through Puppet):

hadoop-hdfs-namenode

hadoop-hdfs-journalnode

hadoop-hdfs-datanode

hadoop-hdfs-zkfc

 

hadoop-0.20-mapreduce-jobtrackerha

hadoop-0.20-mapreduce-tasktracker

hadoop-0.20-mapreduce-zkfc

 

hadoop-client
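
For reference, these packages map onto the roles in the node table roughly as follows. A hedged sketch of the per-role yum installs that Puppet drives, assuming the CDH 4.2 yum repository is already configured on every node:

# NameNode + HDFS ZKFC: master1 and master3
yum install -y hadoop-hdfs-namenode hadoop-hdfs-zkfc
# JournalNode: master1, master2, master3
yum install -y hadoop-hdfs-journalnode
# JobTracker HA + MR ZKFC: master2 and master3
yum install -y hadoop-0.20-mapreduce-jobtrackerha hadoop-0.20-mapreduce-zkfc
# DataNode + TaskTracker: the ten worker nodes
yum install -y hadoop-hdfs-datanode hadoop-0.20-mapreduce-tasktracker
# Client tools on any node that submits jobs
yum install -y hadoop-client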

========================
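
Step 1 below assumes the one-time HDFS HA initialization has already been done. A minimal sketch of those commands, on the assumption that this cluster uses quorum-journal HA (so the JournalNodes on the three masters should be running before the format) and the stock hdfs user:

# On each master: bring the JournalNodes up first
sudo service hadoop-hdfs-journalnode start

# On master1 only: format the primary NameNode
sudo -u hdfs hdfs namenode -format

# On master1 only: initialize the HA znode in ZooKeeper for the failover controllers
sudo -u hdfs hdfs zkfc -formatZK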

1. Start the primary (formatted) NameNode:

sudo service hadoop-hdfs-namenode start

2. Start the standby NameNode:

sudo -u hdfs hdfs namenode -bootstrapStandby
sudo service hadoop-hdfs-namenode start
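
With both NameNodes running, their HA state can be checked with the haadmin tool. nn1 and nn2 below are assumed service IDs; the real ones come from dfs.ha.namenodes.* in hdfs-site.xml:

sudo -u hdfs hdfs haadmin -getServiceState nn1
sudo -u hdfs hdfs haadmin -getServiceState nn2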

 

sudo service hadoop-hdfs-journalnode start

sudo service hadoop-hdfs-datanode start

sudo service hadoop-hdfs-zkfc start
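
The DataNode service has to be started on each of the ten worker nodes; a sketch reusing the SSH loop pattern from the Puppet script, assuming a worker-only list file ip.workers (10.31.72.21 through 10.31.72.31, one IP per line):

#!/bin/bash
# Start the DataNode daemon on every worker node.
for i in `cat ip.workers`;
do
echo $i
ssh $i "service hadoop-hdfs-datanode start"
done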

 

sudo service hadoop-0.20-mapreduce-tasktracker start

sudo service hadoop-0.20-mapreduce-jobtrackerha start

sudo service hadoop-0.20-mapreduce-zkfc start
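
Once everything is started, a quick sanity check is to list the running Hadoop daemons on each node with jps (shipped with the JDK), again driven by the ip node list:

#!/bin/bash
# Print the Java daemons running on every node
# (NameNode, JournalNode, DataNode, TaskTracker, zkfc, ...).
for i in `cat ip`;
do
echo "=== $i ==="
ssh $i "jps"
done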
