Hadoop Cluster Setup

// I put this Hadoop cluster document together over the weekend; I hope it is useful to you.
After cloning a VM in VMware Workstation, the eth0 network interface may stop working. Fix it by editing /etc/udev/rules.d/70-persistent-net.rules:
1. Comment out or delete the eth0 line. It still records the MAC address of the system at clone time, but the newly booted clone has a different MAC.
2. Change NAME="eth1" to NAME="eth0". The MAC address in the ATTR field is the one the virtual machine assigned to this virtual NIC; use it to replace the MAC in /etc/sysconfig/network-scripts/ifcfg-eth0, then reboot.
Alternatively, skip eth0 and simply use eth1: copy /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth1.
===========================================================================================
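For reference, a minimal sketch of what the two files might look like after the fix on a CentOS 6 style system. The MAC address 00:0c:29:xx:xx:xx is a placeholder for the one your VM actually assigned, and the IP is only an example taken from the cluster plan below:

# /etc/udev/rules.d/70-persistent-net.rules  (old eth0 line removed, remaining rule renamed to eth0)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:xx:xx:xx", KERNEL=="eth*", NAME="eth0"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (HWADDR updated to match the rule above)
DEVICE=eth0
HWADDR=00:0c:29:xx:xx:xx
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.101
NETMASK=255.255.255.0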
// Follow the steps below exactly.

Cluster plan:

Hostname    IP              Installed software        Running processes
cluster1    192.168.1.101   jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
cluster2    192.168.1.102   jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
cluster3    192.168.1.103   jdk, hadoop               ResourceManager (replaces the JobTracker)
cluster4    192.168.1.104   jdk, hadoop               ResourceManager
cluster5    192.168.1.105   jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
cluster6    192.168.1.106   jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
cluster7    192.168.1.107   jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain

Installation steps (the JDK and the other prerequisites must already be installed):

1. Install and configure the zookeeper cluster (on cluster5)

1.1 Unpack
tar -zxvf zookeeper-3.4.5.tar.gz -C /cluster/

1.2 Edit the configuration
cd /cluster/zookeeper-3.4.5/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Change:
dataDir=/cluster/zookeeper-3.4.5/tmp
Append at the end:
server.1=cluster5:2888:3888
server.2=cluster6:2888:3888
server.3=cluster7:2888:3888
Save and exit, then create the tmp directory:
mkdir /cluster/zookeeper-3.4.5/tmp
Create an empty file:
touch /cluster/zookeeper-3.4.5/tmp/myid
Finally write the server ID into it:
echo 1 > /cluster/zookeeper-3.4.5/tmp/myid

1.3 Copy the configured zookeeper to the other nodes (first create the /cluster directory on cluster6 and cluster7: mkdir /cluster)
scp -r /cluster/zookeeper-3.4.5/ cluster6:/cluster/
scp -r /cluster/zookeeper-3.4.5/ cluster7:/cluster/
Note: change the contents of /cluster/zookeeper-3.4.5/tmp/myid on cluster6 and cluster7:
cluster6: echo 2 > /cluster/zookeeper-3.4.5/tmp/myid
cluster7: echo 3 > /cluster/zookeeper-3.4.5/tmp/myid

2. Install and configure the hadoop cluster (on cluster1)

2.1 Unpack
tar -zxvf hadoop-2.6.0.tar.gz -C /cluster/

2.2 Configure HDFS (in hadoop 2.x all configuration files live under $HADOOP_HOME/etc/hadoop)
# Add hadoop to the environment
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_55
export HADOOP_HOME=/cluster/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
cd /cluster/hadoop-2.6.0/etc/hadoop

2.2.1 Edit hadoop-env.sh (can be skipped if JAVA_HOME is already configured)
export JAVA_HOME=/usr/java/jdk1.7.0_55

2.2.2 Edit core-site.xml
<configuration>
  <!-- The HDFS nameservice is mycluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/cluster/hadoop-2.6.0/tmp</value>
  </property>
  <!-- ZooKeeper quorum -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>cluster5:2181,cluster6:2181,cluster7:2181</value>
  </property>
</configuration>
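The configuration above refers to nodes by hostname (cluster1 ... cluster7). A prerequisite implied by the planning table but not listed as a step is that every node can resolve those names. A minimal sketch of the required /etc/hosts entries, followed by a quick check that the new environment variables took effect (IPs taken from the planning table):

# append to /etc/hosts on every node
192.168.1.101 cluster1
192.168.1.102 cluster2
192.168.1.103 cluster3
192.168.1.104 cluster4
192.168.1.105 cluster5
192.168.1.106 cluster6
192.168.1.107 cluster7

# reload the profile and confirm hadoop is on the PATH
source /etc/profile
hadoop version   # should report Hadoop 2.6.0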
2.2.3 Edit hdfs-site.xml
<configuration>
  <!-- The HDFS nameservice is mycluster; it must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- mycluster has two NameNodes, nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>cluster1:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>cluster1:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>cluster2:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>cluster2:50070</value>
  </property>
  <!-- Where the NameNode edit log is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://cluster5:8485;cluster6:8485;cluster7:8485/mycluster</value>
  </property>
  <!-- Where each JournalNode keeps its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/cluster/hadoop-2.6.0/journal</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Proxy provider that clients use to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <!-- sshfence requires passwordless SSH -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- Timeout for the sshfence mechanism -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>

2.2.4 Edit mapred-site.xml
<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

2.2.5 Edit yarn-site.xml
<configuration>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <!-- Cluster id of the RM pair -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- Logical names of the two RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Hosts of the two RMs -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>cluster3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>cluster4</value>
  </property>
  <!-- ZooKeeper quorum used by the RMs -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>cluster5:2181,cluster6:2181,cluster7:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

2.2.6 Edit slaves (the slaves file lists the worker nodes; because HDFS is started on cluster1 and YARN on cluster3, the slaves file on cluster1 determines where the DataNodes run and the one on cluster3 determines where the NodeManagers run)
cluster5
cluster6
cluster7

2.2.7 Configure passwordless SSH
# First configure passwordless login from cluster1 to cluster2, cluster3, cluster4, cluster5, cluster6 and cluster7
# Generate a key pair on cluster1
ssh-keygen -t rsa
# Copy the public key to every node, including cluster1 itself
ssh-copy-id cluster1
ssh-copy-id cluster2
ssh-copy-id cluster3
ssh-copy-id cluster4
ssh-copy-id cluster5
ssh-copy-id cluster6
ssh-copy-id cluster7
# Configure passwordless login from cluster3 to cluster4, cluster5, cluster6 and cluster7
# Generate a key pair on cluster3
ssh-keygen -t rsa
# Copy the public key to the other nodes
ssh-copy-id cluster4
ssh-copy-id cluster5
ssh-copy-id cluster6
ssh-copy-id cluster7
# Note: the two NameNodes need passwordless SSH between each other; do not forget cluster2 -> cluster1
# Generate a key pair on cluster2
ssh-keygen -t rsa
ssh-copy-id cluster1

2.4 Copy the configured hadoop to the other nodes
scp -r /cluster/ cluster2:/
scp -r /cluster/ cluster3:/
scp -r /cluster/hadoop-2.6.0/ root@cluster4:/cluster/
scp -r /cluster/hadoop-2.6.0/ root@cluster5:/cluster/
scp -r /cluster/hadoop-2.6.0/ root@cluster6:/cluster/
scp -r /cluster/hadoop-2.6.0/ root@cluster7:/cluster/

### Note: follow the steps below strictly in order

2.5 Start the zookeeper cluster (on cluster5, cluster6 and cluster7)
cd /cluster/zookeeper-3.4.5/bin/
./zkServer.sh start
# Check the status: one leader, two followers (optional)
./zkServer.sh status

2.6 Start the journalnodes (on cluster5, cluster6 and cluster7)
cd /cluster/hadoop-2.6.0
sbin/hadoop-daemon.sh start journalnode
# Run jps to verify: cluster5, cluster6 and cluster7 each now have a JournalNode process

2.7 Format HDFS
# On cluster1 run:
hdfs namenode -format
# Formatting creates files under the directory set by hadoop.tmp.dir in core-site.xml (here /cluster/hadoop-2.6.0/tmp). Copy /cluster/hadoop-2.6.0/tmp to /cluster/hadoop-2.6.0/ on cluster2:
scp -r tmp/ cluster2:/cluster/hadoop-2.6.0/
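It is worth confirming that the format succeeded and that the same metadata now exists on cluster2 before moving on. A quick check, assuming the default layout the NameNode creates under hadoop.tmp.dir (the exact file names below are only illustrative):

# on cluster1, and after the scp also on cluster2
ls /cluster/hadoop-2.6.0/tmp/dfs/name/current
# expected files (names are illustrative):
#   fsimage_0000000000000000000
#   fsimage_0000000000000000000.md5
#   seen_txid
#   VERSION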
2.8 Format ZKFC (run on cluster1 only)
hdfs zkfc -formatZK

2.9 Start HDFS (on cluster1)
sbin/start-dfs.sh

Verify HDFS HA
First upload a file to HDFS:
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
Then kill the active NameNode:
kill -9 <pid of NN>
Open http://192.168.1.102:50070 in a browser:
NameNode 'cluster2:9000' (active)
The NameNode on cluster2 is now active. Run the listing again:
hadoop fs -ls /
-rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
Manually restart the NameNode that was killed:
sbin/hadoop-daemon.sh start namenode
Open http://192.168.1.101:50070 in a browser:
NameNode 'cluster1:9000' (standby)

2.10 Start YARN (##### Note #####: run start-yarn.sh on cluster3. The NameNode and ResourceManager are kept on separate machines for performance reasons: both consume a lot of resources, so they are split apart and have to be started on their own machines.)
sbin/start-yarn.sh
On cluster4 run:
yarn-daemon.sh start resourcemanager
Test: open http://cluster3:8088/cluster in a browser, then on cluster3 kill the ResourceManager process (e.g. kill -9 1964, where 1964 is the pid reported by jps) and open http://cluster4:8088/cluster. If that page comes up, automatic ResourceManager failover is working.

With this, hadoop-2.6.0 is fully configured. In a browser (which of the two NameNodes is active depends on whether a failover has occurred):
http://192.168.1.101:50070  ->  NameNode 'cluster1:9000' (active)
http://192.168.1.102:50070  ->  NameNode 'cluster2:9000' (standby)

Verify YARN by running the WordCount program from the demos shipped with hadoop:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /profile /out
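To inspect the WordCount result and to query the HA state from the command line instead of the browser, the standard Hadoop 2.x admin commands can be used. A minimal sketch, where nn1/nn2 and rm1/rm2 are the ids defined in hdfs-site.xml and yarn-site.xml:

# list and read the WordCount output
hadoop fs -ls /out
hadoop fs -cat /out/part-r-00000 | head

# query the HA state of the two NameNodes
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# query the HA state of the two ResourceManagers
# (a ResourceManager that was killed and not restarted will report a connection error)
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2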
