Setting up a Hadoop 2.7.4 HA cluster on Ubuntu 16.04
Setting up a highly available Hadoop cluster mainly comes down to two configuration files: 1. hdfs-site.xml and 2. core-site.xml. I have five machines here:
node1: 192.168.0.172, node2: 192.168.0.104, node3: 192.168.0.177, node4: 192.168.0.158, node5: 192.168.0.136
ZooKeeper runs on all five machines; node1 and node2 are the two NameNodes, and the other three act as DataNodes and JournalNodes.
1. First, get the /etc/hosts file right (the same entries go on every node):
127.0.0.1 localhost localhost4 localhost4.localdomain4
::1 localhost localhost6 localhost6.localdomain6
192.168.0.172 node1
192.168.0.104 node2
192.168.0.177 node3
192.168.0.158 node4
192.168.0.136 node5
2. Configure the environment variables in /etc/profile:
export JAVA_HOME=/opt/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/opt/hadoop-2.7.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
3. Edit the etc/hadoop/hadoop-env.sh file:
export JAVA_HOME=/opt/jdk
4. Set up passwordless SSH login; the main task is adding the public keys of node1 and node2 to the other machines:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Note: run these commands as root, from /root.
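To actually push node1's and node2's public keys onto the other machines, a loop like the following does the job (a minimal sketch, not from the original write-up; it assumes OpenSSH's ssh-copy-id is available and that root password logins are still enabled at this point):
for h in node2 node3 node4 node5; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h    # appends the key to the remote authorized_keys
done
Repeat the loop from node2 (over node1, node3, node4, node5), then verify with ssh node3 hostname, which should print node3 without asking for a password.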
5. Next come the two most important configuration files (hdfs-site.xml and core-site.xml):
1)hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node3:8485;node4:8485;node5:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/to/journal/node/local/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
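Once this file is synced to all five machines, a quick sanity check that Hadoop actually picks up the HA settings can be done with hdfs getconf (assuming HADOOP_CONF_DIR from step 2 points at this directory):
hdfs getconf -confKey dfs.nameservices    # should print: mycluster
hdfs getconf -namenodes                   # should print: node1 node2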
2)core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.7.4/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181,node4:2181,node5:2181</value>
  </property>
</configuration>
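Hadoop will normally create hadoop.tmp.dir on its own, but pre-creating it on every node avoids permission surprises later; a convenience one-liner that leans on the passwordless SSH from step 4 (my addition, not one of the original steps):
for h in node1 node2 node3 node4 node5; do ssh root@$h "mkdir -p /opt/hadoop-2.7.4/tmp"; done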
Also edit the slaves file and add node3, node4, and node5, i.e., the DataNode hostnames:
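The resulting etc/hadoop/slaves file simply lists one DataNode hostname per line:
node3
node4
node5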
6. Configure zookeeper-3.6.4 (edit conf/zoo.cfg):
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zoo
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
server.4=node4:2888:3888
server.5=node5:2888:3888
Note: the rest of the file can stay at its defaults; just append the server.x lines at the end. Then create a file named myid under the dataDir path (/opt/zoo here) and write the machine's own ID (the x in server.x) into it. Finally, run ./zkServer.sh start from the bin directory, and check the log file to confirm it started successfully!
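Concretely, on node1 this boils down to the following (the install path /opt/zookeeper is an assumption; adjust it to wherever ZooKeeper was unpacked):
echo 1 > /opt/zoo/myid                    # node2 writes 2, node3 writes 3, and so on
/opt/zookeeper/bin/zkServer.sh start
/opt/zookeeper/bin/zkServer.sh status     # shows Mode: leader or follower once a quorum is up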
7. Start the JournalNodes (on node3 through node5):
hadoop-daemon.sh start journalnode
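Running that by hand on three machines is tedious; a loop from node1 works as well. Absolute paths are used because a non-interactive SSH shell does not source /etc/profile (a convenience sketch, not part of the original steps):
for h in node3 node4 node5; do
  ssh root@$h "/opt/hadoop-2.7.4/sbin/hadoop-daemon.sh start journalnode"
done
Afterwards, jps on each of node3 to node5 should list a JournalNode process.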
8. Format ZooKeeper (this initializes the HA failover state in the ensemble):
hdfs zkfc -formatZK
9. Finally, format the NameNode:
hdfs namenode -format
10. Copy the files generated under tmp by the format to the other NameNode, then run start-dfs.sh. With that, the HA cluster is basically in place.
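Spelled out, step 10 might look like this on node1 (an illustrative sketch; hdfs namenode -bootstrapStandby, run on node2, is the built-in alternative to copying tmp by hand):
scp -r /opt/hadoop-2.7.4/tmp root@node2:/opt/hadoop-2.7.4/
start-dfs.sh
hdfs haadmin -getServiceState nn1    # one NameNode should report active
hdfs haadmin -getServiceState nn2    # and the other standby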
11. If you find that some machine is missing a daemon, you can start it individually with the following command:
hadoop-daemon.sh start namenode    # or: hadoop-daemon.sh start datanode
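To see at a glance which daemons a machine is currently running, jps is the quickest check; on node1, for example, a healthy cluster would typically show something like the following (the exact list varies per node):
jps
# NameNode
# DFSZKFailoverController
# QuorumPeerMain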
Note: the ZKFC log may contain a warning like this: PATH=$PATH:/sbin:/usr/sbin fuser -v -k -n tcp 8090 via ssh: bash: fuser: command not found. The fuser tool, which the sshfence fencing method relies on, ships in the psmisc package, so install it on the NameNode machines:
apt-get install psmisc    # on Ubuntu 16.04; use yum install psmisc on CentOS
With that, a highly available cluster is fully set up.