Hadoop 2.7.4 + ZooKeeper 3.4.10 + HBase 1.2.6 Fully Distributed HA Cluster Setup
A previous article covered setting up a Hadoop 2.7.4, ZooKeeper 3.4.10, and HBase 1.2.6 cluster. That cluster has only a single master acting as the NameNode, so if the master goes down, the whole cluster is paralyzed. To avoid this we add a backup master, i.e. run two NameNodes: if the master fails, the backup master immediately takes over its work and keeps the cluster running. This is an HA (High Availability) cluster.
Compared with the earlier setup, the HA cluster adds a standby NameNode, JournalNodes, and ZKFC. Briefly:
(1) The JournalNodes relay communication between the active NameNode and the standby NameNode to keep their data in sync.
The two NameNodes are deployed on two different machines, one in the active state and one in the standby state. Both NameNodes communicate with a group of mutually independent processes called JournalNodes (JNs). When the active NameNode updates the namespace, it sends the edit log records to a majority of the JNs. The standby NameNode reads these edits from the JNs and continuously watches for changes to the log, applying each change to its own namespace. When the active NameNode fails, the standby makes sure it has read all remaining edits from the JNs before promoting itself to active, so that at the moment of failure the standby's namespace is fully synchronized with the active's.
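The "majority of the JNs" rule above can be sketched in a few lines. This is a toy illustration of the quorum arithmetic only, not Hadoop's actual QJM implementation:

```python
def majority(n):
    """Smallest number of JournalNode acks that forms a majority of n."""
    return n // 2 + 1

def edit_committed(acks, total_jns):
    """An edit batch is durable once a majority of the JNs acknowledged it."""
    return acks >= majority(total_jns)

# With the 3 JournalNodes used in this cluster (node02, node03, node04):
print(majority(3))           # 2
print(edit_committed(2, 3))  # True  -> durable even if one JN is down
print(edit_committed(1, 3))  # False -> a single ack is not enough
```

This is why an odd number of JournalNodes (three here) is used: the cluster tolerates the loss of one JN while still reaching a majority of two.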
(2) ZKFC is a ZooKeeper client that monitors and manages the state of the NameNodes. A ZKFC process runs on each NameNode machine and has three main responsibilities. First, health monitoring: ZKFC periodically pings its local NameNode and checks the returned status; if the NameNode is dead or unhealthy, ZKFC marks it as unhealthy. Second, ZooKeeper session management: while the local NameNode is healthy, ZKFC holds open a ZooKeeper session, and if the local NameNode is active, ZKFC also holds an exclusive "lock" znode; if the session expires, the znode backing that lock is deleted. Third, election: when one NameNode in the cluster goes down, ZooKeeper automatically activates the other.
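The exclusive-lock election can be mimicked with a toy in-memory "znode". This is a sketch of the idea only; the real ZKFC uses ZooKeeper ephemeral nodes plus fencing:

```python
class TinyLockZnode:
    """Toy stand-in for the exclusive-lock znode held by the active NameNode."""
    def __init__(self):
        self.holder = None

    def try_acquire(self, nn_id):
        if self.holder is None:   # znode absent: creating it wins the election
            self.holder = nn_id
            return True
        return False              # znode already exists: stay standby

    def session_expired(self, nn_id):
        if self.holder == nn_id:  # session gone -> the lock znode is deleted
            self.holder = None

lock = TinyLockZnode()
print(lock.try_acquire("nn1"))  # True  -> nn1 becomes active
print(lock.try_acquire("nn5"))  # False -> nn5 stays standby
lock.session_expired("nn1")     # nn1's ZooKeeper session times out
print(lock.try_acquire("nn5"))  # True  -> nn5 is promoted to active
```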
Since the earlier article already covered the basics, only the Hadoop and HBase configuration files that change, and the steps that differ, are given here.
1. Distribution of services across the HA cluster nodes
| Node   | NameNode    | DataNode | Zookeeper | DFSZKFC | JournalNode | HMaster    | HRegionServer |
|--------|-------------|----------|-----------|---------|-------------|------------|---------------|
| node01 | 1           |          | 1         | 1       |             | 1          |               |
| node02 |             | 1        | 1         |         | 1           |            | 1             |
| node03 |             | 1        | 1         |         | 1           |            | 1             |
| node04 |             | 1        |           |         | 1           |            | 1             |
| node05 | 1 (standby) |          |           | 1       |             | 1 (backup) |               |
2. Hadoop configuration
(1) Editing the XML files
Change into the configuration directory (cd /opt/hadoop/hadoop-2.7.4/etc/hadoop) and edit the files there.
core-site.xml. The defaults file core-default.xml lives inside the hadoop-common jar.
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value><!-- set this to the value of dfs.nameservices -->
    <description>NameNode URI</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value><!-- default is 4096 bytes -->
    <description>Size of read/write buffer used in SequenceFiles</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data</value><!-- recommended to change; data under the default path is lost after a cluster restart -->
    <description>A base for other temporary directories</description>
  </property>
</configuration>
hdfs-site.xml. The defaults file hdfs-default.xml lives inside the hadoop-hdfs jar.
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name><!-- may be omitted; defaults to file:///${hadoop.tmp.dir}/dfs/name -->
    <value>file:///data/dfs/name</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently</description>
  </property>
  <!-- HA mode needs no SecondaryNameNode, so the dfs.namenode.secondary.http-address property from the non-HA setup is removed -->
  <property>
    <name>dfs.replication</name>
    <value>2</value><!-- default is 3; must not exceed the number of DataNodes -->
  </property>
  <property>
    <name>dfs.datanode.data.dir</name><!-- may be omitted; defaults to file:///${hadoop.tmp.dir}/dfs/data -->
    <value>file:///data/dfs/data</value>
    <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks</description>
  </property>
  <!-- HA-related settings below -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value><!-- logical service name; choose your own -->
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn5</value><!-- IDs of the NameNodes in the nameservice; choose your own -->
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node01:8020</value><!-- RPC address of the master -->
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn5</name>
    <value>node05:8020</value><!-- RPC address of the backup master -->
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node01:50070</value><!-- HTTP address of the master -->
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn5</name>
    <value>node05:50070</value><!-- HTTP address of the backup master -->
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name><!-- the JournalNodes -->
    <value>qjournal://node02:8485;node03:8485;node04:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name><!-- class responsible for failover -->
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value><!-- fence the old active NameNode over ssh -->
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value><!-- private key used for the ssh fencing connection -->
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value><!-- true: fail over to the backup master automatically when the master fails; enables ZKFC -->
  </property>
</configuration>
The other Hadoop configuration files are unchanged.
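As a cross-check of the hdfs-site.xml above: the per-NameNode property names are simply the nameservice and NameNode IDs spliced into fixed templates. A small sketch using the names from this cluster:

```python
# Rebuild the HA property keys from dfs.nameservices and dfs.ha.namenodes.
nameservice = "mycluster"
namenodes = {"nn1": "node01", "nn5": "node05"}  # ID -> host, as configured above

props = {
    "dfs.nameservices": nameservice,
    f"dfs.ha.namenodes.{nameservice}": ",".join(namenodes),
}
for nn_id, host in namenodes.items():
    props[f"dfs.namenode.rpc-address.{nameservice}.{nn_id}"] = f"{host}:8020"
    props[f"dfs.namenode.http-address.{nameservice}.{nn_id}"] = f"{host}:50070"

print(props[f"dfs.ha.namenodes.{nameservice}"])              # nn1,nn5
print(props[f"dfs.namenode.rpc-address.{nameservice}.nn5"])  # node05:8020
```

Renaming the nameservice therefore means renaming every dotted suffix consistently, including dfs.client.failover.proxy.provider.mycluster.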
(2) Starting the Hadoop cluster
1. On node01, node02, and node03, run zkServer.sh start to bring up the ZooKeeper ensemble.
2. On node02, node03, and node04, run hadoop-daemon.sh start journalnode to start the JournalNodes.
node02
[root@node02 ~]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-journalnode-node02.out
node03
[root@node03 ~]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-journalnode-node03.out
node04
[root@node04 ~]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-journalnode-node04.out
3. On node01, run hdfs zkfc -formatZK to format the ZKFC state in ZooKeeper. Part of the log:
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/hadoop/hadoop-2.7.4/lib/native
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.el6.x86_64
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:user.name=root
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Client environment:user.dir=/data/zookeeper
17/09/23 04:01:34 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=node01:2181,node02:2181,node03:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@1dde4cb2
17/09/23 04:01:35 INFO zookeeper.ClientCnxn: Opening socket connection to server node01/192.168.1.71:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/23 04:01:35 INFO zookeeper.ClientCnxn: Socket connection established to node01/192.168.1.71:2181, initiating session
17/09/23 04:01:35 INFO zookeeper.ClientCnxn: Session establishment complete on server node01/192.168.1.71:2181, sessionid = 0x15eada488810000, negotiated timeout = 5000
17/09/23 04:01:35 INFO ha.ActiveStandbyElector: Session connected.
17/09/23 04:01:35 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
17/09/23 04:01:35 INFO zookeeper.ZooKeeper: Session: 0x15eada488810000 closed
17/09/23 04:01:35 INFO zookeeper.ClientCnxn: EventThread shut down
Now, on any of node01, node02, or node03 (the nodes running ZooKeeper), run zkCli.sh to connect with the ZooKeeper client.
Connecting to localhost:2181
2017-09-23 04:08:11,343 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2017-09-23 04:08:11,350 [myid:] - INFO [main:Environment@100] - Client environment:host.name=node02
2017-09-23 04:08:11,351 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_25
2017-09-23 04:08:11,355 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-09-23 04:08:11,356 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/local/jdk1.7/jre
2017-09-23 04:08:11,357 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/opt/hadoop/zookeeper-3.4.10/bin/../build/classes:/opt/hadoop/zookeeper-3.4.10/bin/../build/lib/*.jar:/opt/hadoop/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/hadoop/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/opt/hadoop/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/opt/hadoop/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/opt/hadoop/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/opt/hadoop/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/opt/hadoop/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/opt/hadoop/zookeeper-3.4.10/bin/../conf:.:/usr/local/jdk1.7/lib/dt.jar:/usr/local/jdk1.7/lib/tools.jar
2017-09-23 04:08:11,358 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-09-23 04:08:11,358 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-09-23 04:08:11,358 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2017-09-23 04:08:11,359 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2017-09-23 04:08:11,359 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2017-09-23 04:08:11,359 [myid:] - INFO [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
2017-09-23 04:08:11,359 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2017-09-23 04:08:11,359 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2017-09-23 04:08:11,359 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/data/zookeeper
2017-09-23 04:08:11,363 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@69d0a921
Welcome to ZooKeeper!
2017-09-23 04:08:11,408 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
[zk: localhost:2181(CONNECTING) 0] 2017-09-23 04:08:11,556 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2017-09-23 04:08:11,570 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x25eada489620002, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
Press Enter to get the ZooKeeper client prompt, then run ls / to list everything at the root:
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, hadoop-ha]
There are two znodes, zookeeper and hadoop-ha. Run ls /hadoop-ha:
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[mycluster]
The nameservice name mycluster has been registered in ZooKeeper. Type quit to exit:
[zk: localhost:2181(CONNECTED) 4] quit
Quitting...
2017-09-23 04:11:05,450 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x35eada4b87c0000 closed
2017-09-23 04:11:05,454 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x35eada4b87c0000
4. On node01, run hdfs namenode -format to format the NameNode. Part of the log:
17/09/23 04:17:52 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/09/23 04:17:52 INFO util.GSet: VM type = 64-bit
17/09/23 04:17:52 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/09/23 04:17:52 INFO util.GSet: capacity = 2^15 = 32768 entries
Re-format filesystem in QJM to [192.168.1.72:8485, 192.168.1.73:8485, 192.168.1.74:8485] ? (Y or N) Y
17/09/23 04:18:22 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1931614468-192.168.1.71-1506154702817
17/09/23 04:18:22 INFO common.Storage: Storage directory /data/dfs/name has been successfully formatted.
17/09/23 04:18:23 INFO namenode.FSImageFormatProtobuf: Saving image file /data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/09/23 04:18:23 INFO namenode.FSImageFormatProtobuf: Image file /data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
17/09/23 04:18:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/09/23 04:18:23 INFO util.ExitUtil: Exiting with status 0
17/09/23 04:18:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node01/192.168.1.71
************************************************************/
When formatting finishes, run hadoop-daemon.sh start namenode on node01 to start the NameNode:
[root@node01 ~]# hadoop-daemon.sh start namenode
starting namenode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-namenode-node01.out
5. On node05, run hdfs namenode -bootstrapStandby to copy the metadata from node01 over to the standby node05. As the standby NameNode, node05 must stay consistent with the active NameNode, so it must not be formatted again.
17/09/23 04:28:31 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/09/23 04:28:31 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
=====================================================
About to bootstrap Standby ID nn5 from:
           Nameservice ID: mycluster
        Other Namenode ID: nn1
  Other NN's HTTP address: http://node01:50070
  Other NN's IPC  address: node01/192.168.1.71:8020
             Namespace ID: 392086364
            Block pool ID: BP-1931614468-192.168.1.71-1506154702817
               Cluster ID: CID-01a7bb63-83af-4130-9ffa-f5f6c2ffd9b9
           Layout version: -63
       isUpgradeFinalized: true
=====================================================
17/09/23 04:28:32 INFO common.Storage: Storage directory /data/dfs/name has been successfully formatted.
17/09/23 04:28:33 INFO namenode.TransferFsImage: Opening connection to http://node01:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:392086364:0:CID-01a7bb63-83af-4130-9ffa-f5f6c2ffd9b9
17/09/23 04:28:33 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
17/09/23 04:28:33 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
17/09/23 04:28:33 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 321 bytes.
17/09/23 04:28:33 INFO util.ExitUtil: Exiting with status 0
17/09/23 04:28:33 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node05/192.168.1.75
************************************************************/
6. On node01, run start-all.sh to start the Hadoop cluster.
[root@node01 ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [node01 node05]
node05: starting namenode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-namenode-node05.out
node01: namenode running as process 4473. Stop it first.
node04: starting datanode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-datanode-node04.out
node02: starting datanode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-datanode-node02.out
node03: starting datanode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-datanode-node03.out
Starting journal nodes [node02 node03 node04]
node04: journalnode running as process 2183. Stop it first.
node02: journalnode running as process 2402. Stop it first.
node03: journalnode running as process 2321. Stop it first.
Starting ZK Failover Controllers on NN hosts [node01 node05]
node05: starting zkfc, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-zkfc-node05.out
node01: starting zkfc, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-zkfc-node01.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/hadoop-2.7.4/logs/yarn-root-resourcemanager-node01.out
node04: starting nodemanager, logging to /opt/hadoop/hadoop-2.7.4/logs/yarn-root-nodemanager-node04.out
node02: starting nodemanager, logging to /opt/hadoop/hadoop-2.7.4/logs/yarn-root-nodemanager-node02.out
node03: starting nodemanager, logging to /opt/hadoop/hadoop-2.7.4/logs/yarn-root-nodemanager-node03.out
7. Check the cluster status in a browser.
Open http://node01:50070; node01 is in the active state.
Open http://node05:50070; node05 is in the standby state.
Now simulate a failure on node01 by killing its NameNode process:
[root@node01 conf]# jps
4946 DFSZKFailoverController
11442 NameNode
11651 Jps
6195 HMaster
5030 ResourceManager
4299 QuorumPeerMain
[root@node01 conf]# kill -9 11442
http://node01:50070 is now unreachable, while http://node05:50070 shows that node05 has become active.
Then restart the NameNode on node01:
[root@node01 conf]# hadoop-daemon.sh start namenode
starting namenode, logging to /opt/hadoop/hadoop-2.7.4/logs/hadoop-root-namenode-node01.out
Opening http://node01:50070 shows that node01 is now in the standby state, not the active state it held before.
The two NameNodes thus switch over automatically when a fault occurs.
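The behaviour just demonstrated can be replayed as a toy state model (an illustration only, not ZKFC's real logic): killing the active promotes the standby, and the restarted node rejoins as standby.

```python
def failover(states, failed_nn):
    """Mark failed_nn as down; if no active remains, promote one standby."""
    states = dict(states)
    states[failed_nn] = "down"
    if "active" not in states.values():
        for nn, state in states.items():
            if state == "standby":
                states[nn] = "active"
                break
    return states

cluster = {"nn1": "active", "nn5": "standby"}
cluster = failover(cluster, "nn1")  # kill -9 the NameNode on node01
print(cluster)                      # {'nn1': 'down', 'nn5': 'active'}
cluster["nn1"] = "standby"          # restart node01: it rejoins as standby
print(cluster)                      # {'nn1': 'standby', 'nn5': 'active'}
```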
3. HBase configuration
(1) Edit hbase-site.xml. The defaults file hbase-default.xml lives inside hbase-common-1.2.6.jar.
<configuration>
  <property>
    <name>hbase.rootdir</name><!-- where HBase stores its data; default is ${hbase.tmp.dir}/hbase -->
    <value>hdfs://mycluster/data/hbase_db</value><!-- this must change: mycluster is the value of dfs.nameservices in hdfs-site.xml -->
  </property>
  <property>
    <name>hbase.cluster.distributed</name><!-- whether the deployment is distributed -->
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name><!-- list of ZooKeeper nodes -->
    <value>node01,node02,node03</value>
  </property>
</configuration>
The other HBase configuration files stay unchanged.
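Note that hbase.rootdir points at the logical nameservice, not at a single NameNode's host:port, which is what lets HBase keep working across a NameNode failover. A quick sketch of how such a URI decomposes:

```python
from urllib.parse import urlparse

rootdir = "hdfs://mycluster/data/hbase_db"  # value of hbase.rootdir above
u = urlparse(rootdir)
print(u.scheme)  # hdfs
print(u.netloc)  # mycluster -> matches dfs.nameservices, no fixed port
print(u.path)    # /data/hbase_db
```

This is also why hdfs-site.xml must be available in HBase's conf directory: without it the HBase client cannot resolve the logical name mycluster to the two NameNode addresses.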
Do not forget to copy the modified hdfs-site.xml from Hadoop into HBase's conf directory again.
(2) Start HBase: run start-hbase.sh on node01.
[root@node01 ~]# start-hbase.sh
starting master, logging to /opt/hadoop/hbase-1.2.6/logs/hbase-root-master-node01.out
node03: starting regionserver, logging to /opt/hadoop/hbase-1.2.6/bin/../logs/hbase-root-regionserver-node03.out
node02: starting regionserver, logging to /opt/hadoop/hbase-1.2.6/bin/../logs/hbase-root-regionserver-node02.out
node04: starting regionserver, logging to /opt/hadoop/hbase-1.2.6/bin/../logs/hbase-root-regionserver-node04.out
node05: starting master, logging to /opt/hadoop/hbase-1.2.6/bin/../logs/hbase-root-master-node05.out
(3) Check the HBase cluster status in a browser.
Open http://node01:16010; node01 is the master and node05 the backup master.
Open http://node05:16010; node05 is shown as a backup master and node01 as the current active master.
If you kill the HMaster on node01, node05 changes from backup master to active master; restart node01 and it becomes the backup master.
The procedure is the same as for the Hadoop cluster, so it is not shown in detail here.
4. Run jps to check the processes
node01
[root@node01 ~]# jps
6304 Jps
4946 DFSZKFailoverController
6195 HMaster
5030 ResourceManager
4473 NameNode
4299 QuorumPeerMain
node02
[root@node02 ~]# jps
2402 JournalNode
2611 NodeManager
3800 Jps
2491 DataNode
2892 HRegionServer
2223 QuorumPeerMain
node03
[root@node03 ~]# jps
2321 JournalNode
2213 QuorumPeerMain
2390 DataNode
2776 HRegionServer
3610 Jps
2510 NodeManager
node04
[root@node04 ~]# jps
3588 Jps
2262 DataNode
2647 HRegionServer
2183 JournalNode
2382 NodeManager
node05
[root@node05 ~]# jps
5680 Jps
4211 NameNode
4684 HMaster
4303 DFSZKFailoverController