CentOS 7.3 + HBase 1.3.2 Cluster Installation
Source: Internet · Editor: 程序博客网 · Time: 2024/04/30 00:44
2017-07-26 14:08:38,056 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@5e8f9e2d
2017-07-26 14:08:38,099 INFO [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2017-07-26 14:08:38,109 WARN [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
Solution:
The log line "server localhost/127.0.0.1:2181" shows the client is trying to reach ZooKeeper on localhost. Edit conf/regionservers and replace localhost with the actual server hostname.
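A minimal sketch of that fix, done non-interactively. The paths here are a scratch copy for illustration; on the cluster the file is /usr/src/hbase/conf/regionservers.

```shell
# Demo of the regionservers fix: replace the default "localhost" entry
# with the real hostnames. This writes a scratch copy, not the live conf.
conf=./hbase-conf-demo
mkdir -p "$conf"
echo localhost > "$conf/regionservers"                    # the shipped default
printf '%s\n' centos128 centos129 centos130 > "$conf/regionservers"
cat "$conf/regionservers"
```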
util.FSUtils: Waiting for dfs to exit safe mode..
Solution:
[coder@h1 hadoop-0.20.2]$ bin/hadoop dfsadmin -safemode leave
Safe mode is OFF
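HDFS normally leaves safe mode on its own once enough blocks have reported in, so instead of forcing it off you can simply wait. A hedged sketch of a wait loop — the real check is `hdfs dfsadmin -safemode get`; here it is stubbed out so the loop is runnable without a cluster:

```shell
# Wait until HDFS reports safe mode OFF before starting HBase.
# check() stubs the real command so this sketch runs anywhere;
# on a real node use: check() { hdfs dfsadmin -safemode get; }
check() { echo "Safe mode is OFF"; }
until check | grep -q "Safe mode is OFF"; do
  sleep 2
done
echo "safe mode cleared"
```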
2017-07-01 12:54:17,631 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data/hadoop/name should be specified as a URI in configuration files. Please update hdfs configuration.
2017-07-01 12:54:17,632 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data/hadoop/name should be specified as a URI in configuration files. Please update hdfs configuration.
2017-07-01 12:54:17,632 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-07-01 12:54:17,632 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-07-01 12:54:17,637 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data/hadoop/name should be specified as a URI in configuration files. Please update hdfs configuration.
2017-07-01 12:54:17,637 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data/hadoop/name should be specified as a URI in configuration files. Please update hdfs configuration.
Change:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop/name</value>
</property>
to:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/data/hadoop/name</value>
</property>
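The edit can be scripted. A sketch against a throwaway copy of the file (not the live hdfs-site.xml), using the path from the warnings above:

```shell
# Rewrite the bare path into a file: URI in a demo copy of hdfs-site.xml.
cat > hdfs-site-demo.xml <<'EOF'
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop/name</value>
</property>
EOF
sed -i 's#<value>/data/hadoop/name</value>#<value>file:/data/hadoop/name</value>#' hdfs-site-demo.xml
grep '<value>' hdfs-site-demo.xml
```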
2017-07-27 14:00:47,043 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: centos128/192.168.44.128:9000
2017-07-27 14:00:53,052 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:00:54,058 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:00:55,060 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:00:56,063 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:00:57,066 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:00:58,072 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:00:59,074 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:01:00,078 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:01:01,082 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:01:02,084 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: centos128/192.168.44.128:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-07-27 14:01:02,086 WARN org.apache.hadoop.ipc.Client: Failed to connect to server: centos128/192.168.44.128:9000: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
Solution:
Check whether HDFS is reachable at all:
hdfs dfsadmin -report
17/07/27 14:10:03 WARN ipc.Client: Failed to connect to server: centos128/192.168.44.128:9000: try once and fail.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
If the NameNode has never been formatted (or its metadata is broken), format it. This wipes HDFS metadata, so only do it on a fresh cluster:
hadoop namenode -format
Afterwards clear each DataNode's data directory (the dfs.datanode.data.dir path) so its cluster ID matches the new NameNode; DataNodes themselves have no -format option.
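"Connection refused" on port 9000 means nothing is listening on the NameNode RPC port. A quick probe using bash's /dev/tcp, with host and port taken from the logs above (127.0.0.1 substituted here so the sketch runs anywhere; on the cluster use host=centos128):

```shell
# Probe the NameNode RPC port before blaming HBase.
host=127.0.0.1     # on the cluster: host=centos128
port=9000
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "port $port open on $host"
else
  echo "port $port closed on $host - NameNode not listening; check fs.defaultFS and the NameNode logs"
fi
```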
CentOS 7.3 + HBase 1.3.2 cluster installation
Servers:
192.168.44.128 centos128
192.168.44.129 centos129
192.168.44.130 centos130
centos128 is the master (management) node; it runs HMaster.
1. Server setup
1) /etc/hosts
[root@centos128 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.44.128 centos128
192.168.44.129 centos129
192.168.44.130 centos130
Set up /etc/hosts the same way on the other servers.
2) Passwordless SSH between the nodes
ssh-keygen -t rsa
ssh-keygen -t dsa
cd ~/.ssh
ssh-copy-id -i id_rsa.pub centos128
ssh-copy-id -i id_dsa.pub centos128
ssh-copy-id -i id_rsa.pub centos129
ssh-copy-id -i id_dsa.pub centos129
ssh-copy-id -i id_rsa.pub centos130
ssh-copy-id -i id_dsa.pub centos130
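The six ssh-copy-id calls above can be collapsed into a loop. Shown in echo (dry-run) mode so it is safe to run anywhere; drop the echo on the real cluster:

```shell
# Distribute both public keys to every node.
# "echo" makes this a dry run; remove it to actually copy the keys.
for h in centos128 centos129 centos130; do
  for key in id_rsa.pub id_dsa.pub; do
    echo ssh-copy-id -i "$HOME/.ssh/$key" "$h"
  done
done
```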
3) JDK setup
Install JDK 1.8:
tar zxvf jdk-8u65-linux-x64.tar.gz
mv jdk1.8.0_65 /usr/src/jdk
Add the following to /etc/profile:
JAVA_HOME=/usr/src/jdk/
PATH=$JAVA_HOME/bin:/usr/local/xtrabackup/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
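After sourcing /etc/profile, a quick self-check that the variables took effect. A sketch that simulates the additions above (the JDK path is the one from this guide):

```shell
# Simulate the /etc/profile additions and verify JAVA_HOME/bin landed on PATH.
export JAVA_HOME=/usr/src/jdk
export PATH="$JAVA_HOME/bin:$PATH"
export CLASSPATH=".:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar"
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JAVA_HOME/bin is on PATH" ;;
  *)                    echo "PATH is missing JAVA_HOME/bin" ;;
esac
```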
2. Install and configure HBase
1) Unpack the HBase tarball:
tar xvf hbase-1.3.1-bin.tar.gz
mv hbase-1.3.1 /usr/src/hbase
2) Add the following to hbase-env.sh:
export JAVA_HOME=/usr/src/jdk
3) Symlink Hadoop's /usr/src/hadoop/etc/hadoop/hdfs-site.xml into /usr/src/hbase/conf so HBase picks up the HDFS client settings:
ln -s /usr/src/hadoop/etc/hadoop/hdfs-site.xml /usr/src/hbase/conf/hdfs-site.xml
4) Add to conf/regionservers:
centos128
centos129
centos130
These hosts will each run a RegionServer.
5) Add to hbase-site.xml:
<configuration>
  <property>
    <!-- HBase data directory on HDFS -->
    <name>hbase.rootdir</name>
    <value>hdfs://centos128:9000/hbase</value>
  </property>
  <property>
    <!-- run in fully distributed (cluster) mode -->
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <!-- hosts running the HBase-managed ZooKeeper quorum -->
    <name>hbase.zookeeper.quorum</name>
    <value>centos128,centos129,centos130</value>
  </property>
  <property>
    <!-- data directory for the ZooKeeper instances -->
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper</value>
  </property>
</configuration>
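Note that hbase-site.xml must stay well-formed XML (annotations belong in `<!-- -->` comments, never as bare `#` text after a value), or HBase will fail to parse it at startup. A sketch that validates a demo copy with python3 (assumed available; this writes a scratch file, not the live config):

```shell
# Write a demo copy of the config and confirm it parses as XML.
cat > hbase-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://centos128:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF
python3 - <<'EOF'
import xml.etree.ElementTree as ET
root = ET.parse("hbase-site-demo.xml").getroot()
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}
print(props["hbase.rootdir"])
EOF
```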
6) Pack up HBase, copy it to the data nodes, and unpack it there:
tar cvf hbase.tar hbase/*
scp hbase.tar centos129:/usr/src
scp hbase.tar centos130:/usr/src
ssh centos129
cd /usr/src
tar xvf hbase.tar
ssh centos130
cd /usr/src
tar xvf hbase.tar
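The copy-and-unpack steps above can be done non-interactively in one loop instead of logging in to each node. Shown in echo (dry-run) mode so it is safe to run anywhere; drop the echo on the real cluster:

```shell
# Copy the tarball to each data node and unpack it remotely in one shot.
# "echo" makes this a dry run; remove it to execute for real.
for h in centos129 centos130; do
  echo scp /usr/src/hbase.tar "$h:/usr/src/"
  echo ssh "$h" "tar xf /usr/src/hbase.tar -C /usr/src"
done
```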
7) Start the HBase cluster on centos128 and test it:
/usr/src/hbase/bin/start-hbase.sh
Log in to the HBase shell:
[root@centos128 src]# /usr/src/hbase/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/src/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.1, r930b9a55528fe45d8edce7af42fef2d35e77677a, Thu Apr 6 19:36:54 PDT 2017
hbase(main):001:0> list
TABLE
test1
test2
2 row(s) in 0.4170 seconds
=> ["test1", "test2"]
Log in to the other servers and run the same check; if it succeeds everywhere, the HBase cluster is configured correctly.
8) Check the daemons on each node
Run jps on centos128:
[root@centos128 ~]# jps
2448 NameNode
4611 HQuorumPeer
4964 Jps
2886 ResourceManager
4681 HMaster
2732 SecondaryNameNode
NameNode, SecondaryNameNode, and ResourceManager are Hadoop processes; HMaster and HQuorumPeer are HBase processes.
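This check can be scripted. A sketch that looks for the expected master-node daemons in jps output; here it is fed the sample above so it runs anywhere, and on the real master you would set jps_out=$(jps):

```shell
# Verify the expected master-node daemons show up in jps output.
jps_out="2448 NameNode
4611 HQuorumPeer
2886 ResourceManager
4681 HMaster
2732 SecondaryNameNode"            # on the real master: jps_out=$(jps)
for p in NameNode SecondaryNameNode ResourceManager HMaster HQuorumPeer; do
  if printf '%s\n' "$jps_out" | grep -qw "$p"; then
    echo "$p OK"
  else
    echo "$p MISSING"
  fi
done
```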
[root@centos129 ~]# jps
3360 HRegionServer
2293 DataNode
3562 Jps
3276 HQuorumPeer
2414 NodeManager
DataNode and NodeManager are Hadoop processes; HRegionServer and HQuorumPeer are HBase processes.
[root@centos130 bin]# jps
3250 HQuorumPeer
2436 NodeManager
3510 Jps
2315 DataNode
3341 HRegionServer
DataNode and NodeManager are Hadoop processes; HRegionServer and HQuorumPeer are HBase processes.
Open http://192.168.44.128:16010/master-status to view the HBase cluster status.
9) Stop HBase
/usr/src/hbase/bin/stop-hbase.sh
Once it finishes, all HBase-related processes on every server are stopped.
3. Notes
If /etc/hosts maps a cluster hostname to 127.0.0.1 (an entry like "127.0.0.1 centos128"), HBase keeps retrying in a loop and startup or shutdown fails; remove any such entry.
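That pitfall can be detected with grep. A sketch against a demo hosts file; on a real node point it at /etc/hosts and use $(hostname):

```shell
# Flag the problematic "127.0.0.1 <cluster-hostname>" mapping described above.
cat > hosts-demo <<'EOF'
127.0.0.1 localhost centos128
192.168.44.128 centos128
EOF
hn=centos128                       # on a real node: hn=$(hostname)
if grep -E '^127\.0\.0\.1\b' hosts-demo | grep -qw "$hn"; then
  echo "WARNING: $hn maps to 127.0.0.1 - HBase start/stop may hang"
else
  echo "hosts file looks fine"
fi
```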
To run standby masters, list them in /usr/src/hbase/conf/backup-masters:
centos129
centos130
Add to hbase-env.sh so HBase can find the Hadoop configuration:
export HBASE_CLASSPATH=/usr/src/hadoop/etc/hadoop
[root@centos128 conf]# cat hbase-env.sh | grep -v ^# | sed '/^$/d'
export JAVA_HOME=/usr/src/jdk
export HBASE_CLASSPATH=/usr/src/hadoop/etc/hadoop
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
The corresponding site configuration lives in /usr/src/hbase/conf/hbase-site.xml (see step 5 above).