Installing HBase

Source: Internet · Editor: 程序博客网 · Date: 2024/05/19 17:56

This post walks through installing HBase in standalone mode, pseudo-distributed mode, and fully distributed mode.

1. Standalone installation

First, download the HBase build that matches your Hadoop version. Here I am using Hadoop 1.2.1, which corresponds to hbase-0.98.3-hadoop1-bin.tar.gz.

 

After downloading, extract the archive by running:

tar -xzvf hbase-0.98.3-hadoop1-bin.tar.gz

Then open hbase-env.sh in the conf directory and set:

export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45/
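Before moving on, it can help to confirm that the JAVA_HOME you set actually contains a usable JDK. A minimal sketch (the path is the one used in this post and will differ on your machine):

```shell
# Path from this walkthrough; adjust to your own JDK location.
JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45/

# A valid JAVA_HOME should contain an executable bin/java.
if [ -x "$JAVA_HOME/bin/java" ]; then
  result="JAVA_HOME ok"
else
  result="JAVA_HOME invalid: $JAVA_HOME"
fi
echo "$result"
```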

Next, edit hbase-site.xml:

<property>
  <name>hbase.rootdir</name>
  <value>file:///home/administrator/hbase-0.98.3-hadoop1/data</value>
</property>

That completes the standalone installation.

To verify, go into the bin directory and run ./start-hbase.sh, then run jps (from the JDK's bin directory) and check that an HMaster process is listed. If it is, the standalone installation is working.
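The HMaster check above can also be scripted. The sketch below greps the output of jps for the master process; here it runs against captured sample output so it can be demonstrated anywhere, but on a real installation you would assign the live output instead:

```shell
# Sample `jps` output captured from a working install; on a live system,
# replace this with: jps_output="$(jps)"
jps_output='4412 Jps
3802 HMaster'

# HMaster present in the process list means the master started.
if printf '%s\n' "$jps_output" | grep -q 'HMaster'; then
  echo "HMaster is running"
else
  echo "HMaster not found"
fi
```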

Also check that you can enter the HBase shell:

administrator@ubuntu:~/hbase-0.98.3-hadoop1/bin$ ./hbase shell
hbase(main):001:0> quit

2. Pseudo-distributed installation

Continuing from the standalone setup, edit hbase-env.sh again.

Uncomment and set the following two variables (the rest of the shipped file can remain at its defaults). HBASE_CLASSPATH points HBase at the Hadoop configuration directory so it can pick up the HDFS settings:

export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45/
export HBASE_CLASSPATH=/home/administrator/Hadoop/hadoop-1.2.1/conf

Then edit hbase-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.1.137:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
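Before restarting HBase, a quick sanity check that both pseudo-distributed settings actually made it into the file can save a debugging round-trip. A minimal sketch (for illustration it writes a throwaway copy to a temp file; on a real install, point conf at your actual conf/hbase-site.xml instead):

```shell
# Illustration only: write the two required properties to a temp file.
# On a real install, set conf to your actual conf/hbase-site.xml.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://192.168.1.137:9000/hbase</value></property>
  <property><name>hbase.cluster.distributed</name><value>true</value></property>
</configuration>
EOF

# Both settings must be present for pseudo-distributed mode.
if grep -q 'hbase.cluster.distributed' "$conf" && grep -q 'hdfs://' "$conf"; then
  check="ok"
  echo "config looks ok"
fi
rm -f "$conf"
```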

The hbase.cluster.distributed property enables distributed mode.

administrator@ubuntu:/usr/lib/jvm/jdk1.7.0_45/bin$ jps
3148 TaskTracker
2565 DataNode
2812 SecondaryNameNode
2901 JobTracker
4018 HRegionServer
2262 NameNode
3802 HMaster
4412 Jps
3738 HQuorumPeer

Since there is only one machine, both Hadoop and HBase are set up here in pseudo-distributed mode.

You can also open http://192.168.1.137:60010 in a browser to check whether HBase started successfully.
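The web UI check can be done from the command line as well. A sketch assuming curl is installed, with the host and port from this walkthrough (curl prints status 000 when the host is unreachable, so a 200 here indicates the master UI is up):

```shell
# Master info UI address from this walkthrough; 60010 is the default
# master info port in HBase 0.98. Adjust to your own host.
url="http://192.168.1.137:60010"

# -s silent, -o discard body, -w print only the HTTP status code.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" || true)
echo "HTTP status: $status"
```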


One error that sometimes occurs here is that the HMaster process dies on its own shortly after starting. I searched for a long time without finding a definitive fix; it appears to be related to leftover processes (I am not very familiar with Linux). In my case, simply rebooting and starting everything again resolved it.

