HBase Source Code Analysis (9): The HBase Startup Process
Previous chapter: HBase Source Code Analysis (8): The Delete Process in Detail
http://blog.csdn.net/chenfenggang/article/details/75094362
This chapter analyzes the shell scripts behind HBase's startup process.
The process is as follows:
1) Run start-hbase.sh.
2) Load conf, plus the required lib and class files, both from the JDK and from HBase itself.
3) Determine the installation mode.
4) In distributed (cluster) mode, the following must be started:
a) ZooKeeper
b) Master
c) RegionServer
d) master-backup
In standalone (local) mode, only the master needs to be started.
In that case the master creates a ZooKeeper instance and starts a RegionServer internally, but the master and the RegionServer run in the same JVM.
Starting the master means running org.apache.hadoop.hbase.master.HMaster.
Starting a RegionServer means running the main function of org.apache.hadoop.hbase.regionserver.HRegionServer.
ZooKeeper corresponds to org.apache.hadoop.hbase.zookeeper.HQuorumPeer.
Main takeaways:
HLog (WAL) files can be inspected with WALPrettyPrinter.
HFiles can be inspected with HFilePrettyPrinter.
Now to the analysis, starting with the first script, start-hbase.sh:
usage="Usage: start-hbase.sh [--autostart-window-size <window size in hours>]\
[--autostart-window-retry-limit <retry count limit for autostart>]\
[autostart|start]"
# resolve the directory containing this script
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin">/dev/null; pwd`
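The two lines above are a common idiom for locating a script's own directory. A standalone sketch of the same idiom, with no HBase assumptions:

```shell
# Resolve the directory containing the running script: take dirname of
# $BASH_SOURCE (falling back to $0), then cd + pwd to normalize it into
# an absolute path, exactly as start-hbase.sh does.
script="${BASH_SOURCE-$0}"
bin=$(dirname "$script")
bin=$(cd "$bin" >/dev/null; pwd)
echo "$bin"
```

The cd + pwd step matters because a relative invocation like ./bin/start-hbase.sh would otherwise leave `bin` as a relative path.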
# default autostart args value indicating infinite window size and no retry limit
AUTOSTART_WINDOW_SIZE=0
AUTOSTART_WINDOW_RETRY_LIMIT=0
# load the configuration
. "$bin"/hbase-config.sh
# start hbase daemons
errCode=$?
if [ $errCode -ne 0 ]
then
exit $errCode
fi
if [ "$1" = "autostart" ]
then
commandToRun="--autostart-window-size ${AUTOSTART_WINDOW_SIZE} --autostart-window-retry-limit ${AUTOSTART_WINDOW_RETRY_LIMIT} autostart"
else
commandToRun="start"
fi
# determine the run mode
# HBASE-6504 - only take the first line of the output in case verbose gc is on
distMode=`$bin/hbase --config "$HBASE_CONF_DIR" org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed | head -n 1`
# start the daemons
if [ "$distMode" == 'false' ]
then
"$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master
else
"$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" $commandToRun zookeeper
"$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master
"$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
--hosts "${HBASE_REGIONSERVERS}" $commandToRun regionserver
"$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
--hosts "${HBASE_BACKUP_MASTERS}" $commandToRun master-backup
fi
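The branch at the end of start-hbase.sh reduces to the following sketch; here `distMode` is stubbed as a plain variable instead of the real HBaseConfTool lookup:

```shell
# Stub of hbase.cluster.distributed; the real script reads it via
# org.apache.hadoop.hbase.util.HBaseConfTool.
distMode=false

if [ "$distMode" = "false" ]; then
  # standalone: one JVM runs the master with embedded zk + regionserver
  daemons="master"
else
  # distributed: each role is launched as its own daemon
  daemons="zookeeper master regionserver master-backup"
fi
echo "$daemons"
```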
Next, the configuration script hbase-config.sh. It determines the bin, conf, and home directories, JAVA_HOME, the master and regionserver lists, and so on.
# ...
# Allow alternate hbase conf dir location.
HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
# List of hbase regions servers.
HBASE_REGIONSERVERS="${HBASE_REGIONSERVERS:-$HBASE_CONF_DIR/regionservers}"
# List of hbase secondary masters.
HBASE_BACKUP_MASTERS="${HBASE_BACKUP_MASTERS:-$HBASE_CONF_DIR/backup-masters}"
if [ -n "$HBASE_JMX_BASE" ] && [ -z "$HBASE_JMX_OPTS" ]; then
HBASE_JMX_OPTS="$HBASE_JMX_BASE"
fi
# Thrift JMX opts
if [ -n "$HBASE_JMX_OPTS" ] && [ -z "$HBASE_THRIFT_JMX_OPTS" ]; then
HBASE_THRIFT_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10103"
fi
# Thrift opts
if [ -z "$HBASE_THRIFT_OPTS" ]; then
export HBASE_THRIFT_OPTS="$HBASE_THRIFT_JMX_OPTS"
fi
# REST JMX opts
if [ -n "$HBASE_JMX_OPTS" ] && [ -z "$HBASE_REST_JMX_OPTS" ]; then
HBASE_REST_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10105"
fi
# REST opts
if [ -z "$HBASE_REST_OPTS" ]; then
export HBASE_REST_OPTS="$HBASE_REST_JMX_OPTS"
fi
# Source the hbase-env.sh. Will have JAVA_HOME defined.
# HBASE-7817 - Source the hbase-env.sh only if it has not already been done. HBASE_ENV_INIT keeps track of it.
if [ -z "$HBASE_ENV_INIT" ] && [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
. "${HBASE_CONF_DIR}/hbase-env.sh"
export HBASE_ENV_INIT="true"
fi
# Verify if hbase has the mlock agent
if [ "$HBASE_REGIONSERVER_MLOCK" = "true" ]; then
MLOCK_AGENT="$HBASE_HOME/lib/native/libmlockall_agent.so"
if [ ! -f "$MLOCK_AGENT" ]; then
cat 1>&2 <<EOF
Unable to find mlockall_agent, hbase must be compiled with -Pnative
EOF
exit 1
fi
if [ -z "$HBASE_REGIONSERVER_UID" ] || [ "$HBASE_REGIONSERVER_UID" == "$USER" ]; then
HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -agentpath:$MLOCK_AGENT"
else
HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -agentpath:$MLOCK_AGENT=user=$HBASE_REGIONSERVER_UID"
fi
fi
export MALLOC_ARENA_MAX=${MALLOC_ARENA_MAX:-4}
# Now having JAVA_HOME defined is required
if [ -z "$JAVA_HOME" ]; then
cat 1>&2 <<EOF
Error: JAVA_HOME is not set.
EOF
exit 1
fi
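hbase-config.sh leans heavily on the `${VAR:-default}` expansion seen above: a value the user exported is kept, and a default is used otherwise. A standalone sketch (the paths are illustrative, not real HBase defaults):

```shell
# ${VAR:-default}: use $VAR if it is set and non-empty, else the default.
unset HBASE_CONF_DIR
HBASE_HOME=/opt/hbase                     # illustrative path
HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
echo "$HBASE_CONF_DIR"                    # -> /opt/hbase/conf

# An exported value wins over the default:
HBASE_CONF_DIR=/etc/hbase
HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
echo "$HBASE_CONF_DIR"                    # -> /etc/hbase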
The last script is bin/hbase itself.
It launches a different Java class depending on the command argument.
bin=`dirname "$0"`
bin=`cd "$bin">/dev/null; pwd`
# This will set HBASE_HOME, etc.
. "$bin"/hbase-config.sh
# detect cygwin
cygwin=false
case "`uname`" in
CYGWIN*) cygwin=true;;
esac
# Detect if we are in the hbase sources dir (development environment)
in_dev_env=false
if [ -d "${HBASE_HOME}/target" ]; then
in_dev_env=true
fi
# (argument validation elided)
# get arguments
COMMAND=$1
shift
JAVA=$JAVA_HOME/bin/java
# (classpath construction elided)
# check if the command needs jruby
declare -a jruby_cmds=("shell" "org.jruby.Main")
for cmd in "${jruby_cmds[@]}"; do
if [[ $cmd == "$COMMAND" ]]; then
jruby_needed=true
break
fi
done
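The jruby check above is a plain array-membership scan. In isolation:

```shell
# Scan a fixed list of jruby-backed commands for the current COMMAND and
# set a flag, as bin/hbase does before building the jruby classpath.
declare -a jruby_cmds=("shell" "org.jruby.Main")
COMMAND="shell"
jruby_needed=false
for cmd in "${jruby_cmds[@]}"; do
  if [[ $cmd == "$COMMAND" ]]; then
    jruby_needed=true
    break
  fi
done
echo "jruby_needed=$jruby_needed"
```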
# (jruby classpath handling elided)
Next, the script works out which class each command runs:
# figure out which class to run
if [ "$COMMAND" = "shell" ] ; then
#find the hbase ruby sources
if [ -d "$HBASE_HOME/lib/ruby" ]; then
HBASE_OPTS="$HBASE_OPTS -Dhbase.ruby.sources=$HBASE_HOME/lib/ruby"
else
HBASE_OPTS="$HBASE_OPTS -Dhbase.ruby.sources=$HBASE_HOME/hbase-shell/src/main/ruby"
fi
HBASE_OPTS="$HBASE_OPTS $HBASE_SHELL_OPTS"
CLASS="org.jruby.Main -X+O ${JRUBY_OPTS} ${HBASE_HOME}/bin/hirb.rb"
elif [ "$COMMAND" = "hbck" ] ; then
CLASS='org.apache.hadoop.hbase.util.HBaseFsck'
# TODO remove old 'hlog' version
elif [ "$COMMAND" = "hlog" -o "$COMMAND" = "wal" ] ; then
CLASS='org.apache.hadoop.hbase.wal.WALPrettyPrinter'
elif [ "$COMMAND" = "hfile" ] ; then
CLASS='org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter'
elif [ "$COMMAND" = "zkcli" ] ; then
CLASS="org.apache.hadoop.hbase.zookeeper.ZooKeeperMainServer"
elif [ "$COMMAND" = "backup" ] ; then
CLASS='org.apache.hadoop.hbase.backup.BackupDriver'
elif [ "$COMMAND" = "restore" ] ; then
CLASS='org.apache.hadoop.hbase.backup.RestoreDriver'
elif [ "$COMMAND" = "upgrade" ] ; then
echo "This command was used to upgrade to HBase 0.96, it was removed in HBase 2.0.0."
echo "Please follow the documentation at http://hbase.apache.org/book.html#upgrading."
exit 1
elif [ "$COMMAND" = "snapshot" ] ; then
SUBCOMMAND=$1
shift
if [ "$SUBCOMMAND" = "create" ] ; then
CLASS="org.apache.hadoop.hbase.snapshot.CreateSnapshot"
elif [ "$SUBCOMMAND" = "info" ] ; then
CLASS="org.apache.hadoop.hbase.snapshot.SnapshotInfo"
elif [ "$SUBCOMMAND" = "export" ] ; then
CLASS="org.apache.hadoop.hbase.snapshot.ExportSnapshot"
else
echo "Usage: hbase [<options>] snapshot <subcommand> [<args>]"
echo "$options_string"
echo ""
echo "Subcommands:"
echo " create Create a new snapshot of a table"
echo " info Tool for dumping snapshot information"
echo " export Export an existing snapshot"
exit 1
fi
elif [ "$COMMAND" = "master" ] ; then
CLASS='org.apache.hadoop.hbase.master.HMaster'
if [ "$1" != "stop" ] && [ "$1" != "clear" ] ; then
HBASE_OPTS="$HBASE_OPTS $HBASE_MASTER_OPTS"
fi
elif [ "$COMMAND" = "regionserver" ] ; then
CLASS='org.apache.hadoop.hbase.regionserver.HRegionServer'
if [ "$1" != "stop" ] ; then
HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS"
fi
elif [ "$COMMAND" = "thrift" ] ; then
CLASS='org.apache.hadoop.hbase.thrift.ThriftServer'
if [ "$1" != "stop" ] ; then
HBASE_OPTS="$HBASE_OPTS $HBASE_THRIFT_OPTS"
fi
elif [ "$COMMAND" = "thrift2" ] ; then
CLASS='org.apache.hadoop.hbase.thrift2.ThriftServer'
if [ "$1" != "stop" ] ; then
HBASE_OPTS="$HBASE_OPTS $HBASE_THRIFT_OPTS"
fi
elif [ "$COMMAND" = "rest" ] ; then
CLASS='org.apache.hadoop.hbase.rest.RESTServer'
if [ "$1" != "stop" ] ; then
HBASE_OPTS="$HBASE_OPTS $HBASE_REST_OPTS"
fi
elif [ "$COMMAND" = "zookeeper" ] ; then
CLASS='org.apache.hadoop.hbase.zookeeper.HQuorumPeer'
if [ "$1" != "stop" ] ; then
HBASE_OPTS="$HBASE_OPTS $HBASE_ZOOKEEPER_OPTS"
fi
elif [ "$COMMAND" = "clean" ] ; then
case $1 in
--cleanZk|--cleanHdfs|--cleanAll)
matches="yes" ;;
*) ;;
esac
if [ $# -ne 1 -o "$matches" = "" ]; then
echo "Usage: hbase clean (--cleanZk|--cleanHdfs|--cleanAll)"
echo "Options: "
echo " --cleanZk cleans hbase related data from zookeeper."
echo " --cleanHdfs cleans hbase related data from hdfs."
echo " --cleanAll cleans hbase related data from both zookeeper and hdfs."
exit 1;
fi
"$bin"/hbase-cleanup.sh --config ${HBASE_CONF_DIR} $@
exit $?
elif [ "$COMMAND" = "mapredcp" ] ; then
CLASS='org.apache.hadoop.hbase.util.MapreduceDependencyClasspathTool'
elif [ "$COMMAND" = "classpath" ] ; then
echo $CLASSPATH
exit 0
elif [ "$COMMAND" = "pe" ] ; then
CLASS='org.apache.hadoop.hbase.PerformanceEvaluation'
HBASE_OPTS="$HBASE_OPTS $HBASE_PE_OPTS"
elif [ "$COMMAND" = "ltt" ] ; then
CLASS='org.apache.hadoop.hbase.util.LoadTestTool'
HBASE_OPTS="$HBASE_OPTS $HBASE_LTT_OPTS"
elif [ "$COMMAND" = "canary" ] ; then
CLASS='org.apache.hadoop.hbase.tool.Canary'
HBASE_OPTS="$HBASE_OPTS $HBASE_CANARY_OPTS"
elif [ "$COMMAND" = "version" ] ; then
CLASS='org.apache.hadoop.hbase.util.VersionInfo'
else
CLASS=$COMMAND
fi
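The long if/elif chain is essentially a command-to-class lookup table. A condensed sketch covering a few of its mappings (the fallthrough treats any unknown command as a literal class name, as the real script does):

```shell
# Map a command name to the Java class that bin/hbase will exec.
# Only a small subset of the real table is shown.
command_to_class() {
  case "$1" in
    master)       echo "org.apache.hadoop.hbase.master.HMaster" ;;
    regionserver) echo "org.apache.hadoop.hbase.regionserver.HRegionServer" ;;
    wal|hlog)     echo "org.apache.hadoop.hbase.wal.WALPrettyPrinter" ;;
    hfile)        echo "org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter" ;;
    zookeeper)    echo "org.apache.hadoop.hbase.zookeeper.HQuorumPeer" ;;
    *)            echo "$1" ;;   # unknown command: treated as a class name
  esac
}
command_to_class master    # org.apache.hadoop.hbase.master.HMaster
```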
# Have JVM dump heap if we run out of memory. Files will be 'launch directory'
# and are named like the following: java_pid21612.hprof. Apparently it doesn't
# 'cost' to have this flag enabled. Its a 1.6 flag only. See:
# http://blogs.sun.com/alanb/entry/outofmemoryerror_looks_a_bit_better
HBASE_OPTS="$HBASE_OPTS -Dhbase.log.dir=$HBASE_LOG_DIR"
HBASE_OPTS="$HBASE_OPTS -Dhbase.log.file=$HBASE_LOGFILE"
HBASE_OPTS="$HBASE_OPTS -Dhbase.home.dir=$HBASE_HOME"
HBASE_OPTS="$HBASE_OPTS -Dhbase.id.str=$HBASE_IDENT_STRING"
HBASE_OPTS="$HBASE_OPTS -Dhbase.root.logger=${HBASE_ROOT_LOGGER:-INFO,console}"
if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
HBASE_OPTS="$HBASE_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$JAVA_LIBRARY_PATH"
fi
# Enable security logging on the master and regionserver only
if [ "$COMMAND" = "master" ] || [ "$COMMAND" = "regionserver" ]; then
HBASE_OPTS="$HBASE_OPTS -Dhbase.security.logger=${HBASE_SECURITY_LOGGER:-INFO,RFAS}"
else
HBASE_OPTS="$HBASE_OPTS -Dhbase.security.logger=${HBASE_SECURITY_LOGGER:-INFO,NullAppender}"
fi
HEAP_SETTINGS="$JAVA_HEAP_MAX $JAVA_OFFHEAP_MAX"
# Exec unless HBASE_NOEXEC is set.
export CLASSPATH
if [ "${HBASE_NOEXEC}" != "" ]; then
"$JAVA" -Dproc_$COMMAND -XX:OnOutOfMemoryError="kill -9 %p" $HEAP_SETTINGS $HBASE_OPTS $CLASS "$@"
else
exec "$JAVA" -Dproc_$COMMAND -XX:OnOutOfMemoryError="kill -9 %p" $HEAP_SETTINGS $HBASE_OPTS $CLASS "$@"
fi
Finally, the Java class is executed. COMMAND is start/stop/restart; OnOutOfMemoryError="kill -9 %p" kills the process outright on an out-of-memory error; HEAP_SETTINGS sets the maximum heap (and off-heap) size; CLASS is the class to run, e.g. org.apache.hadoop.hbase.regionserver.HRegionServer.
From the above we can see that:
HFilePrettyPrinter can be used to inspect the contents of an HFile.
WALPrettyPrinter can be used to open and dump HLog (WAL) files.
Also, the hbase shell is launched as a JRuby script (hirb.rb).
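Hedged usage examples for the two inspection tools; the file paths are placeholders, and the commands are only attempted when an hbase binary is on PATH:

```shell
# Dump the entries of a WAL file, and the key/values plus metadata of an
# HFile. Both paths are hypothetical; guard so this is a no-op when no
# HBase installation is available.
if command -v hbase >/dev/null 2>&1; then
  hbase wal /hbase/WALs/host1,16020,1500000000000/host1.1500000000001 || true
  hbase hfile -p -m -f /hbase/data/default/t1/1588230740/cf/0a1b2c3d || true
  ran=yes
else
  ran=no
fi
echo "hbase tools invoked: $ran"
```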
The startup entry point of HMaster:
public static void main(String[] args) {
  VersionInfo.logVersion();
  new HMasterCommandLine(HMaster.class).doMain(args);
}
The startup entry point of HRegionServer:
/**
 * @see org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine
 */
public static void main(String[] args) throws Exception {
  VersionInfo.logVersion();
  Configuration conf = HBaseConfiguration.create();
  @SuppressWarnings("unchecked")
  Class<? extends HRegionServer> regionServerClass =
      (Class<? extends HRegionServer>) conf
          .getClass(HConstants.REGION_SERVER_IMPL, HRegionServer.class);
  new HRegionServerCommandLine(regionServerClass).doMain(args);
}
The following chapters will continue with the startup of HMaster and HRegionServer.
To be continued...