HBase Operations (Shell and the Java API)
http://blog.csdn.net/u013980127/article/details/52443155
The code below ran successfully on Hadoop 2.6.4 + HBase 1.2.2 + CentOS 6.5 + JDK 1.8.
HBase Operations
General Operations
status: show the cluster status.
hbase> status
hbase> status 'simple'
hbase> status 'summary'
hbase> status 'detailed'
version: show the HBase version.
hbase> version
whoami: show the current user and groups.
hbase> whoami
Table Management
1. alter
A table must be disabled before its schema can be altered.
Shell:
Syntax: alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}
A column family must be specified.
Example: for column family f1 of table t1, set (or add) VERSIONS of 5:
hbase> alter 't1', NAME => 'f1', VERSIONS => 5
Several column families can be altered at once:
hbase> alter 't1', 'f1', {NAME => 'f2', IN_MEMORY => true}, {NAME => 'f3', VERSIONS => 5}
Delete column family f1 from table t1:
hbase> alter 't1', NAME => 'f1', METHOD => 'delete'
or
hbase> alter 't1', 'delete' => 'f1'
Table-scope attributes such as MAX_FILESIZE, READONLY, MEMSTORE_FLUSHSIZE and DEFERRED_LOG_FLUSH can also be changed. For example, set the maximum region size to 128 MB:
hbase> alter 't1', MAX_FILESIZE => '134217728'
A coprocessor attribute can also be set on the table:
hbase> alter 't1', 'coprocessor' => 'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
Multiple coprocessors can be set; a sequence number is appended automatically to identify each coprocessor uniquely. The coprocessor attribute syntax is:
[coprocessor jar file location] | class name | [priority] | [arguments]
Configuration can also be set on a table or a column family:
hbase> alter 't1', CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}
hbase> alter 't1', {NAME => 'f2', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}
Table-scope attributes can be removed:
hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'MAX_FILESIZE'
hbase> alter 't1', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
Several changes can be combined in a single command:
hbase> alter 't1', { NAME => 'f1', VERSIONS => 3 }, { MAX_FILESIZE => '134217728' }, { METHOD => 'delete', NAME => 'f2' }, OWNER => 'johndoe', METADATA => { 'mykey' => 'myvalue' }
Java implementation:
/**
 * Alter a table: add a column family.
 *
 * @param tableName table name
 * @param family    column family to add
 *
 * @throws IOException
 */
public static void putFamily(String tableName, String family) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()) {
        TableName tblName = TableName.valueOf(tableName);
        if (admin.tableExists(tblName)) {
            admin.disableTable(tblName);
            HColumnDescriptor cf = new HColumnDescriptor(family);
            admin.addColumn(tblName, cf);
            admin.enableTable(tblName);
        } else {
            log.warn(tableName + " does not exist.");
        }
    }
}

// Example call
putFamily("blog", "note");
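The VERSIONS attribute above controls how many versions of each cell a column family keeps: every value is stored under a timestamp, reads return the newest version by default, and versions beyond the limit are pruned. A rough stdlib model of that retention rule (an illustration only, not HBase code):

```java
import java.util.Comparator;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

class VersionedCell {
    // timestamp -> value, newest first, mimicking how HBase orders cell versions;
    // maxVersions plays the role of the VERSIONS attribute.
    private final NavigableMap<Long, String> versions =
            new TreeMap<>(Comparator.reverseOrder());
    private final int maxVersions;

    VersionedCell(int maxVersions) {
        this.maxVersions = maxVersions;
    }

    public void put(long timestamp, String value) {
        versions.put(timestamp, value);
        // Drop the oldest versions beyond the limit. HBase does this lazily
        // at flush/compaction time; here it happens eagerly for clarity.
        while (versions.size() > maxVersions) {
            versions.pollLastEntry();
        }
    }

    // Latest version, as a plain get would return it.
    public String get() {
        Map.Entry<Long, String> e = versions.firstEntry();
        return e == null ? null : e.getValue();
    }

    // Newest version at or before the given timestamp.
    public String getAt(long timestamp) {
        Map.Entry<Long, String> e = versions.ceilingEntry(timestamp);
        return e == null ? null : e.getValue();
    }
}
```

With a limit of 2, writing at timestamps 10, 20 and 30 keeps only the versions at 20 and 30, so a read at timestamp 15 finds nothing.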
2. create
Create a table.
Shell:
Syntax: create 'table', {NAME => 'family', VERSIONS => versions} [, {NAME => 'family', VERSIONS => versions}]
Examples:
hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}
Java example:
/**
 * Create a table.
 *
 * @param tableName   table name
 * @param familyNames column families
 *
 * @throws IOException
 */
public static void createTable(String tableName, String[] familyNames) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()) {
        TableName table = TableName.valueOf(tableName);
        if (admin.tableExists(table)) {
            log.info(tableName + " already exists");
        } else {
            HTableDescriptor hTableDescriptor = new HTableDescriptor(table);
            for (String family : familyNames) {
                hTableDescriptor.addFamily(new HColumnDescriptor(family));
            }
            admin.createTable(hTableDescriptor);
        }
    }
}

// Example call
createTable("blog", new String[]{"author", "contents"});
3. describe
Show a table's schema.
hbase> describe 't1'
4. disable
Disable the specified table.
hbase> disable 't1'
5. disable_all
Disable all tables matching the given regex.
hbase> disable_all 't.*'
6. is_disabled
Check whether the specified table is disabled.
hbase> is_disabled 't1'
7. drop
Drop a table. The table must be disabled first.
Shell:
hbase> drop 't1'
Java implementation:
/**
 * Drop a table.
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void dropTable(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()) {
        TableName table = TableName.valueOf(tableName);
        if (admin.tableExists(table)) {
            admin.disableTable(table);
            admin.deleteTable(table);
        }
    }
}

// Example call
dropTable("blog");
8. drop_all
Drop all tables matching the given regex.
hbase> drop_all 't.*'
9. enable
Enable the specified table.
hbase> enable 't1'
10. enable_all
Enable all tables matching the given regex.
hbase> enable_all 't.*'
11. is_enabled
Check whether the specified table is enabled.
hbase> is_enabled 't1'
12. exists
Check whether the specified table exists.
hbase> exists 't1'
13. list
List all tables in HBase; an optional regex filters the result.
hbase> list
hbase> list 'abc.*'
14. show_filters
List all available filters.
hbase> show_filters
DependentColumnFilter
KeyOnlyFilter
ColumnCountGetFilter
SingleColumnValueFilter
PrefixFilter
SingleColumnValueExcludeFilter
FirstKeyOnlyFilter
ColumnRangeFilter
TimestampsFilter
FamilyFilter
QualifierFilter
ColumnPrefixFilter
RowFilter
MultipleColumnPrefixFilter
InclusiveStopFilter
PageFilter
ValueFilter
ColumnPaginationFilter
15. alter_status
Get the status of an alter command.
Syntax: alter_status 'tableName'
hbase> alter_status 't1'
16. alter_async
Run alter asynchronously; check the progress with alter_status.
Data Operations
1. count
Count the rows in a table.
Shell:
This operation may take a long time (it can also be run as a MapReduce job: '$HADOOP_HOME/bin/hadoop jar hbase.jar rowcount'). By default the current count is shown every 1000 rows; the interval can be changed. Scan caching is enabled by default with a cache size of 10, and can also be set explicitly:
hbase> count 't1'
hbase> count 't1', INTERVAL => 100000
hbase> count 't1', CACHE => 1000
hbase> count 't1', INTERVAL => 10, CACHE => 1000
The command can also be run on a table reference:
hbase> t.count
hbase> t.count INTERVAL => 100000
hbase> t.count CACHE => 1000
hbase> t.count INTERVAL => 10, CACHE => 1000
Java implementation:
/**
 * Count the rows of a table.
 *
 * @param tableName table name
 *
 * @return row count
 *
 * @throws IOException
 */
public static long count(String tableName) throws IOException {
    final long[] rowCount = {0};
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))) {
        Scan scan = new Scan();
        // Return only the first cell of each row; enough to count rows cheaply.
        scan.setFilter(new FirstKeyOnlyFilter());
        try (ResultScanner resultScanner = table.getScanner(scan)) {
            resultScanner.forEach(result -> rowCount[0]++);
        }
    }
    System.out.println("Row count: " + rowCount[0]);
    return rowCount[0];
}

// Example call
count("blog");
2. delete
Delete specified data.
Shell:
Syntax: delete 'table', 'rowkey', 'family:column' [, 'timestamp']
Delete the data in table t1, row r1, column c1, with timestamp ts1:
hbase> delete 't1', 'r1', 'c1', ts1
The command can also be run on a table reference:
hbase> t.delete 'r1', 'c1', ts1
Java implementation:
/**
 * Delete specified data.
 * <p>
 * If columns is empty, the whole column family is deleted;
 * if family is also empty, the whole row is deleted.
 * </p>
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param columns   column names
 *
 * @throws IOException
 */
public static void deleteData(String tableName, String rowKey, String family, String[] columns) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))) {
        Delete delete = new Delete(Bytes.toBytes(rowKey));
        if (null != family && !"".equals(family)) {
            if (null != columns && columns.length > 0) {
                // Delete the specified columns
                for (String column : columns) {
                    delete.addColumn(Bytes.toBytes(family), Bytes.toBytes(column));
                }
            } else {
                // Delete the specified column family
                delete.addFamily(Bytes.toBytes(family));
            }
        }
        // If neither family nor columns were given, the Delete built from
        // the row key alone removes the entire row.
        table.delete(delete);
    }
}

// Example calls
deleteData("blog", "rk12", "author", new String[]{"name", "school"});
deleteData("blog", "rk11", "author", new String[]{"name"});
deleteData("blog", "rk10", "author", null);
deleteData("blog", "rk9", null, null);
3. deleteall
Delete an entire row (or part of it).
Syntax: deleteall 'tableName', 'rowkey' [, 'column', 'timestamp']
hbase> deleteall 't1', 'r1'
hbase> deleteall 't1', 'r1', 'c1'
hbase> deleteall 't1', 'r1', 'c1', ts1
The command can also be run on a table reference:
hbase> t.deleteall 'r1'
hbase> t.deleteall 'r1', 'c1'
hbase> t.deleteall 'r1', 'c1', ts1
4. get
Get a row (or selected cells) of data.
Shell:
Syntax: get 'tableName', 'rowkey' [, ...]
Options include a set of columns, a timestamp, a time range, and the number of versions.
Examples:
hbase> get 't1', 'r1'
hbase> get 't1', 'r1', {TIMERANGE => [ts1, ts2]}
hbase> get 'blog', 'rk1', 'author:name'
hbase> get 'blog', 'rk1', { COLUMN => 'author:name' }
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> get 't1', 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> get 't1', 'r1', 'c1'
hbase> get 't1', 'r1', 'c1', 'c2'
hbase> get 't1', 'r1', ['c1', 'c2']
A FORMATTER can also be specified per column; the default is toStringBinary. You can use the predefined methods of org.apache.hadoop.hbase.util.Bytes (e.g. toInt, toString) or a custom method such as 'c(MyFormatterClass).format'. For example, for cf:qualifier1 and cf:qualifier2:
hbase> get 't1', 'r1' {COLUMN => ['cf:qualifier1:toInt', 'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }
Note: a FORMATTER can only be specified per column, not for all columns of a column family.
A table reference (obtained with get_table or create_table) can also run get. For example, if t is a reference to table t1 (t = get_table 't1'):
hbase> t.get 'r1'
hbase> t.get 'r1', {TIMERANGE => [ts1, ts2]}
hbase> t.get 'r1', {COLUMN => 'c1'}
hbase> t.get 'r1', {COLUMN => ['c1', 'c2', 'c3']}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
hbase> t.get 'r1', {COLUMN => 'c1', TIMERANGE => [ts1, ts2], VERSIONS => 4}
hbase> t.get 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
hbase> t.get 'r1', {FILTER => "ValueFilter(=, 'binary:abc')"}
hbase> t.get 'r1', 'c1'
hbase> t.get 'r1', 'c1', 'c2'
hbase> t.get 'r1', ['c1', 'c2']
Java implementation:
/**
 * Get specified data.
 * <p>
 * If columns is empty, all data of the column family is returned;
 * if family is also empty, the whole row is returned.
 * </p>
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param columns   column names
 *
 * @throws IOException
 */
public static void getData(String tableName, String rowKey, String family, String[] columns) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))) {
        Get get = new Get(Bytes.toBytes(rowKey));
        Result result = table.get(get);
        if (null != family && !"".equals(family)) {
            if (null != columns && columns.length > 0) {
                // Values of the given columns in the family
                for (String column : columns) {
                    byte[] rb = result.getValue(Bytes.toBytes(family), Bytes.toBytes(column));
                    System.out.println(Bytes.toString(rb));
                }
            } else {
                // All values of the given family
                Map<byte[], byte[]> columnMap = result.getFamilyMap(Bytes.toBytes(family));
                for (Map.Entry<byte[], byte[]> entry : columnMap.entrySet()) {
                    System.out.println(Bytes.toString(entry.getKey())
                            + " " + Bytes.toString(entry.getValue()));
                }
            }
        } else {
            // All values of the row
            Cell[] cells = result.rawCells();
            for (Cell cell : cells) {
                System.out.println("family => " + Bytes.toString(cell.getFamilyArray(),
                        cell.getFamilyOffset(), cell.getFamilyLength()) + "\n"
                        + "qualifier => " + Bytes.toString(cell.getQualifierArray(),
                        cell.getQualifierOffset(), cell.getQualifierLength()) + "\n"
                        + "value => " + Bytes.toString(cell.getValueArray(),
                        cell.getValueOffset(), cell.getValueLength()));
            }
        }
    }
}

// Example calls
getData("blog", "rk1", null, null);
getData("blog", "rk1", "author", null);
getData("blog", "rk1", "author", new String[]{"name", "school"});
5. get_counter
Get a counter's value.
Syntax: get_counter 'tableName', 'row', 'column'
Example:
hbase> get_counter 't1', 'r1', 'c1'
The command can also be run on a table reference:
hbase> t.get_counter 'r1', 'c1'
6. incr
Increment a counter.
Shell:
Syntax: incr 'tableName', 'row', 'column', value
For example, increment column c1 of row r1 in table t1 by 1 (the default, which may be omitted) or by 10:
hbase> incr 't1', 'r1', 'c1'
hbase> incr 't1', 'r1', 'c1', 1
hbase> incr 't1', 'r1', 'c1', 10
The command can also be run on a table reference:
hbase> t.incr 'r1', 'c1'
hbase> t.incr 'r1', 'c1', 1
hbase> t.incr 'r1', 'c1', 10
Java implementation:
/**
 * Increment a counter.
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param family    column family
 * @param column    column
 * @param value     increment amount
 *
 * @throws IOException
 */
public static void incr(String tableName, String rowKey, String family, String column, long value) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))) {
        long count = table.incrementColumnValue(Bytes.toBytes(rowKey),
                Bytes.toBytes(family), Bytes.toBytes(column), value);
        System.out.println("Value after increment: " + count);
    }
}

// Example call
incr("scores", "lisi", "courses", "eng", 2);
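A counter cell is stored as an 8-byte big-endian long, which is why a plain get on a counter shows binary bytes rather than a printable number, and why get_counter exists. A stdlib sketch of that encoding (mirroring what Bytes.toBytes(long) and Bytes.toLong do; the atomic server-side increment itself is only modeled here):

```java
import java.nio.ByteBuffer;

class CounterCodec {
    // Encode a counter value as HBase stores it: 8 bytes, big-endian.
    public static byte[] encode(long value) {
        return ByteBuffer.allocate(8).putLong(value).array();
    }

    // Decode the stored cell bytes back into the counter value,
    // as Bytes.toLong / get_counter would.
    public static long decode(byte[] bytes) {
        return ByteBuffer.wrap(bytes).getLong();
    }

    // An increment is read-add-write; HBase performs this atomically on
    // the region server, so clients never race on the stored bytes.
    public static byte[] increment(byte[] stored, long delta) {
        return encode(decode(stored) + delta);
    }
}
```

For example, a counter at 7 incremented by 2 decodes back to 9.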
7. put
Insert data.
Shell:
Syntax: put 'table', 'rowkey', 'family:column', 'value' [, 'timestamp']
For example, insert into table t1, row r1, column c1, with timestamp ts1:
hbase> put 't1', 'r1', 'c1', 'value', ts1
The command can also be run on a table reference:
hbase> t.put 'r1', 'c1', 'value', ts1
Java implementation:
/**
 * Insert data.
 *
 * @param tableName table name
 * @param rowKey    row key
 * @param familys   family data (key: family; value: (column, value) pairs)
 */
public static void putData(String tableName, String rowKey, Map<String, Map<String, String>> familys) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))) {
        Put put = new Put(Bytes.toBytes(rowKey));
        for (Map.Entry<String, Map<String, String>> family : familys.entrySet()) {
            for (Map.Entry<String, String> column : family.getValue().entrySet()) {
                put.addColumn(Bytes.toBytes(family.getKey()),
                        Bytes.toBytes(column.getKey()),
                        Bytes.toBytes(column.getValue()));
            }
        }
        table.put(put);
    }
}

// Example call
// Row key 1
Map<String, Map<String, String>> map1 = new HashMap<>();
// Values for column family "author"
Map<String, String> author1 = new HashMap<>();
author1.put("name", "张三");
author1.put("school", "MIT");
map1.put("author", author1);
// Values for column family "contents"
Map<String, String> contents1 = new HashMap<>();
contents1.put("content", "吃饭了吗?");
map1.put("contents", contents1);
putData("blog", "rk1", map1);
8. scan
Scan a table.
Syntax: scan 'table' [, {COLUMNS => ['family:column', ...], LIMIT => num}]
The following qualifiers can be used: TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH, COLUMNS, CACHE. Without qualifiers the whole table is scanned. To scan all members of a column family, leave the column part empty ('col_family:').
A filter condition can be specified in two ways:
1. Using a filter string, see the [HBASE-4176 JIRA](https://issues.apache.org/jira/browse/HBASE-4176)
2. Using the fully qualified package name of the filter.
Examples:
hbase> scan '.META.'
hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {FILTER => "(PrefixFilter ('row2') AND (QualifierFilter (>=, 'binary:xyz'))) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER => org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
CACHE_BLOCKS: toggles block caching, enabled by default. Example:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
RAW: return all cells, including delete markers and deleted cells not yet collected. This option cannot be combined with COLUMNS. Disabled by default. Example:
hbase> scan 't1', {RAW => true, VERSIONS => 10}
Formatting defaults to toStringBinary; scan supports a custom FORMATTER per column. FORMATTER conventions:
1. Use a method of org.apache.hadoop.hbase.util.Bytes (e.g. toInt, toString);
2. Use a method of a custom class, e.g. 'c(MyFormatterClass).format'.
For example, for cf:qualifier1 and cf:qualifier2:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt', 'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }
Note: a FORMATTER can only be specified per column, not for all columns of a column family.
The command can also be run on a table reference:
hbase> t = get_table 't'
hbase> t.scan
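STARTROW and STOPROW select a contiguous slice because HBase keeps rows sorted by the raw bytes of the row key, compared lexicographically and unsigned, byte by byte. A stdlib sketch of that comparison rule (the row keys here are made-up examples):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Comparator;
import java.util.TreeSet;

class RowOrder {
    // Unsigned, byte-by-byte lexicographic comparison: the order in which
    // HBase stores row keys, and the order a scan walks them in.
    public static int compareRows(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) {
                return d;
            }
        }
        return a.length - b.length;
    }

    // Sort string row keys by their UTF-8 bytes, as a scan would see them.
    public static TreeSet<String> sorted(String... rowKeys) {
        TreeSet<String> set = new TreeSet<>(Comparator.<String, byte[]>comparing(
                k -> k.getBytes(StandardCharsets.UTF_8), RowOrder::compareRows));
        set.addAll(Arrays.asList(rowKeys));
        return set;
    }
}
```

sorted("row2", "row10", "row1") iterates as row1, row10, row2: 'row10' comes before 'row2', so a scan with STARTROW => 'row1', STOPROW => 'row2' would include row10. Zero-padding the numeric part of the key (row01, row02, row10) restores the intended order.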
Java implementation:
/**
 * Full-table scan.
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void scan(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Table table = connection.getTable(TableName.valueOf(tableName))) {
        Scan scan = new Scan();
        try (ResultScanner resultScanner = table.getScanner(scan)) {
            for (Result result : resultScanner) {
                List<Cell> cells = result.listCells();
                for (Cell cell : cells) {
                    System.out.println("row => " + Bytes.toString(CellUtil.cloneRow(cell)) + "\n"
                            + "family => " + Bytes.toString(CellUtil.cloneFamily(cell)) + "\n"
                            + "qualifier => " + Bytes.toString(CellUtil.cloneQualifier(cell)) + "\n"
                            + "value => " + Bytes.toString(CellUtil.cloneValue(cell)));
                }
            }
        }
    }
}

// Example call
scan("blog");
9. truncate
Disable, drop, and recreate the specified table.
Shell:
hbase> truncate 't1'
Java example:
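The original post left this example out. Below is a minimal sketch in the same style as the other examples, assuming the same configuration field; Admin.truncateTable(table, preserveSplits) is available in the HBase 1.x client and requires the table to be disabled first:

```java
/**
 * Truncate a table: disable, drop, and recreate it.
 *
 * @param tableName table name
 *
 * @throws IOException
 */
public static void truncateTable(String tableName) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(configuration);
         Admin admin = connection.getAdmin()) {
        TableName table = TableName.valueOf(tableName);
        if (admin.tableExists(table)) {
            admin.disableTable(table);          // truncate requires a disabled table
            admin.truncateTable(table, false);  // false: do not preserve region split points
        }
    }
}

// Example call
truncateTable("blog");
```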
Tools
assign: assign a region.
hbase> assign 'REGION_NAME'
balancer: trigger the cluster balancer.
hbase> balancer
balance_switch: enable or disable the balancer.
hbase> balance_switch true
hbase> balance_switch false
close_region: close a region.
hbase> close_region 'REGIONNAME', 'SERVER_NAME'
compact: compact all regions in a table:
hbase> compact 't1'
Compact an entire region:
hbase> compact 'r1'
Compact only a column family within a region:
hbase> compact 'r1', 'c1'
Compact a column family within a table:
hbase> compact 't1', 'c1'
flush: flush all regions of the passed table, or pass a region name to flush an individual region. For example:
hbase> flush 'TABLENAME'
hbase> flush 'REGIONNAME'
major_compact: major-compact all regions in a table:
hbase> major_compact 't1'
Compact an entire region:
hbase> major_compact 'r1'
Compact a single column family within a region:
hbase> major_compact 'r1', 'c1'
Compact a single column family within a table:
hbase> major_compact 't1', 'c1'
move: move a region to a randomly chosen region server:
hbase> move 'ENCODED_REGIONNAME'
Move a region to a specific server:
hbase> move 'ENCODED_REGIONNAME', 'SERVER_NAME'
split: split an entire table, or pass a region name to split an individual region. With the second parameter, you can specify an explicit split key for the region. Examples:
hbase> split 'tableName'
hbase> split 'regionName' # format: 'tableName,startKey,id'
hbase> split 'tableName', 'splitKey'
hbase> split 'regionName', 'splitKey'
unassign: unassign a region. Unassign closes the region in its current location and then reopens it. Pass 'true' to force the unassignment ('force' clears all in-memory state in the master before the reassign; if that results in a double assignment, use hbck -fix to resolve it. To be used by experts). Use with caution.
hbase> unassign 'REGIONNAME'
hbase> unassign 'REGIONNAME', true
hlog_roll: roll the log writer, i.e. start writing log messages to a new file. Takes the region server name as its parameter; a 'server_name' is the host, port plus startcode of a region server, e.g. host187.example.com,60020,1289493121758 (find the server name in the master UI or in the output of a detailed status in the shell).
hbase> hlog_roll 'host187.example.com,60020,1289493121758'
zk_dump: dump the status of the HBase cluster as seen by ZooKeeper.
hbase> zk_dump
Cluster Replication
add_peer: add a replication peer, given as 'hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent'. This gives a full path for HBase to connect to the other cluster. Examples:
hbase> add_peer '1', "server1.cie.com:2181:/hbase"
hbase> add_peer '2', "zk1,zk2,zk3:2182:/hbase-prod"
remove_peer: stop the specified replication stream and delete all metadata kept about it. Example:
hbase> remove_peer '1'
list_peers: list all replication peer clusters.
hbase> list_peers
enable_peer: restart replication to the specified peer cluster, continuing from where it was disabled. Example:
hbase> enable_peer '1'
disable_peer: stop the replication stream to the specified cluster, but keep track of new edits to replicate. Example:
hbase> disable_peer '1'
start_replication: restart all replication features. The state in which each stream starts is undetermined. WARNING: start/stop replication is only meant to be used in critical load situations. Example:
hbase> start_replication
stop_replication: stop all replication features. The state in which each stream stops is undetermined. The same warning applies. Example:
hbase> stop_replication
Access Control
grant: grant user permissions. The permission string is any combination of characters from 'RWXCA':
READ ('R')
WRITE ('W')
EXEC ('X')
CREATE ('C')
ADMIN ('A')
Examples:
hbase> grant 'bobsmith', 'RWXCA'
hbase> grant 'bobsmith', 'RW', 't1', 'f1', 'col1'
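Since a grant's permission string is any subset of 'RWXCA', client code often needs to turn such a string into a set of actions. A small stdlib sketch of that parsing (the enum mirrors the five actions listed above; it is an illustration, not HBase's own Permission class):

```java
import java.util.EnumSet;

class PermissionString {
    enum Action { READ, WRITE, EXEC, CREATE, ADMIN }

    // Map each letter of an 'RWXCA' string to its action.
    public static EnumSet<Action> parse(String perms) {
        EnumSet<Action> set = EnumSet.noneOf(Action.class);
        for (char c : perms.toCharArray()) {
            switch (c) {
                case 'R': set.add(Action.READ); break;
                case 'W': set.add(Action.WRITE); break;
                case 'X': set.add(Action.EXEC); break;
                case 'C': set.add(Action.CREATE); break;
                case 'A': set.add(Action.ADMIN); break;
                default: throw new IllegalArgumentException("unknown permission: " + c);
            }
        }
        return set;
    }
}
```

For example, 'RW' grants READ and WRITE but not ADMIN, matching the second grant example above.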
revoke: remove a user's permissions. Example:
hbase> revoke 'bobsmith', 't1', 'f1', 'col1'
user_permission: show a user's permissions. Syntax: user_permission 'table'
hbase> user_permission 'table1'