HBase Stress Testing


I. Testing HBase in standalone mode with a hand-written Java program

1. Test data: the TestTable table used for the inserts has one column family, info, with a single column, data. The value written for each rowkey is 900 English characters, for example:

value=XXXXXXXXXXXXXXXXJJJJJJJJSSSSSSSSRRRRRRRRFFFFFFFFQQQQQQQQKKKKKKKKQQQQQQQQBBBBBBBBJJJJJJJJDDDDDDDDXXXXXXXXWWWWWWWWJJJJJJJJZZZZZZZZUUUUUUUUBBBBBBBBEEEEEEEEBBBBBBBBLLLLLLLLFFFFFFFFHHHHHHHHXXXXXXXXCCCCCCCCFFFFFFFFPPPPPPPPGGGGGGGGTTTTTTTTKKKKKKKKPPPPPPPPIIIIIIIIXXXXXXXXUUUUUUUUPPPPPPPPDDDDDDDDEEEEEEEEIIIIIIIIJJJJJJJJOOOOOOOONNNNNNNNEEEEEEEEBBBBBBBBIIIIIIIIVVVVVVVVPPPPPPPPTTTTTTTTZZZZZZZZWWWWWWWWXXXXXXXXFFFFFFFFKKKKKKKKOOOOOOOONNNNNNNNNNNNNNNNYYYYYYYYEEEEEEEEUUUUUUUURRRRRRRRDDDDDDDDWWWWWWWWIIIIIIIIPPPPPPPPJJJJJJJJPPPPPPPPPPPPPPPPJJJJJJJJYYYYYYYYJJJJJJJJHHHHHHHHLLLLLLLLZZZZZZZZMMMMMMMMMMMMMMMMLLLLLLLLZZZZZZZZHHHHHHHHKKKKKKKKAAAAAAAAZZZZZZZZFFFFFFFFTTTTTTTTSSSSSSSSCCCCCCCCOOOOOOOOFFFFFFFFEEEEEEEEUUUUUUUUNNNNNNNNEEEEEEEENNNNNNNNSSSSSSSSWWWWWWWWTTTTTTTTPPPPPPPPZZZZZZZZAAAAAAAAZZZZZZZZKKKKKKKKHHHHHHHHDDDDDDDDWWWWWWWWOOOOOOOODDDDDDDDBBBBBBBBMMMMMMMMAAAAAAAADDDDDDDDVVVVVVVVUUUUUUUUYYYYYYYYZZZZZZZZPPPPPPPPJJJJJJJJPPPPPPPPXXXXXXXXFFFFFFFFHHHHHHHHGGGGGGGGHHHHHHHHMMMMMMMMCCCCCCCCEEEEEEEEBBBBBBBBXXXXXXXX
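The test code in section 7 hard-codes one such value. Purely as an illustration (this generator is my own and is not part of the original test), a value of the same shape, blocks of 8 repeated uppercase letters totalling roughly 900 characters, could be produced like this:

import java.util.Random;

public class ValueGen {
    // Build a value out of 8-character blocks, each block a single repeated random uppercase letter.
    static String randomValue(int blocks) {
        Random r = new Random();
        StringBuilder sb = new StringBuilder(blocks * 8);
        for (int i = 0; i < blocks; i++) {
            char c = (char) ('A' + r.nextInt(26));
            for (int j = 0; j < 8; j++) {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // 112 blocks * 8 characters = 896 characters, roughly the 900 used in the test.
        System.out.println(randomValue(112));
    }
}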


2. Test environment:

CentOS 7.2, 64-bit

jdk1.8.0_91

hbase-1.0.0-cdh5.5.2 (installed in standalone mode, with essentially all parameters left at their defaults)


3. Virtual machine configuration:

Single CPU, 1024 MB of memory


4. Test results:

Rows inserted    Avg. total time over 10 runs (s)    Write rate (rows/s)
1000             2.1                                 476.20
10000            3.1                                 3225.81
50000            6.76                                7396.45
80000            9.02                                8869.18
90000            9.05                                9944.75
95000            11.26                               8436.94
100000           13.38                               7473.84
150000           20.08                               7470.12
175000           23.91                               7160.91

Note: when fewer than 100 rows are inserted, the total time stays at roughly 1.8 s to 2 s; it is not the case that fewer rows always means less time. Inserting a single row takes about as long as inserting 100 rows, presumably because most of that time is fixed client startup and connection overhead rather than the writes themselves. (The write rate above is simply rows inserted divided by the average time, e.g. 10000 rows / 3.1 s ≈ 3225.81 rows/s.)


Caution: once more than 180,000 rows are inserted, HBase's HMaster process struggles and occasionally dies, and beyond a certain point the client fails outright with Exception in thread "main" java.lang.OutOfMemoryError: Java heap space.
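The OutOfMemoryError is consistent with the test program below buffering every Put in a single ArrayList before calling table.put(). A minimal sketch of a workaround, assuming the same HBase 1.0 client API as the test code (the 10,000-row chunk size and 300,000-row total are arbitrary choices of mine): flush the list every N rows so the client never holds all Puts in memory at once.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

public class InsertChunked {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "TestTable");
        String value = "XXXXXXXX";          // placeholder payload; the real test uses ~900 characters
        int total = 300000;                 // hypothetical row count
        int chunk = 10000;                  // send a batch to the server every 10,000 Puts
        List<Put> batch = new ArrayList<Put>(chunk);
        for (int i = 0; i < total; i++) {
            Put p = new Put(("row" + i).getBytes());
            p.add("info".getBytes(), "data".getBytes(), value.getBytes());
            batch.add(p);
            if (batch.size() == chunk) {
                table.put(batch);           // write this chunk and release it
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            table.put(batch);               // write the final partial chunk
        }
        table.flushCommits();
        table.close();
    }
}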


5. Processes on Linux while 150,000 rows were being inserted (checked with jps):


[hadoop@h153 hui]$ jps

21815 HMaster

27225 Jps

27144 InsertContactJava2


6. Command to create the corresponding table in HBase:

hbase(main):001:0> create 'TestTable','info'
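The PerformanceEvaluation option reference further down recommends a pre-split table for accurate performance analysis. As a sketch only (the split points here are hypothetical and should be chosen to match the actual rowkey distribution), the same table could instead be created pre-split through the Java admin API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePresplitTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("TestTable"));
        desc.addFamily(new HColumnDescriptor("info"));
        // Hypothetical split points; pick boundaries that actually spread your rowkeys evenly.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("row2"), Bytes.toBytes("row4"),
            Bytes.toBytes("row6"), Bytes.toBytes("row8")
        };
        admin.createTable(desc, splits);   // the table starts with 5 regions instead of 1
        admin.close();
    }
}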


7. Test code:

import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

public class InsertContactJava2 {

    public static long startTime;
    public static long endTime;

    public static void main(String[] args) throws IOException {
        startTime = System.currentTimeMillis();
        System.out.println("start time = " + startTime);
        insert_one();
        endTime = System.currentTimeMillis();
        long costTime = endTime - startTime;
        System.out.println("end time = " + endTime);
        System.out.println("cost time = " + costTime * 1.0 / 1000 + "s");
    }

    // Build all Puts in memory, then write them to TestTable in one batch.
    public static void insert_one() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "TestTable");
        ArrayList<Put> list = new ArrayList<Put>();
        int count1 = 50000;   // number of rows to insert; adjusted for each test run
        for (int i = 0; i < count1; i++) {
            String rowname = "row" + i;
            Put p = new Put(rowname.getBytes());
            // One column family "info", one column "data", ~900 characters of payload per row.
            p.add("info".getBytes(), "data".getBytes(),
                ("XXXXXXXXXXXXXXXXJJJJJJJJSSSSSSSSRRRRRRRRFFFFFFFFQQQQQQQQKKKKKKKKQQQQQQQQBBBBBBBBJJJJJJJJDDDDDDDDXXXXXXXXWWWWWWWWJJJJJJJJZZZZZZZ"
                + "ZUUUUUUUUBBBBBBBBEEEEEEEEBBBBBBBBLLLLLLLLFFFFFFFFHHHHHHHHXXXXXXXXCCCCCCCCFFFFFFFFPPPPPPPPGGGGGGGGTTTTTTTTKKKKKKKKPPPPPPPPIIIIIIIIXXXXXXXXUUUUUUUUPPPPPPPPDDDDDDDDEEEEEEEEIIIIIII"
                + "IJJJJJJJJOOOOOOOONNNNNNNNEEEEEEEEBBBBBBBBIIIIIIIIVVVVVVVVPPPPPPPPTTTTTTTTZZZZZZZZWWWWWWWWXXXXXXXXFFFFFFFFKKKKKKKKOOOOOOOONNNNNNNNNNNNNNNNYYYYYYYYEEEEEEEEUUUUUUUURRRRRRRRDDDDDDD"
                + "DWWWWWWWWIIIIIIIIPPPPPPPPJJJJJJJJPPPPPPPPPPPPPPPPJJJJJJJJYYYYYYYYJJJJJJJJHHHHHHHHLLLLLLLLZZZZZZZZMMMMMMMMMMMMMMMMLLLLLLLLZZZZZZZZHHHHHHHHKKKKKKKKAAAAAAAAZZZZZZZZFFFFFFFFTTTTTTT"
                + "TSSSSSSSSCCCCCCCCOOOOOOOOFFFFFFFFEEEEEEEEUUUUUUUUNNNNNNNNEEEEEEEENNNNNNNNSSSSSSSSWWWWWWWWTTTTTTTTPPPPPPPPZZZZZZZZAAAAAAAAZZZZZZZZKKKKKKKKHHHHHHHHDDDDDDDDWWWWWWWWOOOOOOOODDDDDDD"
                + "DBBBBBBBBMMMMMMMMAAAAAAAADDDDDDDDVVVVVVVVUUUUUUUUYYYYYYYYZZZZZZZZPPPPPPPPJJJJJJJJPPPPPPPPXXXXXXXXFFFFFFFFHHHHHHHHGGGGGGGGHHHHHHHHMMMMMMMMCCCCCCCCEEEEEEEEBBBBBBBBXXXXXXXX").getBytes());
            list.add(p);
        }
        table.put(list);        // send the whole batch to the server
        table.flushCommits();   // make sure nothing is left in the client-side buffer
        table.close();
        System.out.println("Rows inserted: " + count1);
    }
}
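One design note on the code above: it collects all Puts in a list and writes them with a single table.put(list) followed by flushCommits(), which is already a batched write. If Puts were instead issued one at a time, client-side buffering on the HTable would serve a similar purpose. A sketch under the same HBase 1.0 client API and imports as above (the 8 MB buffer size is an arbitrary illustration, not a recommendation):

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "TestTable");
table.setAutoFlush(false);                    // buffer Puts on the client instead of sending each one immediately
table.setWriteBufferSize(8 * 1024 * 1024);    // send a batch once roughly 8 MB of Puts have accumulated
Put p = new Put("row0".getBytes());
p.add("info".getBytes(), "data".getBytes(), "XXXXXXXX".getBytes());
table.put(p);                                 // buffered locally, not yet sent
table.flushCommits();                         // push the buffered Puts to the RegionServer
table.close();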

II. HBase ships with its own benchmarking tool; it requires a Hadoop cluster and tests the fully distributed mode

Sequential write command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1
Explanation: this command starts a single client and runs a sustained write test. It keeps printing progress until the final result is reported. If you determine that the load on the client machine is low, you can increase the number of clients (that is, threads or MapReduce tasks):
hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 4


Sequential read:
hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialRead 1
Random write:
hbase org.apache.hadoop.hbase.PerformanceEvaluation randomWrite 1
Random read:
hbase org.apache.hadoop.hbase.PerformanceEvaluation randomRead 1


Option reference:

Usage: java org.apache.hadoop.hbase.PerformanceEvaluation \
  <OPTIONS> [-D<property=value>]* <command> <nclients>

Options:
 nomapred        Run multiple clients using threads (rather than use mapreduce)
 rows            Rows each client runs. Default: One million
 size            Total size in GiB. Mutually exclusive with --rows. Default: 1.0.
 sampleRate      Execute test on a sample of total rows. Only supported by randomRead. Default: 1.0
 traceRate       Enable HTrace spans. Initiate tracing every N rows. Default: 0
 table           Alternate table name. Default: 'TestTable'
 multiGet        If >0, when doing RandomRead, perform multiple gets instead of single gets. Default: 0
 compress        Compression type to use (GZ, LZO, ...). Default: 'NONE'
 flushCommits    Used to determine if the test should flush the table. Default: false
 writeToWAL      Set writeToWAL on puts. Default: True
 autoFlush       Set autoFlush on htable. Default: False
 oneCon          all the threads share the same connection. Default: False
 presplit        Create presplit table. Recommended for accurate perf analysis (see guide). Default: disabled
 inmemory        Tries to keep the HFiles of the CF inmemory as far as possible. Not guaranteed that reads are always served from memory. Default: false
 usetags         Writes tags along with KVs. Use with HFile V3. Default: false
 numoftags       Specify the no of tags that would be needed. This works only if usetags is true.
 filterAll       Helps to filter out all the rows on the server side there by not returning any thing back to the client. Helps to check the server side performance. Uses FilterAllFilter internally.
 latency         Set to report operation latencies. Default: False
 bloomFilter     Bloom filter type, one of [NONE, ROW, ROWCOL]
 valueSize       Pass value size to use: Default: 1024
 valueRandom     Set if we should vary value size between 0 and 'valueSize'; set on read for stats on size: Default: Not set.
 valueZipf       Set if we should vary value size between 0 and 'valueSize' in zipf form: Default: Not set.
 period          Report every 'period' rows: Default: opts.perClientRunRows / 10
 multiGet        Batch gets together into groups of N. Only supported by randomRead. Default: disabled
 replicas        Enable region replica testing. Defaults: 1.
 splitPolicy     Specify a custom RegionSplitPolicy for the table.
 randomSleep     Do a random sleep before each get between 0 and entered value. Defaults: 0

 Note: -D properties will be applied to the conf used.
  For example:
   -Dmapreduce.output.fileoutputformat.compress=true
   -Dmapreduce.task.timeout=60000

Command:
 filterScan      Run scan test using a filter to find a specific row based on it's value (make sure to use --rows=20)
 randomRead      Run random read test
 randomSeekScan  Run random seek and scan 100 test
 randomWrite     Run random write test
 scan            Run scan test (read every row)
 scanRange10     Run random seek scan with both start and stop row (max 10 rows)
 scanRange100    Run random seek scan with both start and stop row (max 100 rows)
 scanRange1000   Run random seek scan with both start and stop row (max 1000 rows)
 scanRange10000  Run random seek scan with both start and stop row (max 10000 rows)
 sequentialRead  Run sequential read test
 sequentialWrite Run sequential write test

Args:
 nclients        Integer. Required. Total number of clients (and HRegionServers) running: 1 <= value <= 500

Examples:
 To run a single evaluation client:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1

Hadoop 2 version:

[hadoop@h149 hbase-1.0.0-cdh5.5.2]$ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10000 sequentialWrite 12017-09-29 16:27:05,553 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available2017-09-29 16:27:11,287 INFO  [main] hbase.PerformanceEvaluation: SequentialWriteTest test run options={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":10000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":0,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}SLF4J: Class path contains multiple SLF4J bindings.SLF4J: Found binding in [jar:file:/home/hadoop/hbase-1.0.0-cdh5.5.2/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]2017-09-29 16:27:11,852 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable2017-09-29 16:27:13,479 INFO  [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x320ca662 connecting to ZooKeeper ensemble=h150:2181,h149:2181,h151:21812017-09-29 16:27:13,537 INFO  [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.5.2--1, built on 01/25/2016 17:46 GMT2017-09-29 16:27:13,537 INFO  [main] zookeeper.ZooKeeper: Client environment:host.name=h1492017-09-29 16:27:13,537 INFO  [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_652017-09-29 16:27:13,537 INFO  [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation2017-09-29 16:27:13,537 INFO  [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.65.x86_64/jre2017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop-2.6.0-cdh5.5.2/lib/native:/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/../lib/native/Linux-amd64-642017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp2017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>2017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:os.name=Linux2017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:os.arch=amd642017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.el6.x86_642017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:user.name=hadoop2017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop2017-09-29 16:27:13,551 INFO  [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/hbase-1.0.0-cdh5.5.22017-09-29 16:27:13,553 INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=h150:2181,h149:2181,h151:2181 sessionTimeout=90000 watcher=hconnection-0x320ca6620x0, quorum=h150:2181,h149:2181,h151:2181, 
baseZNode=/hbase2017-09-29 16:27:14,110 INFO  [main-SendThread(h149:2181)] zookeeper.ClientCnxn: Opening socket connection to server h149/192.168.205.149:2181. Will not attempt to authenticate using SASL (unknown error)2017-09-29 16:27:14,138 INFO  [main-SendThread(h149:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.205.149:33937, server: h149/192.168.205.149:21812017-09-29 16:27:14,327 INFO  [main-SendThread(h149:2181)] zookeeper.ClientCnxn: Session establishment complete on server h149/192.168.205.149:2181, sessionid = 0x15ecca7da5d0005, negotiated timeout = 400002017-09-29 16:27:21,271 INFO  [main] hbase.PerformanceEvaluation: Table 'TestTable', {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} created2017-09-29 16:27:21,336 INFO  [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService2017-09-29 16:27:21,336 INFO  [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15ecca7da5d00052017-09-29 16:27:21,341 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down2017-09-29 16:27:21,341 INFO  [main] zookeeper.ZooKeeper: Session: 0x15ecca7da5d0005 closed2017-09-29 16:27:21,716 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":0,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,716 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":1000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,716 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":2000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,717 INFO  [main] hbase.PerformanceEvaluation: maptask 
input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":3000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,717 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":4000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,717 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":5000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,717 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":6000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,717 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":7000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,718 INFO  [main] hbase.PerformanceEvaluation: maptask 
input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":8000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:21,718 INFO  [main] hbase.PerformanceEvaluation: maptask input={"autoFlush":false,"blockEncoding":"NONE","bloomType":"ROW","cmdName":"sequentialWrite","compression":"NONE","filterAll":false,"flushCommits":true,"inMemoryCF":false,"multiGet":0,"noOfTags":1,"nomapred":false,"numClientThreads":1,"oneCon":false,"perClientRunRows":1000,"period":104857,"presplitRegions":0,"randomSleep":0,"replicas":1,"reportLatency":false,"sampleRate":1.0,"size":0.0,"splitPolicy":null,"startRow":9000,"tableName":"TestTable","totalRows":10000,"traceRate":0.0,"useTags":false,"valueRandom":false,"valueSize":1000,"valueZipf":false,"writeToWAL":true}2017-09-29 16:27:22,573 INFO  [main] client.RMProxy: Connecting to ResourceManager at h149/192.168.205.149:80322017-09-29 16:27:30,334 INFO  [main] input.FileInputFormat: Total input paths to process : 12017-09-29 16:27:30,833 INFO  [main] mapreduce.JobSubmitter: number of splits:102017-09-29 16:27:32,526 INFO  [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1501877781016_00012017-09-29 16:27:34,622 INFO  [main] impl.YarnClientImpl: Submitted application application_1501877781016_00012017-09-29 16:27:34,900 INFO  [main] mapreduce.Job: The url to track the job: http://h149:8088/proxy/application_1501877781016_0001/2017-09-29 16:27:34,900 INFO  [main] mapreduce.Job: Running job: job_1501877781016_00012017-09-29 16:28:09,630 INFO  [main] mapreduce.Job: Job job_1501877781016_0001 running in uber mode : false2017-09-29 16:28:09,632 INFO  [main] mapreduce.Job:  map 0% reduce 0%2017-09-29 16:32:10,412 INFO  [main] mapreduce.Job:  map 7% reduce 0%2017-09-29 16:33:05,358 INFO  [main] mapreduce.Job:  map 13% reduce 0%2017-09-29 16:33:21,061 INFO  [main] mapreduce.Job:  map 20% reduce 0%2017-09-29 16:33:29,363 INFO  [main] mapreduce.Job:  map 27% reduce 0%2017-09-29 16:33:51,427 INFO  [main] mapreduce.Job:  map 33% reduce 0%2017-09-29 16:33:52,524 INFO  [main] mapreduce.Job:  map 40% reduce 0%2017-09-29 16:34:03,102 INFO  [main] mapreduce.Job:  map 47% reduce 0%2017-09-29 16:34:12,750 INFO  [main] mapreduce.Job:  map 53% reduce 0%2017-09-29 16:34:23,640 INFO  [main] mapreduce.Job:  map 67% reduce 0%2017-09-29 16:35:12,341 INFO  [main] mapreduce.Job:  map 73% reduce 0%2017-09-29 16:35:18,062 INFO  [main] mapreduce.Job:  map 80% reduce 0%2017-09-29 16:35:24,950 INFO  [main] mapreduce.Job:  map 83% reduce 0%2017-09-29 16:35:26,346 INFO  [main] mapreduce.Job:  map 87% reduce 0%2017-09-29 16:35:30,343 INFO  [main] mapreduce.Job:  map 90% reduce 0%2017-09-29 16:35:40,044 INFO  [main] mapreduce.Job:  map 93% reduce 0%2017-09-29 16:35:43,497 INFO  [main] mapreduce.Job:  map 97% reduce 0%2017-09-29 16:35:44,501 INFO  [main] mapreduce.Job:  map 100% reduce 0%2017-09-29 16:35:57,725 INFO  [main] mapreduce.Job:  map 100% reduce 100%2017-09-29 16:35:58,736 INFO  [main] mapreduce.Job: Job job_1501877781016_0001 completed successfully2017-09-29 16:35:59,102 INFO  [main] mapreduce.Job: Counters: 51        
File System Counters
        FILE: Number of bytes read=186
        FILE: Number of bytes written=1565088
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=32653
        HDFS: Number of bytes written=109
        HDFS: Number of read operations=33
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
Job Counters
        Launched map tasks=10
        Launched reduce tasks=1
        Other local map tasks=10
        Total time spent by all maps in occupied slots (ms)=4338707
        Total time spent by all reduces in occupied slots (ms)=24958
        Total time spent by all map tasks (ms)=4338707
        Total time spent by all reduce tasks (ms)=24958
        Total vcore-seconds taken by all map tasks=4338707
        Total vcore-seconds taken by all reduce tasks=24958
        Total megabyte-seconds taken by all map tasks=4442835968
        Total megabyte-seconds taken by all reduce tasks=25556992
Map-Reduce Framework
        Map input records=10
        Map output records=10
        Map output bytes=160
        Map output materialized bytes=240
        Input split bytes=1480
        Combine input records=0
        Combine output records=0
        Reduce input groups=10
        Reduce shuffle bytes=240
        Reduce input records=10
        Reduce output records=10
        Spilled Records=20
        Shuffled Maps =10
        Failed Shuffles=0
        Merged Map outputs=10
        GC time elapsed (ms)=217689
        CPU time spent (ms)=481450
        Physical memory (bytes) snapshot=2756542464
        Virtual memory (bytes) snapshot=17472843776
        Total committed heap usage (bytes)=3008745472
Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
HBase Performance Evaluation
        Elapsed time in milliseconds=773757
        Row count=10000
File Input Format Counters
        Bytes Read=31173
File Output Format Counters
        Bytes Written=109

Note: I only got the Hadoop 2 version to run properly in one environment: RedHat 6.6 64-bit with hadoop-2.6.0-cdh5.5.2, hbase-1.0.0-cdh5.5.2, zookeeper-3.4.5-cdh5.5.2 and JDK 1.7.0_65 (the bundled JDK), and even there it could no longer cope once the row count reached 80,000. In the other environments I tried (CentOS 7.2, CentOS 6.6, Debian 8.2; RedHat 5.5 did a little better and could just about finish 5,000 rows) it was even worse: even 10,000 rows barely ran to completion, and I don't know why. The base configuration was supposedly identical everywhere (single CPU, 1024 MB of memory), so why such a big difference?!


Thoughts: on Hadoop 2 this built-in HBase benchmark runs as a MapReduce job, and overall the results felt poor. I later found that on Hadoop 1 it does not run MapReduce at all, and the results are actually better, which puzzled me. Has Hadoop gone backwards?
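Judging from the option reference above, the MapReduce job is not mandatory even on Hadoop 2: the nomapred option is documented as running the clients as threads rather than MapReduce tasks. Something along these lines (which I have not verified on the cluster above) should therefore drive the same write test from a single local JVM:

hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=10000 sequentialWrite 1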


Hadoop 1 version:

[hadoop@h149 hbase-0.90.6-cdh3u5]$ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10000 sequentialWrite 1Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.5-cdh3u5--1, built on 10/06/2012 00:31 GMT17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:host.name=h14917/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_9117/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk1.8.0_91/jre17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/hbase-0.90.6-cdh3u5/bin/../conf:/usr/jdk1.8.0_91/lib/tools.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/..:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../hbase-0.90.6-cdh3u5.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../hbase-0.90.6-cdh3u5-tests.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/activation-1.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/asm-3.2.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/avro-1.5.4.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/avro-ipc-1.5.4.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-cli-1.2.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-codec-1.4.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-el-1.0.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-httpclient-3.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-io-2.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-lang-2.5.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-logging-1.1.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/commons-net-1.4.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/core-3.1.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/guava-r06.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/guava-r09-jarjar.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/hadoop-core-0.20.2-cdh3u5.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jackson-jaxrs-1.8.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jackson-xc-1.8.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jaxb-api-2.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jersey-core-1.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jersey-json-1.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jersey-server-1.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jettison-1.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jetty-6.1.26.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jetty-util-6.1.26.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jruby-complete-1.6.0.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jsp-api-2.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/jsr311-api-1.1.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/log4j-1.2.16.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/netty-3.2.4.Final.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bi
n/../lib/protobuf-java-2.3.0.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/servlet-api-2.5.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/stax-api-1.0.1.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/thrift-0.2.0.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/velocity-1.5.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/xmlenc-0.52.jar:/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/zookeeper-3.3.5-cdh3u5.jar17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hbase-0.90.6-cdh3u5/bin/../lib/native/Linux-amd64-6417/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd6417/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.el6.x86_6417/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop17/09/21 16:04:03 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/hbase-0.90.6-cdh3u517/09/21 16:04:03 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=h150:2181,h149:2181,h151:2181 sessionTimeout=180000 watcher=hconnection17/09/21 16:04:03 INFO zookeeper.ClientCnxn: Opening socket connection to server h149/192.168.205.149:218117/09/21 16:04:03 INFO zookeeper.ClientCnxn: Socket connection established to h149/192.168.205.149:2181, initiating session17/09/21 16:04:03 INFO zookeeper.ClientCnxn: Session establishment complete on server h149/192.168.205.149:2181, sessionid = 0x15ea363bcf80002, negotiated timeout = 4000017/09/21 16:04:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=h150:2181,h149:2181,h151:2181 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1efbd81617/09/21 16:04:04 INFO zookeeper.ClientCnxn: Opening socket connection to server h150/192.168.205.150:218117/09/21 16:04:04 INFO zookeeper.ClientCnxn: Socket connection established to h150/192.168.205.150:2181, initiating session17/09/21 16:04:04 INFO zookeeper.ClientCnxn: Session establishment complete on server h150/192.168.205.150:2181, sessionid = 0x25ea36a70d20002, negotiated timeout = 4000017/09/21 16:04:04 DEBUG catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@6d8a00e317/09/21 16:04:04 DEBUG catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@6d8a00e317/09/21 16:04:04 DEBUG client.HConnectionManager$HConnectionImplementation: The connection to hconnection-0x15ea363bcf80002 has been closed.17/09/21 16:04:04 INFO zookeeper.ClientCnxn: EventThread shut down17/09/21 16:04:04 INFO zookeeper.ZooKeeper: Session: 0x25ea36a70d20002 closed17/09/21 16:04:04 INFO hbase.PerformanceEvaluation: Start class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest at offset 0 for 10000 rows17/09/21 16:04:04 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, 
connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1efbd816; hsa=h150:6002017/09/21 16:04:04 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is h150:6002017/09/21 16:04:04 DEBUG client.HConnectionManager$HConnectionImplementation: The connection to hconnection-0x15ea363bcf80002 has been closed.17/09/21 16:04:04 DEBUG client.MetaScanner: Scanning .META. starting at row=TestTable,,00000000000000 for max=10 rows17/09/21 16:04:04 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for TestTable,,1505979942279.7438a52ed003f181e317ba243e2e7e62. is h151:6002017/09/21 16:04:04 DEBUG client.HConnectionManager$HConnectionImplementation: The connection to hconnection-0x15ea363bcf80002 has been closed.17/09/21 16:04:04 INFO hbase.PerformanceEvaluation: 0/1000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/2000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/3000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/4000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/5000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/6000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/7000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/8000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: 0/9000/1000017/09/21 16:04:05 INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 1219ms at offset 0 for 10000 rows

Below are the results for inserting different numbers of rows (the --rows value was varied per run; repeated lines are repeated runs of the same size):

RedHat 6.6 64-bit, hadoop-0.20.2-cdh3u5, hbase-0.90.6-cdh3u5, zookeeper-3.3.5-cdh3u5, jdk1.8.0_91
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10000 sequentialWrite 1
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 0ms at offset 0 for 1 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 20ms at offset 0 for 100 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 4251ms at offset 0 for 5000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 4380ms at offset 0 for 8000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 5853ms at offset 0 for 10000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 886ms at offset 0 for 10000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 2144ms at offset 0 for 30000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 1314ms at offset 0 for 30000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 3240ms at offset 0 for 40000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 5919ms at offset 0 for 50000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 2438ms at offset 0 for 50000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 2195ms at offset 0 for 50000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 5087ms at offset 0 for 50000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 2989ms at offset 0 for 50000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 3469ms at offset 0 for 80000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 21597ms at offset 0 for 100000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 5334ms at offset 0 for 100000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 10186ms at offset 0 for 100000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 105883ms at offset 0 for 200000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 25812ms at offset 0 for 200000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 24607ms at offset 0 for 200000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 12438ms at offset 0 for 200000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 29798ms at offset 0 for 300000 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 30498ms at offset 0 for 300000 rows

Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 71312ms at offset 0 for 500000 rows

bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 182983ms at offset 0 for 1048576 rows
Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 216587ms at offset 0 for 1048576 rows
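For a rough comparison with the single-node Java test in part I (my own arithmetic from the numbers above): the best 10,000-row run, 886 ms, works out to about 11,300 rows/s, and the 1,048,576-row run in 182,983 ms to about 5,700 rows/s.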


Note: the data inserted by the built-in test differs between the Hadoop 1 and Hadoop 2 versions.

hadoop1版本:ROW                                                          COLUMN+CELL                                                                                                                                                                      0000000000                                                  column=info:data, timestamp=1505979502795, value=\x0E4\xFD\xE59Ngu\xA4\xACz\xCBY\x8F\xEE\x91\x94Ue\xFA\xF9\xED;o\xA9\xF7\xFBG(I\x89|\xE4\xBF\x87\xA2\xC0\xD5?e\xC8\x96\xC0\x0E:\                                                             x88p\x07t\xF7\xF9\xD2\x09}\xC5&,\x0A\xEA}\x1F\x9E\xE5b2\xD2\xB8\xC4N{\xF6\x9C1\xC5\xC0S\x99\xF4]8\x81\xB2\x92\\xBD@\xC4#\x05\x8B\x141o\xD7\xF5_\x81\xEE\xCBu*&_1\x8Cq3,\x81\x87e                                                             \xC2\x93\xE7\x0B5G\x08\xFA\xCD\x84\xAE\x93\xB8\x1C\xEB\xCB\xDF\x1B\x96\x85\xEC\x13\xE2O\xF7\xE9\x92j\x0A\x8Fo\x03U\x88\x8D6p\x90\x1A\xC4H\x9BX-l\xFE\xEFj\xC7\x90\xBCC\x9D\xB8yM                                                             )=\xA6\x9ELu\xB5US\xC0m+j\xB1\xF8\xE6\x8E_\xED\xD5\xD8?\xE4\xF2\xC03L\xF8\xFA?\xE1@\xCB\xDE\x06\xDC\xDCx\xEA\xAD\xBF\xA1p\x19\xD5y\xF0\x1F2\x8A\xB1J)\xE1\xC3\x0En\x9B\xBA?\x96\                                                             xBFp\x7F\x16\x0A\xD7\x12\x83\xF3&:\xCD\x14\xFA\xE00\xEB~f#\xFC\xBAg\xF66\xEF\x9E\x08#\xAD\x05\x1C\xC5\xEE\xAFw`x\x0Bu\xD2X_\xE5\x06V\x0E\x93\xBC\x82\x07\xDC6q\x1B\xC5f\xE1\xD9>                                                             i\xADi\xB1\xD3jV\x8Fd\x94A\xA6\x18\xE2\x9EG\xE84\x05&\xD9>\x04[\xDD\x9D\x19\x92\xEC6]\xC5\xD2\xED\xD5n\xDC\xE7\x105\x00\xB5\xE7\xD7Y\x87!\xAC08o\x0E\xDF,_\xAD=\x0D\xB0\xED\x19\                                                             x04\xE9.\x05\x1EzH\xCA\xD6\xCE\xB4\xFEl\x95o@\xAEAz\xD6\x8E?v\xA1\x9F@#\xC3\xBE^T\x90{\xBF\x16lO\x9C\x86\xF9`\x8D\\x8F\x10b\xA9\x0B\xEC\xAE5\xD8\xA1\xDDT\xA65\x1D\xE7\x82\xA5\x                                                             15\xA3\xC8\xE4\xE2\x8F\xB6\xE9q\x0C\xB9\x82b\xF8\xC7\xFDm\xD0\x0A\xE4\x04\xBE\xB6\xD0H,\xA9\x1BHg\xAF\xF89\xC0\x8E\xE2\xAB\x18X\xAFr\x10\xCF\xF5\xE0\x09\x1Eu\x1C\xC7#v\xEF\xDD\                                                             xF6=G[\x17\x1F\x9B\xE5%\x14\xD5\xD4\x80\x09r\x07\x07\xD2W\x95\xF4\xEB\x1E_\xF0\x01\x80NQ=6p\xA1\xAB\xF1\xD4\x94\x90\x1E\x8C\xE2\xE8\x8E\xBB\x92\xC6@\x82?\x179\xE7U\x99\xE9,\xFE                                                             r\xBB* ^I\x12\xC4g.\x06v\xF6\xDA\x98\x88\xFDU\x9D\x8D\x95\xD2\x98\xBF\x0Cz\x9C\x11\x8F\xEF\xF6\xEBbYx3\xA5)\xF5\xF0H\xEDqH\x8F>\x8E\xDD\xFE\xD6\x0D\x8B\x87L\x05\x0BMy\x1E\xBF\x                                                             E5\xEB_\xC0P\xDCD\xBF\xF5\xC5\xDE\xBB\xBA6\x15I\xA4\xC0\xD8K\xE0%\}\x19tm\xE9\xD33\x8A\xE57\x1CsiH\x09\xC3\x0D\x08(\xD7=\xDA_\xF4\xEB\xAB\xAC\x05a\x8C:6\xA5ZY\xBB\xE4bN\xCD\x82                                                             l\x8B\x176\x1B\xAFA\xFCw\x0AN\x94\xA2\xF5JT<?zP%#0\x1Cc\xAFI\xCD\xF56@\x85Q\xFB$[(\xECa|#\xB2\xBE`)1\xB8\xE0\xCD~\x95n\xD5\x0D:M\x1AM-4GP\xA2S\x06w\x01p\xB54\xE8\x14\xD9$\xF9G\                                                             xEB\x06j\x9DMzJ\xB7/\x8C\x92xM\x90\xDA\xF3dYIE6\xD8\xC5\xF1\xD6\x9B\xE9\V\x17Ar(#\xB9\xCD\x7F\xD7\x8BG \xB2L0\x16\x9D\xC2\xA4T\x83\x89'#\xC9\xA6\xBF\xE8 \x05\x84\x96,d\x0FQe\xE                                                             
0h2\xDBT\xE1l\xF2\x13\xDE\x90\xB3I&\xC8\xE6\xD7\xBB\x7F\x0D\xC7\x9D\x96\x03\xCF\xCCp`\xC2\xCD\x81\xACz\x97?U\xB7k\xC5\xDAq;\xE3\x15\xA5\x1C\xE2K\xED4\x1A\x1C\xA7\xB4t`\x92\x0A\                                                             x87\xB7\x04c\x10\x07::`\x1A\x19pd\x11|\xDA\x928\xC4m\x0Av;\xBBVS\x1D\x0F(\x98\x13'\x84\x8E\xE7\x09(\x97\xDBc\xCE\xF5\xB6\x9C?6{\x9C+7f\xC5\xD2e[\xEEyR! \x1FC\x0C\xA2\xAA"\x0F\x                                                             FCu\xBE\xB3\xD9\xB7<\x96D\xC8\x89\xCD96\xC7\xE1\x13\xEBB\xB7\\xC3A\xBC9\xDA\xB3\xAF\x0B\xB3-V\xFF\xE5bN\x96\x80mI\xDC;{\x02\xBE~#\xF3\x09&\xE0\x96\x0AX\x13\x1F\x14\xC4I\xA5\x89                                                             \xFA\xAEg\xD5n^\x87I\xD5\x84O~\x84\x80\xEE\x09\xAB\xA9\x18\xA0\x8D\x0B\xE2\xF1\xFC\x067                                                                                          0000000001                                                  column=info:data, timestamp=1505979502795, value=L\x9A\x16\x8CU\x01<\xBE\xC8\xEEk/\xD5\x8D\xF2`j\xA9\xD9@O!$\xA4\x91\x87\x1C`}\xA1\xFA\x17\xE1e\x98I=9rT%\x18#\xCA\xD0_X\xB0\x80                                                             \xCF!\xF7\x97>\xFA\xAB\xDC\x09r\xF8\x0F\xBAQ;\xB8\x03\x8A\x1B"$dk\xBEm\xD51\x05a|\x00\xC7\xC9\xCF\xE4=\xDD\xC0tE\xDF\xD2{|j}\xE1\x19=\x0C\xA3z\xC2\x1B\xCDfO^\xF6\xF3Y\x19t\xEA9                                                             \x08gp+\xE5\xC0\xD2\xFB\xE1\xCA\x8D\x80o\xDB/p\xA9\xE5\xCEQ\x8C\xDAx\x168\x1CK\xE8v'\xB6@cG\xB3\xD1k\xB1N\xE1\x18\xC2}>\x82\xA0\x8C\xC8\xF5\x8E=[\x8D\x85QBhyuq\xF3\xF6\x9E\xBE\                                                             x87\xC2`i\xFE\x87\xCA\x13\xFA/~\xF5\x9E\xDC\xA2\x07\xCE\x93\xF8\x11\xB9\x10\xD1\xB4\x14N\x0C\xE5\xE9P7$\xBE\xA1 C\xC1\xA7\x16\x9Em>4p\xAD\x91J\xDDv\xB6\xC4\x01H\x10\xDF\xFE\xD3                                                             \xDE^O\x9D\x94\xBC\xD2\xDCXt}\xAB\xA7L\xA6y$8-\x96~A\xD0\x1EQ{\x0DD\xB0\xE2\xBC\x0A\xC7\xED;\x14K\x8B\x0C0$0k[&F\x12\xC9\xA9\x96ia\xBD\xA6\xB6\x84:\x8B\xBD\xAC\xB5H\x03\xE8q\xF                                                             0\xE7\x0Dz\xA5u?4\xFAI\xA1T#<\xB2\x01O\xFB<q~\xBE\xC4\x8Eu\xDEf\x09\x02\xDB\xD2\xA8\xD2D{l2$\x93\x0F\xCFH:\xD9\xC6\xB6\xAF\xC7\xD8\xFBr\xEC0\x06\x8E\xA4Oo\x11\xDD\x06\xCEZn\xB2                                                             \xCA%\x8F\xFB@\xDAt\x7F\xDEu\xC5W"}s\x99B\x9B\xED\x0A\xC5B}C\xE0\x19\xA3\xCC\x83\xFDl\xE2\x1Ank\xEE7\x152,p\xBF\xCF\xC0\x10Jop\xAF\x91r\xBA\x03\xC5\x8C\x0E\xA1\xA9$\xD2mS\x19ir                                                             L]\x9Ex\xA5\x97\xCFGF\xEE0\xFA\xB4\x16+E\xFB&\xD5\xE4\xF1\xCB\x82\xFFF\xF9\x18\x84uG\x81\xEC\xFC\\xC1\xC6dh\xB3\xDA=\xB9\x9CA\x80I\x11\x14d\xA3S}\x0D\xA7\x04VC\x85\xA5\x11\xAE[                                                             \x19\x95\x01\x18\xB0\x1A&j\xF8_p\xB19I3\xE9Z\xF4?\x89\x11=\xA1\x7F\x0E\x8APm\x00\x91\xC6nSKd\xA0\xEA\x11B\x04\x11\xB7\x1D\xA5\x04\x97\x19\x8D\xB2\x10X\x07N\xE7\x8E'\xAE\xDA\xD7                                                             \x1E\xB7\x98\x84\xB6\x07tb,\x91\xEF\x10\xACF\x9E`\xF2:\x09R\xF4\xBA@oy.\x1BZ\xF1\x05\xFF\xEB\xA5\xDC\xD7'C?|\xEC\xBB\xEB,t939\x94S\xE0\x1A\xE0A\x873'\xA5:l\xEA\xAF\xCA\x94\xF0\                                                             x8A\x82"\xFB\xAA\xB9+\xA3\xFC\xE2\x99\xD1\x9FG\xE7s7<8\xF1\xDA!jp\x8D\xDE\x0AHF\xC5\xDEP\xDBE\x8EJ\x8Dh\xC7<p\x85\x0D"\xF2Fj\xEFcY#\xC8\xD3\x06\x02\x0F\xF5\xEA\xFC\x146T\xD3\xD                                                             
F\xD1\xBF\x9EP\xEE\xF3\x03|\xAC\x85\xA6\xF5\x0A\xC8\\xED\xC7\x88h\x004\xA4O\xFEo\x12\xE6\xC4\xF1\xE9z\xBB\xCC\x14\x82\xECM5\xC8"\x87\xB9\x9B\x8F\xAB\xB8\xF8u\xED\xDA\xE2\x1B\xF                                                             F\xE2H\xBDRn`\xD5\x95\x89\x9E\xCD_g]3\x03\xBB\xBF\x8A#\x9A\x9C{g\xD5-\x00s\xA6P\x8C\xDA\x98\xBF\xCD\xE5\xFA\xE4\x88\x12N\xC6C\b\xA9.\xD7I\xBD\xCD"\xC7\xC1><4\x08\xDBO\x09\xF8\x                                                             AA^\x12l\xF0\x02\x17D\xE6\xBD\xE7\x87\x98\x96\xB3\x9EY\x11V\xED\xA7\xAA\x83\x0D\x06G\xBCf\xEDla[P\x9C\x91\xC7\xE7\xA8\x08\xAE\xFE\xBB\x13%\x08\x0E\x89\xB2\xAF(\x8B\xD8}\x922|t\                                                             xA4\xA8\x01FH\xD2\xFD\xFD$\xCA\xB6\xCB!vK\xBC\xA9)z\x8A\xD9\x1B#^\x14G60\xC9\xE0\xA4\xF2(\x09\xC15\xEF\xC17\xCC\xEFe\x048\x0F\xEDe\xBB|\x94<\xAC\xE7!\x94j\xC7\x1E\x8F\xA7\xF1\x                                                             B9\xE9!JBc\xEE\x97[\x1B\x8E\xC9\x7F\xAB\xDF\x0E] \xF0\x98O\x89\xE1/Y\xBB\xE0\xBAU\xAFAG\x928UD\x8As\xDF\x88\x0Fi\xDD\x9A(\xF2N\xB1\x99\xBF\x1A\xB3m\xC3+m\x03\x1DY\x8Ah\xE9\x8A\                                                             x16_[1Am\x90K\xC9\x10\x94\x00\xB3\xE6S\xE6+\xDB\x82L5\x93\xC2,\xDB                                                             hadoop2版本:ROW                                                          COLUMN+CELL 00000000000000000000000000                                  column=info:data, timestamp=1506674058770, value=TTTTTTTTYYYYYYYYLLLLLLLLHHHHHHHHEEEEEEEETTTTTTTTSSSSSSSSHHHHHHHHDDDDDDDDFFFFFFFFKKKKKKKKLLLLLLLLMMMMMMMMKKKKKKKKZZZZZZZZIIIIIII                                                             IZZZZZZZZGGGGGGGGCCCCCCCCJJJJJJJJEEEEEEEEJJJJJJJJHHHHHHHHWWWWWWWWUUUUUUUUSSSSSSSSFFFFFFFFMMMMMMMMKKKKKKKKAAAAAAAAGGGGGGGGOOOOOOOOBBBBBBBBKKKKKKKKJJJJJJJJJJJJJJJJCCCCCCCCOOOOOOO                                                             OGGGGGGGGXXXXXXXXCCCCCCCCJJJJJJJJWWWWWWWWKKKKKKKKUUUUUUUUPPPPPPPPTTTTTTTTZZZZZZZZNNNNNNNNYYYYYYYYLLLLLLLLBBBBBBBBFFFFFFFFFFFFFFFFVVVVVVVVGGGGGGGGIIIIIIIIYYYYYYYYKKKKKKKKNNNNNNN                                                             NLLLLLLLLYYYYYYYYJJJJJJJJVVVVVVVVRRRRRRRROOOOOOOOAAAAAAAAUUUUUUUURRRRRRRRXXXXXXXXMMMMMMMMFFFFFFFFNNNNNNNNPPPPPPPPEEEEEEEENNNNNNNNLLLLLLLLLLLLLLLLMMMMMMMMUUUUUUUUVVVVVVVVZZZZZZZ                                                             ZUUUUUUUUCCCCCCCCVVVVVVVVJJJJJJJJQQQQQQQQCCCCCCCCMMMMMMMMKKKKKKKKBBBBBBBBUUUUUUUUTTTTTTTTXXXXXXXXRRRRRRRRHHHHHHHHRRRRRRRRYYYYYYYYGGGGGGGGFFFFFFFFHHHHHHHHJJJJJJJJYYYYYYYYRRRRRRR                                                             RBBBBBBBBZZZZZZZZOOOOOOOOUUUUUUUULLLLLLLLMMMMMMMMXXXXXXXXGGGGGGGGYYYYYYYYBBBBBBBBUUUUUUUUDDDDDDDDQQQQQQQQBBBBBBBBVVVVVVVVPPPPPPPPSSSSSSSSMMMMMMMMQQQQQQQQHHHHHHHHNNNNNNNN        00000000000000000000000001                                  column=info:data, timestamp=1506674058770, value=MMMMMMMMZZZZZZZZUUUUUUUUGGGGGGGGAAAAAAAACCCCCCCCGGGGGGGGNNNNNNNNXXXXXXXXXXXXXXXXNNNNNNNNNNNNNNNNQQQQQQQQMMMMMMMMZZZZZZZZTTTTTTT                                                             TNNNNNNNNRRRRRRRRFFFFFFFFOOOOOOOORRRRRRRRXXXXXXXXRRRRRRRRKKKKKKKKOOOOOOOOCCCCCCCCDDDDDDDDBBBBBBBBXXXXXXXXVVVVVVVVLLLLLLLLHHHHHHHHYYYYYYYYUUUUUUUUQQQQQQQQJJJJJJJJSSSSSSSSUUUUUUU                                                             UTTTTTTTTDDDDDDDDTTTTTTTTUUUUUUUUNNNNNNNNGGGGGGGGMMMMMMMMHHHHHHHHZZZZZZZZUUUUUUUUXXXXXXXXSSSSSSSSFFFFFFFFPPPPPPPPYYYYYYYYNNNNNNNNHHHHHHHHTTTTTTTTFFFFFFFFWWWWWWWWFFFFFFFFBBBBBBB                            
                                 BJJJJJJJJKKKKKKKKCCCCCCCCHHHHHHHHTTTTTTTTKKKKKKKKEEEEEEEESSSSSSSSIIIIIIIIUUUUUUUUJJJJJJJJKKKKKKKKUUUUUUUUFFFFFFFFSSSSSSSSNNNNNNNNRRRRRRRRQQQQQQQQOOOOOOOOYYYYYYYYCCCCCCCCGGGGGGG                                                             GQQQQQQQQCCCCCCCCPPPPPPPPPPPPPPPPFFFFFFFFPPPPPPPPKKKKKKKKAAAAAAAAHHHHHHHHRRRRRRRRWWWWWWWWHHHHHHHHYYYYYYYYWWWWWWWWGGGGGGGGKKKKKKKKIIIIIIIIMMMMMMMMAAAAAAAATTTTTTTTGGGGGGGGAAAAAAA                                                             ANNNNNNNNCCCCCCCCEEEEEEEEFFFFFFFFTTTTTTTTBBBBBBBBSSSSSSSSSSSSSSSSRRRRRRRROOOOOOOORRRRRRRRDDDDDDDDTTTTTTTTBBBBBBBBHHHHHHHHCCCCCCCCYYYYYYYYKKKKKKKKAAAAAAAALLLLLLLLUUUUUUUU

Note: it is often said online that disabling HBase's WAL mechanism improves write performance. HBase persists every operation to the WAL before writing the data, so that after a failure it can replay the WAL to recover data that has not yet been persisted; disabling it is therefore not recommended, because data may be lost after an unexpected crash. I have not tested the effect of this myself.
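For reference, in the HBase 1.0 client API the WAL can be skipped per mutation. A minimal sketch of the relevant lines, to be placed inside the insert loop of the test code above (untested by me, and with the data-loss caveat just mentioned; requires import org.apache.hadoop.hbase.client.Durability):

Put p = new Put(rowname.getBytes());
p.setDurability(Durability.SKIP_WAL);   // skip the write-ahead log for this Put: faster writes, but data not yet flushed to HFiles is lost if the RegionServer crashes
p.add("info".getBytes(), "data".getBytes(), "XXXXXXXX".getBytes());   // placeholder payload
list.add(p);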


See also the official Cloudera documentation: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cdh_ig_hbase_tools.html
