HBase Source Code Walkthrough: The Store Implementation


An HBase table (HTable) is managed roughly like this: the table is split into N Regions; inside a Region, each column family gets its own Store; and a Store manages one MemStore plus zero or more StoreFiles.
In short:

Region : CF = 1 : N (N >= 1)
CF : Store = 1 : 1
Store : MemStore = 1 : 1
Store : StoreFile = 1 : N (N >= 0)

Store is an interface, and the comment at the top of Store.java starts with this sentence:

Interface for objects that hold a column family in a Region. Its a memstore and a set of zero or more StoreFiles, which stretch backwards over time
In other words, this interface represents one column family within one Region: a MemStore plus a set of zero or more StoreFiles, which keeps growing as time goes on.

Since Store is only an interface and HStore.java is currently its sole implementation, we jump straight to HStore and start from there.
The class comment of HStore.java adds a few more points:

There’s no reason to consider append-logging at this level; all logging and locking is handled at the HRegion level. Store just provides services to manage sets of StoreFiles. One of the most important of those services is compaction services where files are aggregated once they pass a configurable threshold.
There is no reason to do append-logging (i.e. the HLog, also called the WAL, write-ahead log) at this level; all logging and locking is handled at the HRegion level. Store only provides services for managing sets of StoreFiles. One of the most important of those services is compaction: once the number of files passes a configurable threshold, they are merged together.

The only thing having to do with logs that Store needs to deal with is the reconstructionLog. This is a segment of an HRegion’s log that might NOT be present upon startup. If the param is NULL, there’s nothing to do. If the param is non-NULL, we need to process the log to reconstruct a TreeMap that might not have been written to disk before the process died.
The only log-related thing a Store has to deal with is the reconstruction log. This is a segment of the HRegion's log that may not be present at startup. If the parameter is null, there is nothing to do; if it is non-null, the log is processed to rebuild a TreeMap that may not have been written to disk before the process died.

It’s assumed that after this constructor returns, the reconstructionLog file will be deleted (by whoever has instantiated the Store)
HStore assumes that once this constructor returns, the reconstruction log will be deleted, by whoever instantiated the Store.

Locking and transactions are handled at a higher level. This API should not be called directly but by an HRegion manager.
Locking and transactions are handled at a higher level. This API should not be called directly; only the managing HRegion should use it.

Having read the comments, the picture should be roughly clear: there is one more level above HStore, namely HRegion. HStore is managed by HRegion and takes care of aggregating StoreFiles (compaction, further divided into major compaction, minor compaction, and so on).

HStore.java has three important member variables, two of which you can probably guess: the MemStore and the StoreFiles. The StoreFiles are managed through a StoreEngine, which encapsulates the chosen policy (the common ones are DefaultStoreEngine and StripeStoreEngine, plus the more specialized DateTieredStoreEngine, which is optimized for time-ordered data). The third is CacheConfig: a Store is not only responsible for writes; when a Scan or Get comes in it also has to read blocks and cache them in the BlockCache (LruBlockCache, CombinedBlockCache, BucketCache, and so on). So the three member variables worth focusing on are MemStore, StoreEngine and CacheConfig.
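As a rough orientation, the three fields look something like this in the 1.x source (declarations quoted from memory, so treat the exact modifiers as indicative rather than authoritative):

public class HStore implements Store {
  private MemStore memstore;                            // in-memory write buffer for this column family
  protected final StoreEngine<?, ?, ?, ?> storeEngine;  // flusher, compaction policy/compactor and StoreFile manager
  private CacheConfig cacheConf;                        // block cache settings used when reading HFiles
  // ...
}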

Below is the declaration of StoreEngine; its four type parameters already give a rough idea of what it is responsible for.

public abstract class StoreEngine<SF extends StoreFlusher, CP extends CompactionPolicy, C extends Compactor, SFM extends StoreFileManager>
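For a concrete binding of those type parameters, the default engine's declaration looks roughly like this (paraphrased from memory of the 1.x source, so verify the exact type arguments against your version):

public class DefaultStoreEngine extends StoreEngine<
    DefaultStoreFlusher, RatioBasedCompactionPolicy, DefaultCompactor, DefaultStoreFileManager>

So an engine bundles one flush strategy, one compaction policy, one compactor and one StoreFileManager; StripeStoreEngine and DateTieredStoreEngine simply plug different implementations into the same four roles.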

OK. Next there is a comment in HStore about how the lock is used:

RWLock for store operations
Locked in shared mode when the list of component stores is looked at:
- all reads/writes to table data
- checking for split
Locked in exclusive mode when the list of component stores is modified:
- closing
- completing a compaction

In other words: shared (read) mode is enough for ordinary reads and writes of table data and for split checks, because they only look at the list of StoreFiles; exclusive (write) mode is needed when that list changes, i.e. when the Store is closed or a compaction completes.

Now let's look at the HStore constructor:

protected HStore(final HRegion region, final HColumnDescriptor family, final Configuration confParam) throws IOException

This matches what the comments said: a Store is managed by its Region, at the granularity of one column family.
The beginning of the constructor initializes the usual suspects: getting the file system, creating this column family's directory for the Region on it (a Store manages one or more StoreFiles, so it has to be a directory), setting up the block size, the DataBlockEncoder, and so on. The constructor is long, so that part is skipped here. After it comes an interesting configuration.

long timeToPurgeDeletes = Math.max(conf.getLong("hbase.hstore.time.to.purge.deletes", 0), 0);

A word on hbase.hstore.time.to.purge.deletes; be careful not to confuse it with TTL. Remember that a delete in HBase only writes a delete marker, and the data is really removed during a major compaction; TTL, by contrast, deletes rows whose age exceeds the configured lifetime. The point of this parameter is to delay the purge for an extra period of time, to tolerate puts arriving out of order through replication between two clusters. It defaults to 0, i.e. purging stays in step with the TTL; if set to a non-zero value it must not exceed the TTL, otherwise the setting has no effect.

this.blockingFileCount = conf.getInt(BLOCKING_STOREFILES_KEY, DEFAULT_BLOCKING_STOREFILE_COUNT);

hbase.hstore.blockingStoreFiles: when the MemStore flushes, it checks how many StoreFiles this HStore currently manages (the default limit is 10). If the count is over the limit, the flush waits either for a compaction to finish or for hbase.hstore.blockingWaitTime to elapse. In practice, if the Store : StoreFile ratio you see on the web UI (port 60010) gets close to 1:10, it is time to tune the RegionServer for that table's workload or to scale the table out.
Next:
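To make the threshold concrete, here is a minimal sketch of the check it implies (illustrative only, not the actual flush path; Store.getStorefilesCount() and the configuration key are real, the helper method itself is hypothetical):

// Hypothetical helper: true when this store has hit the blocking threshold, i.e. a flush
// would have to wait for a compaction or for hbase.hstore.blockingWaitTime to run out.
static boolean flushWouldBlock(Store store, Configuration conf) {
  int blockingFileCount = conf.getInt("hbase.hstore.blockingStoreFiles", 10);
  return store.getStorefilesCount() >= blockingFileCount;
}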

this.compactionCheckMultiplier = conf.getInt(
    COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY, DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER);
if (this.compactionCheckMultiplier <= 0) {
  LOG.error("Compaction check period multiplier must be positive, setting default: "
      + DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER);
  this.compactionCheckMultiplier = DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER;
}

hbase.server.compactchecker.interval.multiplier controls how often the store is checked for a needed compaction. Normally a compaction is triggered by events such as a flush, but if a Region receives very few writes, or a special compaction policy is in use, a periodic check is still required. The check interval is the product of hbase.server.compactchecker.interval.multiplier and hbase.server.thread.wakefrequency, as the small calculation below shows.
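A quick sanity check with the stock defaults (multiplier 1000, wake frequency 10000 ms; verify these defaults against your version):

// compaction check interval = multiplier * wake frequency
long multiplier = 1000L;                              // hbase.server.compactchecker.interval.multiplier (default)
long wakeFrequencyMs = 10 * 1000L;                    // hbase.server.thread.wakefrequency (default)
long checkIntervalMs = multiplier * wakeFrequencyMs;  // 10,000,000 ms, roughly 2.8 hours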

if (HStore.closeCheckInterval == 0) {
  HStore.closeCheckInterval = conf.getInt(
      "hbase.hstore.close.check.interval", 10*1000*1000 /* 10 MB */);
}

hbase.hstore.close.check.interval: how many bytes are written before the compaction checks whether the store is being closed (this part is covered together with the Compactor).

this.storeEngine = StoreEngine.create(this, this.conf, this.comparator);
// On initialization the StoreEngine loads all of the files it manages
this.storeEngine.getStoreFileManager().loadFiles(loadStoreFiles());
// Load the checksum algorithm from hbase.hstore.checksum.algorithm; the default is CRC32, CRC32C is also available
this.checksumType = getChecksumType(conf);
// A checksum is written for every hbase.hstore.bytes.per.checksum bytes of data
this.bytesPerChecksum = getBytesPerChecksum(conf);
// A flush may fail; this is the number of retries, default 10
flushRetriesNumber = conf.getInt(
    "hbase.hstore.flush.retries.number", DEFAULT_FLUSH_RETRIES_NUMBER);
// On a failed flush, Thread.sleep(pauseTime) before retrying; default 1 second
pauseTime = conf.getInt(HConstants.HBASE_SERVER_PAUSE, HConstants.DEFAULT_HBASE_SERVER_PAUSE);
if (flushRetriesNumber <= 0) {
  throw new IllegalArgumentException(
      "hbase.hstore.flush.retries.number must be > 0, not "
          + flushRetriesNumber);
}

The constructor calls a method, private List<StoreFile> loadStoreFiles() throws IOException, whose job, as the name suggests, is to get the StoreFile paths from the file system and open them in parallel. Functionally there is nothing special about it; what is worth noting is the use of the CompletionService interface from java.util.concurrent, essentially a bundle of Futures: the StoreFiles are opened in parallel, and the bundled results are then collected and iterated to finish initializing the StoreFiles.

// initialize the thread pool for opening store files in parallel..
ThreadPoolExecutor storeFileOpenerThreadPool =
  this.region.getStoreFileOpenAndCloseThreadPool("StoreFileOpenerThread-" +
      this.getColumnFamilyName());
CompletionService<StoreFile> completionService =
  new ExecutorCompletionService<StoreFile>(storeFileOpenerThreadPool);
int totalValidStoreFile = 0;
for (final StoreFileInfo storeFileInfo: files) {
  // open each store file in parallel
  completionService.submit(new Callable<StoreFile>() {
    @Override
    public StoreFile call() throws IOException {
      StoreFile storeFile = createStoreFileAndReader(storeFileInfo);
      return storeFile;
    }
  });
  totalValidStoreFile++;
}
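The snippet above only shows the submission half. The collection half follows the standard CompletionService pattern; here is a condensed sketch, reusing the variables from the snippet above (paraphrased, not the verbatim HBase source):

// Gather the opened StoreFiles; take() hands back futures in completion order.
ArrayList<StoreFile> results = new ArrayList<StoreFile>(files.size());
IOException ioe = null;
try {
  for (int i = 0; i < totalValidStoreFile; i++) {
    try {
      StoreFile storeFile = completionService.take().get();
      if (storeFile != null) {
        results.add(storeFile);
      }
    } catch (InterruptedException e) {
      if (ioe == null) ioe = new InterruptedIOException(e.getMessage());
    } catch (ExecutionException e) {
      if (ioe == null) ioe = new IOException(e.getCause());
    }
  }
} finally {
  storeFileOpenerThreadPool.shutdownNow();  // opening finished (or failed), release the pool
}
if (ioe != null) {
  throw ioe;
}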

HStore's add and delete operations, as the code shows, are all handed to the MemStore; note that the lock taken here is the store's own read/write lock on its StoreFile list (row-level locking is handled up in HRegion).

// Adding and deleting a KeyValue both go through the MemStore; the return value is the resulting change in heap size
public long add(final KeyValue kv) {
  lock.readLock().lock();
  try {
    return this.memstore.add(kv);
  } finally {
    lock.readLock().unlock();
  }
}

protected long delete(final KeyValue kv) {
  lock.readLock().lock();
  try {
    return this.memstore.delete(kv);
  } finally {
    lock.readLock().unlock();
  }
}

// Mainly used to undo a KeyValue and resolve inconsistencies caused by snapshots
public void rollback(final KeyValue kv) {
  lock.readLock().lock();
  try {
    this.memstore.rollback(kv);
  } finally {
    lock.readLock().unlock();
  }
}

After these come some bulk-load related methods. Bulk load deserves a post of its own, so the methods below are skipped here:

public void assertBulkLoadHFileOk(Path srcPath) throws IOException
public void bulkLoadHFile(String srcPathStr, long seqNum) throws IOException

Now for another important method: close().

public ImmutableCollection<StoreFile> close() throws IOException {
  // Take the write lock on the whole store so nothing can be written after close starts.
  this.lock.writeLock().lock();
  try {
    // Clear the open StoreFiles from the StoreEngine; the method returns the files that were open.
    ImmutableCollection<StoreFile> result = storeEngine.getStoreFileManager().clearFiles();
    if (!result.isEmpty()) {
      // Set up a thread pool to close the StoreFiles in parallel. Under light load a Store usually
      // keeps four or five StoreFiles; under heavy load the Store : StoreFile ratio can exceed 1:10.
      ThreadPoolExecutor storeFileCloserThreadPool = this.region
          .getStoreFileOpenAndCloseThreadPool("StoreFileCloserThread-"
              + this.getColumnFamilyName());

      // close each store file in parallel
      CompletionService<Void> completionService =
        new ExecutorCompletionService<Void>(storeFileCloserThreadPool);
      for (final StoreFile f : result) {
        completionService.submit(new Callable<Void>() {
          @Override
          public Void call() throws IOException {
            boolean evictOnClose =
                cacheConf != null? cacheConf.shouldEvictOnClose(): true;
            f.closeReader(evictOnClose);
            return null;
          }
        });
      }

      // Walk through the completionService results, collect any error and rethrow it.
      IOException ioe = null;
      try {
        for (int i = 0; i < result.size(); i++) {
          try {
            Future<Void> future = completionService.take();
            future.get();
          } catch (InterruptedException e) {
            if (ioe == null) {
              ioe = new InterruptedIOException();
              ioe.initCause(e);
            }
          } catch (ExecutionException e) {
            if (ioe == null) ioe = new IOException(e.getCause());
          }
        }
      } finally {
        storeFileCloserThreadPool.shutdownNow();
      }
      if (ioe != null) throw ioe;
    }
    LOG.info("Closed " + this);
    return result;
  } finally {
    // Closing is done; release the write lock so HRegion can carry on with the rest of the shutdown.
    this.lock.writeLock().unlock();
  }
}

One of the most frequently called methods on a Store: flushCache.

// Note the type of flushedSize (AtomicLong): the global MemStore size accounting is shared
// by all Regions on a RegionServer, so an atomic variable is used here.
protected List<Path> flushCache(final long logCacheFlushId,
    SortedSet<KeyValue> snapshot,
    TimeRangeTracker snapshotTimeRangeTracker,
    AtomicLong flushedSize,
    MonitoredTask status) throws IOException {
  // If an exception happens while flushing, let it out without clearing the memstore snapshot;
  // the old snapshot will be returned when "snapshot" is asked for again, on the next flush.
  // On an exception we retry, and give up after hbase.hstore.flush.retries.number attempts.
  StoreFlusher flusher = storeEngine.getStoreFlusher();  // everything StoreFile-related goes through the StoreEngine
  IOException lastException = null;
  for (int i = 0; i < flushRetriesNumber; i++) {
    try {
      List<Path> pathNames = flusher.flushSnapshot(
          snapshot, logCacheFlushId, snapshotTimeRangeTracker, flushedSize, status);
      Path lastPathName = null;
      try {
        for (Path pathName : pathNames) {
          lastPathName = pathName;
          // validateStoreFile simply builds a StoreFile reader for pathName and sees whether that throws
          validateStoreFile(pathName);
        }
        // The StoreFiles validated fine, we are done
        return pathNames;
      } catch (Exception e) {
        // If anything goes wrong, log a warning, then sleep and let the for loop flush again
        LOG.warn("Failed validating store file " + lastPathName + ", retrying num=" + i, e);
        if (e instanceof IOException) {
          lastException = (IOException) e;
        } else {
          lastException = new IOException(e);
        }
      }
    } catch (IOException e) {
      LOG.warn("Failed flushing store file, retrying num=" + i, e);
      lastException = e;
    }
    if (lastException != null && i < (flushRetriesNumber - 1)) {
      try {
        Thread.sleep(pauseTime);
      } catch (InterruptedException e) {
        IOException iie = new InterruptedIOException();
        iie.initCause(e);
        throw iie;
      }
    }
  }
  throw lastException;
}

Now for the centrepiece of HStore: compaction.
As usual, the comments first.

Compact the StoreFiles. This method may take some time, so the calling thread must be able to block for long periods.

During this time, the Store can work as usual, getting values from StoreFiles and writing new StoreFiles from the memstore.
(That is, while a compaction runs the Store still serves reads from its StoreFiles and still flushes the MemStore into new StoreFiles; it just gets a bit slower.)

Existing StoreFiles are not destroyed until the new compacted StoreFile is completely written-out to disk.

The compactLock prevents multiple simultaneous compactions.

The structureLock prevents us from interfering with other write operations.
We don't want to hold the structureLock for the whole time, as a compact() can be lengthy and we want to allow cache-flushes during this period.

Compaction events must be idempotent, since there is no IO fencing for the region directory in HDFS: a RegionServer might still try to complete a compaction after it has already lost the region. That is why the following events are carefully ordered for a compaction:

  1. Compaction writes new files under the region/.tmp directory (the compaction output).
  2. Compaction atomically moves the temporary files into the region directory.
  3. Compaction appends a WAL edit containing the compaction input and output files, and forces a sync on the WAL.
  4. Compaction deletes the input files from the region directory.

Failure conditions are handled like this:
- If the RS fails before 2, the compaction won't complete. Even if the RS lives on and finishes the compaction later, it will only write the new data file to the region directory. Since we already have this data, the operation is idempotent, but we end up with a redundant copy of the data.
- If the RS fails between 2 and 3, the region will have a redundant copy of the data. The RS that failed won't be able to finish sync() on the WAL because of WAL lease recovery.
- If the RS fails after 3, the region server that opens the region will pick up the compaction marker from the WAL and replay it by removing the compaction input files. The failed RS can also attempt to delete those files, but the operation is idempotent.

public List<StoreFile> compact(CompactionContext compaction,
    CompactionThroughputController throughputController, User user) throws IOException {
  assert compaction != null;
  List<StoreFile> sfs = null;          // the list of files produced by the compaction
  CompactionRequest cr = compaction.getRequest();
  try {
    // A sanity check happens here first
    long compactionStartTime = EnvironmentEdgeManager.currentTimeMillis();
    assert compaction.hasSelection();
    Collection<StoreFile> filesToCompact = cr.getFiles();
    assert !filesToCompact.isEmpty();
    synchronized (filesCompacting) {
      // The sanity check: the requested files must all be among the files currently marked as compacting
      Preconditions.checkArgument(filesCompacting.containsAll(filesToCompact));
    }

    // Sanity check passed, start the merge; this INFO line shows up in RS logs all the time
    LOG.info("Starting compaction of " + filesToCompact.size() + " file(s) in "
        + this + " of " + this.getRegionInfo().getRegionNameAsString()
        + " into tmpdir=" + fs.getTempDir() + ", totalSize="
        + StringUtils.humanReadableInt(cr.getSize()));

    // Step 1 from the comment above: the compaction itself. The concrete behaviour differs per
    // StoreEngine; see the separate post on the compaction process.
    List<Path> newFiles = compaction.compact(throughputController, user);

    // If hbase.hstore.compaction.complete is set to false, the blocks of the new hfiles are
    // evicted when their readers are closed.
    if (!this.conf.getBoolean("hbase.hstore.compaction.complete", true)) {
      LOG.warn("hbase.hstore.compaction.complete is set to false");
      sfs = new ArrayList<StoreFile>(newFiles.size());
      final boolean evictOnClose =
          cacheConf != null? cacheConf.shouldEvictOnClose(): true;
      for (Path newFile : newFiles) {
        StoreFile sf = createStoreFileAndReader(newFile);
        sf.closeReader(evictOnClose);
        sfs.add(sf);
      }
      return sfs;
    }

    // Step 2: move the compacted StoreFiles into place; the real work happens in HRegionFileSystem.commitStoreFile
    sfs = moveCompatedFilesIntoPlace(cr, newFiles, user);
    // Step 3: write the compaction marker to the WAL, recording the compaction's input and output files
    writeCompactionWalRecord(filesToCompact, sfs);
    replaceStoreFiles(filesToCompact, sfs);
    if (cr.isMajor()) {
      majorCompactedCellsCount += getCompactionProgress().totalCompactingKVs;
      majorCompactedCellsSize += getCompactionProgress().totalCompactedSize;
    } else {
      compactedCellsCount += getCompactionProgress().totalCompactingKVs;
      compactedCellsSize += getCompactionProgress().totalCompactedSize;
    }
    // Step 4: finish the compaction, delete the old StoreFiles and update the store size
    completeCompaction(filesToCompact);
    // Log that the compaction has finished
    logCompactionEndMessage(cr, sfs, compactionStartTime);
    return sfs;
  } finally {
    finishCompactionRequest(cr);
  }
}

That covers the main flow HStore performs for a compaction. Let's look at one more method that matters a great deal to compact(): requestCompaction, which selects the files that will actually be compacted.

public CompactionContext requestCompaction(int priority, final CompactionRequest baseRequest,
    User user) throws IOException {
  // When writes to the Store are disabled, no compaction request is accepted
  if (!this.areWritesEnabled()) {
    return null;
  }
  // Filter TTL-expired StoreFiles out of the compaction candidates; the read lock is held while doing so
  removeUnneededFiles();
  final CompactionContext compaction = storeEngine.createCompaction();
  this.lock.readLock().lock();
  try {
    synchronized (filesCompacting) {
      final Store thisStore = this;
      // Ask the coprocessors whether they want to modify the StoreFile list. In other words, if some
      // StoreFiles must be kept for special reasons, that can be done directly through a coprocessor;
      // the resulting file list is then used to build the CompactionRequest.
      if (this.getCoprocessorHost() != null) {
        final List<StoreFile> candidatesForCoproc = compaction.preSelect(this.filesCompacting);
        boolean override = false;
        if (user == null) {
          override = getCoprocessorHost().preCompactSelection(this, candidatesForCoproc,
            baseRequest);
        } else {
          try {
            override = user.getUGI().doAs(new PrivilegedExceptionAction<Boolean>() {
              @Override
              public Boolean run() throws Exception {
                return getCoprocessorHost().preCompactSelection(thisStore, candidatesForCoproc,
                  baseRequest);
              }
            });
          } catch (InterruptedException ie) {
            InterruptedIOException iioe = new InterruptedIOException();
            iioe.initCause(ie);
            throw iioe;
          }
        }
        if (override) {
          // Coprocessor is overriding normal file selection.
          compaction.forceSelect(new CompactionRequest(candidatesForCoproc));
        }
      }

      // Normal case - coprocessor is not overriding file selection.
      if (!compaction.hasSelection()) {
        boolean isUserCompaction = priority == Store.PRIORITY_USER;
        // HBase defines the off-peak window with hbase.offpeak.start.hour and hbase.offpeak.end.hour
        // (both default to -1, i.e. no off-peak window). If the compaction falls inside that window,
        // hbase.hstore.compaction.max.size.offpeak is used as the maximum compaction size; outside it,
        // hbase.hstore.compaction.max.size is used. If hbase.hstore.compaction.max.size.offpeak is not
        // set, it is initialized from hbase.hstore.compaction.max.size.
        boolean mayUseOffPeak = offPeakHours.isOffPeakHour() &&
            offPeakCompactionTracker.compareAndSet(false, true);
        try {
          compaction.select(this.filesCompacting, isUserCompaction,
            mayUseOffPeak, forceMajor && filesCompacting.isEmpty());
        } catch (IOException e) {
          if (mayUseOffPeak) {
            offPeakCompactionTracker.set(false);
          }
          throw e;
        }
        assert compaction.hasSelection();
        if (mayUseOffPeak && !compaction.getRequest().isOffPeak()) {
          // Compaction policy doesn't want to take advantage of off-peak.
          offPeakCompactionTracker.set(false);
        }
      }
      if (this.getCoprocessorHost() != null) {
        if (user == null) {
          this.getCoprocessorHost().postCompactSelection(
            this, ImmutableList.copyOf(compaction.getRequest().getFiles()), baseRequest);
        } else {
          try {
            user.getUGI().doAs(new PrivilegedExceptionAction<Void>() {
              @Override
              public Void run() throws Exception {
                getCoprocessorHost().postCompactSelection(
                  thisStore, ImmutableList.copyOf(compaction.getRequest().getFiles()), baseRequest);
                return null;
              }
            });
          } catch (InterruptedException ie) {
            InterruptedIOException iioe = new InterruptedIOException();
            iioe.initCause(ie);
            throw iioe;
          }
        }
      }
      // If a custom selection was passed in (baseRequest), combine its files with the current CompactionRequest
      if (baseRequest != null) {
        compaction.forceSelect(
            baseRequest.combineWith(compaction.getRequest()));
      }
      // The policies have now produced the list of files to compact; if it is empty, return null,
      // otherwise add the files to the filesCompacting list
      final Collection<StoreFile> selectedFiles = compaction.getRequest().getFiles();
      if (selectedFiles.isEmpty()) {
        return null;
      }
      addToCompactingFiles(selectedFiles);
      // If every StoreFile was selected this is a major compaction, and forceMajor is reset to false
      // (I haven't fully worked out what this flag is for).
      boolean isMajor = selectedFiles.size() == this.getStorefilesCount();
      this.forceMajor = this.forceMajor && !isMajor;
      // Compactions have two priority levels, PRIORITY_USER and NO_PRIORITY; user-requested ones rank highest
      compaction.getRequest().setPriority(
          (priority != Store.NO_PRIORITY) ? priority : getCompactPriority());
      compaction.getRequest().setIsMajor(isMajor);
      compaction.getRequest().setDescription(
          getRegionInfo().getRegionNameAsString(), getColumnFamilyName());
    }
  } finally {
    this.lock.readLock().unlock();
  }
  LOG.debug(getRegionInfo().getEncodedName() + " - " + getColumnFamilyName() + ": Initiating "
      + (compaction.getRequest().isMajor() ? "major" : "minor") + " compaction");
  this.region.reportCompactionRequestStart(compaction.getRequest().isMajor());
  return compaction;
}