Solr 4.7 Source Code Analysis - Startup (Part 4)
The previous article covered how, during Solr 4.7 startup, the properties from the solrconfig.xml configuration file are instantiated while a core is being created. Once those properties have been instantiated, the getSearcher() method takes center stage.
Solr calls this method at startup to create a new SolrIndexSearcher. The registered searcher is the default searcher used to service queries. Normally, a searcher is warmed and the event handlers (newSearcher or firstSearcher events) are run before it is registered; but when getSearcher() is called and no searcher is registered yet, the newly created searcher is registered before the event handlers finish running (a slow searcher is better than no searcher).
The method takes a forceNew parameter. When it is true, a new searcher is always created, regardless of whether one is already registered or another is in the process of being created. When it is false: if a searcher is already registered, that searcher is returned directly; if a searcher is currently being created, the call blocks until it has been created and registered and then returns it; if neither is the case, a new searcher is created.
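The return-or-wait-or-open decision described above can be distilled into a small standalone sketch. Everything here is illustrative, not Solr's actual API: `SearcherGate`, `registered` (standing in for `_searcher`), and `onDeck` (standing in for `onDeckSearchers`) are made-up names, and the `forceNew` branch is omitted for brevity.

```java
// A stripped-down sketch of the coordination pattern getSearcher() uses:
// a thread either returns the registered searcher, waits for another thread
// that is already opening one, or claims the job of opening one itself.
public class SearcherGate {
  private final Object lock = new Object();
  private Object registered;   // stands in for _searcher
  private int onDeck;          // searchers currently being opened

  /** Returns the registered searcher, or null if the caller should open one. */
  public Object getOrWait() throws InterruptedException {
    synchronized (lock) {
      if (registered != null) return registered;
      while (onDeck > 0 && registered == null) {
        lock.wait();           // someone else is opening one; wait for register()
      }
      if (registered != null) return registered;
      onDeck++;                // we will open it ourselves
      return null;             // caller proceeds to open a new searcher
    }
  }

  /** Called by the opener once the new searcher is ready. */
  public void register(Object searcher) {
    synchronized (lock) {
      registered = searcher;
      onDeck--;
      lock.notifyAll();        // wake every waiter, as registerSearcher() does
    }
  }
}
```

The real method re-checks `_searcher` after waking up (a wait can return before a searcher is registered), which is why the sketch also re-tests `registered` after the loop.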
When the returnSearcher parameter is true, the method returns a RefCounted&lt;SolrIndexSearcher&gt; with its reference count incremented.
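As a rough illustration of what the RefCounted wrapper does, here is a minimal reference-counting holder. This is a sketch: `SimpleRefCounted` and the `closeHook` parameter are invented for illustration; only `get`/`incref`/`decref` mirror the shape of Solr's RefCounted API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the reference-counting idea behind RefCounted<T>:
// each holder incref()s while using the resource and decref()s when done;
// the resource is closed when the count drops to zero.
public class SimpleRefCounted<T> {
  private final T resource;
  private final Runnable closeHook;                          // runs when refcount hits 0
  private final AtomicInteger refCount = new AtomicInteger(1); // creator holds 1 reference

  public SimpleRefCounted(T resource, Runnable closeHook) {
    this.resource = resource;
    this.closeHook = closeHook;
  }

  public T get() { return resource; }

  public void incref() { refCount.incrementAndGet(); }

  public void decref() {
    // last reference released: nobody can use the resource any more, close it
    if (refCount.decrementAndGet() == 0) {
      closeHook.run();
    }
  }

  public int getRefcount() { return refCount.get(); }
}
```

This is why the javadoc below insists the returned holder "must be decremented when no longer needed": a leaked reference keeps the old searcher (and its index files) open forever.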
Now let's look at the code itself; the method's own javadoc comment already describes its behavior in detail:
```java
/**
 * Get a {@link SolrIndexSearcher} or start the process of creating a new one.
 * <p>
 * The registered searcher is the default searcher used to service queries.
 * A searcher will normally be registered after all of the warming
 * and event handlers (newSearcher or firstSearcher events) have run.
 * In the case where there is no registered searcher, the newly created searcher will
 * be registered before running the event handlers (a slow searcher is better than no searcher).
 * <p>
 * These searchers contain read-only IndexReaders. To access a non read-only IndexReader,
 * see newSearcher(String name, boolean readOnly).
 * <p>
 * If <tt>forceNew==true</tt> then
 * A new searcher will be opened and registered regardless of whether there is already
 * a registered searcher or other searchers in the process of being created.
 * <p>
 * If <tt>forceNew==false</tt> then:<ul>
 *   <li>If a searcher is already registered, that searcher will be returned</li>
 *   <li>If no searcher is currently registered, but at least one is in the process of being created, then
 *       this call will block until the first searcher is registered</li>
 *   <li>If no searcher is currently registered, and no searchers in the process of being registered, a new
 *       searcher will be created.</li>
 * </ul>
 * <p>
 * If <tt>returnSearcher==true</tt> then a {@link RefCounted}<{@link SolrIndexSearcher}> will be returned with
 * the reference count incremented. It <b>must</b> be decremented when no longer needed.
 * <p>
 * If <tt>waitSearcher!=null</tt> and a new {@link SolrIndexSearcher} was created,
 * then it is filled in with a Future that will return after the searcher is registered. The Future may be set to
 * <tt>null</tt> in which case the SolrIndexSearcher created has already been registered at the time
 * this method returned.
 *
 * @param forceNew             if true, force the open of a new index searcher regardless if there is already one open.
 * @param returnSearcher       if true, returns a {@link SolrIndexSearcher} holder with the refcount already incremented.
 * @param waitSearcher         if non-null, will be filled in with a {@link Future} that will return after the new searcher is registered.
 * @param updateHandlerReopens if true, the UpdateHandler will be used when reopening a {@link SolrIndexSearcher}.
 */
public RefCounted<SolrIndexSearcher> getSearcher(boolean forceNew, boolean returnSearcher,
                                                 final Future[] waitSearcher, boolean updateHandlerReopens) {
  // it may take some time to open an index.... we may need to make
  // sure that two threads aren't trying to open one at the same time
  // if it isn't necessary.
  synchronized (searcherLock) {
    // see if we can return the current searcher
    if (_searcher != null && !forceNew) {
      if (returnSearcher) {
        _searcher.incref();
        return _searcher;
      } else {
        return null;
      }
    }

    // check to see if we can wait for someone else's searcher to be set
    if (onDeckSearchers > 0 && !forceNew && _searcher == null) {
      try {
        searcherLock.wait();
      } catch (InterruptedException e) {
        log.info(SolrException.toStr(e));
      }
    }

    // check again: see if we can return right now
    if (_searcher != null && !forceNew) {
      if (returnSearcher) {
        _searcher.incref();
        return _searcher;
      } else {
        return null;
      }
    }

    // At this point, we know we need to open a new searcher...
    // first: increment count to signal other threads that we are
    //        opening a new searcher.
    onDeckSearchers++;
    if (onDeckSearchers < 1) {
      // should never happen... just a sanity check
      log.error(logid + "ERROR!!! onDeckSearchers is " + onDeckSearchers);
      onDeckSearchers = 1;  // reset
    } else if (onDeckSearchers > maxWarmingSearchers) {
      onDeckSearchers--;
      String msg = "Error opening new searcher. exceeded limit of maxWarmingSearchers="
          + maxWarmingSearchers + ", try again later.";
      log.warn(logid + "" + msg);
      // HTTP 503==service unavailable, or 409==Conflict
      throw new SolrException(SolrException.ErrorCode.SERVICE_UNAVAILABLE, msg);
    } else if (onDeckSearchers > 1) {
      log.warn(logid + "PERFORMANCE WARNING: Overlapping onDeckSearchers=" + onDeckSearchers);
    }
  }

  // a signal to decrement onDeckSearchers if something goes wrong.
  final boolean[] decrementOnDeckCount = new boolean[]{true};
  RefCounted<SolrIndexSearcher> currSearcherHolder = null;  // searcher we are autowarming from
  RefCounted<SolrIndexSearcher> searchHolder = null;
  boolean success = false;

  openSearcherLock.lock();
  try {
    // returns a new SolrIndexSearcher
    searchHolder = openNewSearcher(updateHandlerReopens, false);
    // the searchHolder will be incremented once already (and it will eventually be
    // assigned to _searcher when registered)
    // increment it again if we are going to return it to the caller.
    if (returnSearcher) {
      searchHolder.incref();
    }

    final RefCounted<SolrIndexSearcher> newSearchHolder = searchHolder;
    final SolrIndexSearcher newSearcher = newSearchHolder.get();

    boolean alreadyRegistered = false;
    synchronized (searcherLock) {
      if (_searcher == null) {
        // if there isn't a current searcher then we may
        // want to register this one before warming is complete instead of waiting.
        if (solrConfig.useColdSearcher) {
          registerSearcher(newSearchHolder);
          decrementOnDeckCount[0] = false;
          alreadyRegistered = true;
        }
      } else {
        // get a reference to the current searcher for purposes of autowarming.
        currSearcherHolder = _searcher;
        currSearcherHolder.incref();
      }
    }

    final SolrIndexSearcher currSearcher = currSearcherHolder == null ? null : currSearcherHolder.get();

    Future future = null;

    // warm the new searcher based on the current searcher.
    // should this go before the other event handlers or after?
    if (currSearcher != null) {
      future = searcherExecutor.submit(
          new Callable() {
            @Override
            public Object call() throws Exception {
              try {
                newSearcher.warm(currSearcher);
              } catch (Throwable e) {
                SolrException.log(log, e);
                if (e instanceof Error) {
                  throw (Error) e;
                }
              }
              return null;
            }
          }
      );
    }

    if (currSearcher == null && firstSearcherListeners.size() > 0) {
      future = searcherExecutor.submit(
          new Callable() {
            @Override
            public Object call() throws Exception {
              try {
                for (SolrEventListener listener : firstSearcherListeners) {
                  listener.newSearcher(newSearcher, null);
                }
              } catch (Throwable e) {
                SolrException.log(log, null, e);
                if (e instanceof Error) {
                  throw (Error) e;
                }
              }
              return null;
            }
          }
      );
    }

    if (currSearcher != null && newSearcherListeners.size() > 0) {
      future = searcherExecutor.submit(
          new Callable() {
            @Override
            public Object call() throws Exception {
              try {
                for (SolrEventListener listener : newSearcherListeners) {
                  listener.newSearcher(newSearcher, currSearcher);
                }
              } catch (Throwable e) {
                SolrException.log(log, null, e);
                if (e instanceof Error) {
                  throw (Error) e;
                }
              }
              return null;
            }
          }
      );
    }

    // WARNING: this code assumes a single threaded executor (that all tasks
    // queued will finish first).
    final RefCounted<SolrIndexSearcher> currSearcherHolderF = currSearcherHolder;
    if (!alreadyRegistered) {
      future = searcherExecutor.submit(
          new Callable() {
            @Override
            public Object call() throws Exception {
              try {
                // registerSearcher will decrement onDeckSearchers and
                // do a notify, even if it fails.
                registerSearcher(newSearchHolder);
              } catch (Throwable e) {
                SolrException.log(log, e);
                if (e instanceof Error) {
                  throw (Error) e;
                }
              } finally {
                // we are all done with the old searcher we used
                // for warming...
                if (currSearcherHolderF != null) currSearcherHolderF.decref();
              }
              return null;
            }
          }
      );
    }

    if (waitSearcher != null) {
      waitSearcher[0] = future;
    }

    success = true;

    // Return the searcher as the warming tasks run in parallel
    // callers may wait on the waitSearcher future returned.
    return returnSearcher ? newSearchHolder : null;

  } catch (Exception e) {
    if (e instanceof SolrException) throw (SolrException) e;
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
  } finally {
    if (!success) {
      synchronized (searcherLock) {
        onDeckSearchers--;
        if (onDeckSearchers < 0) {
          // sanity check... should never happen
          log.error(logid + "ERROR!!! onDeckSearchers after decrement=" + onDeckSearchers);
          onDeckSearchers = 0;  // try and recover
        }
        // if we failed, we need to wake up at least one waiter to continue the process
        searcherLock.notify();
      }

      if (currSearcherHolder != null) {
        currSearcherHolder.decref();
      }

      if (searchHolder != null) {
        searchHolder.decref();    // decrement 1 for _searcher (searchHolder will never become _searcher now)
        if (returnSearcher) {
          searchHolder.decref();  // decrement 1 because we won't be returning the searcher to the user
        }
      }
    }

    // we want to do this after we decrement onDeckSearchers so another thread
    // doesn't increment first and throw a false warning.
    openSearcherLock.unlock();
  }
}
```

The key method here is openNewSearcher(), which creates a new SolrIndexSearcher and returns it wrapped in a RefCounted&lt;SolrIndexSearcher&gt;:
```java
/**
 * Opens a new searcher and returns a RefCounted<SolrIndexSearcher> with its reference incremented.
 *
 * "realtime" means that we need to open quickly for a realtime view of the index, hence don't do any
 * autowarming and add to the _realtimeSearchers queue rather than the _searchers queue (so it won't
 * be used for autowarming by a future normal searcher). A "realtime" searcher will currently never
 * become "registered" (since it currently lacks caching).
 *
 * realtimeSearcher is updated to the latest opened searcher, regardless of the value of "realtime".
 *
 * This method acquires openSearcherLock - do not call with searcherLock held!
 */
public RefCounted<SolrIndexSearcher> openNewSearcher(boolean updateHandlerReopens, boolean realtime) {
  if (isClosed()) { // catch some errors quicker
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "openNewSearcher called on closed core");
  }

  SolrIndexSearcher tmp;
  RefCounted<SolrIndexSearcher> newestSearcher = null;
  boolean nrt = solrConfig.nrtMode && updateHandlerReopens;

  // a reentrant lock
  openSearcherLock.lock();
  try {
    // we already saw this method during initIndex
    String newIndexDir = getNewIndexDir();
    String indexDirFile = null;
    String newIndexDirFile = null;

    // if it's not a normal near-realtime update, check that paths haven't changed.
    if (!nrt) {
      indexDirFile = getDirectoryFactory().normalize(getIndexDir());
      newIndexDirFile = getDirectoryFactory().normalize(newIndexDir);
    }

    synchronized (searcherLock) {
      newestSearcher = realtimeSearcher;
      if (newestSearcher != null) {
        newestSearcher.incref();  // the matching decref is in the finally block
      }
    }

    if (newestSearcher != null && (nrt || indexDirFile.equals(newIndexDirFile))) {
      DirectoryReader newReader;
      DirectoryReader currentReader = newestSearcher.get().getIndexReader();

      // SolrCore.verbose("start reopen from",previousSearcher,"writer=",writer);
      RefCounted<IndexWriter> writer = getUpdateHandler().getSolrCoreState().getIndexWriter(null);
      try {
        if (writer != null && solrConfig.nrtMode) {
          // if in NRT mode, open from the writer
          newReader = DirectoryReader.openIfChanged(currentReader, writer.get(), true);
        } else {
          // if not in NRT mode, just re-open the reader
          newReader = DirectoryReader.openIfChanged(currentReader);
        }
      } finally {
        if (writer != null) {
          writer.decref();
        }
      }

      if (newReader == null) {
        // if this is a request for a realtime searcher, just return the same
        // searcher if there haven't been any changes.
        if (realtime) {
          newestSearcher.incref();
          return newestSearcher;
        }
        currentReader.incRef();
        newReader = currentReader;
      }

      // for now, turn off caches if this is for a realtime reader
      // (caches take a little while to instantiate)
      tmp = new SolrIndexSearcher(this, newIndexDir, getLatestSchema(), getSolrConfig().indexConfig,
          (realtime ? "realtime" : "main"), newReader, true, !realtime, true, directoryFactory);

    } else {
      // newestSearcher == null at this point

      if (newReaderCreator != null) {
        // this is set in the constructor if there is a currently open index writer
        // so that we pick up any uncommitted changes and so we don't go backwards
        // in time on a core reload
        DirectoryReader newReader = newReaderCreator.call();
        tmp = new SolrIndexSearcher(this, newIndexDir, getLatestSchema(), getSolrConfig().indexConfig,
            (realtime ? "realtime" : "main"), newReader, true, !realtime, true, directoryFactory);
      } else if (solrConfig.nrtMode) {  // NRT mode
        // obtain the IndexWriter through the updateHandler
        RefCounted<IndexWriter> writer = getUpdateHandler().getSolrCoreState().getIndexWriter(this);
        DirectoryReader newReader = null;
        try {
          // indexReaderFactory was set via initIndexReaderFactory() during initIndex;
          // in NRT mode the reader is still produced from the IndexWriter
          newReader = indexReaderFactory.newReader(writer.get(), this);
        } finally {
          // done with the writer, drop our reference
          writer.decref();
        }
        tmp = new SolrIndexSearcher(this, newIndexDir, getLatestSchema(), getSolrConfig().indexConfig,
            (realtime ? "realtime" : "main"), newReader, true, !realtime, true, directoryFactory);
      } else {
        // normal open that happens at startup
        tmp = new SolrIndexSearcher(this, newIndexDir, getLatestSchema(), getSolrConfig().indexConfig,
            "main", true, directoryFactory);
      }
    }

    List<RefCounted<SolrIndexSearcher>> searcherList = realtime ? _realtimeSearchers : _searchers;
    RefCounted<SolrIndexSearcher> newSearcher = newHolder(tmp, searcherList);  // refcount now at 1

    // Increment reference again for "realtimeSearcher" variable. It should be at 2 after.
    // When it's decremented by both the caller of this method, and by realtimeSearcher
    // being replaced, it will be closed.
    newSearcher.incref();

    synchronized (searcherLock) {
      if (realtimeSearcher != null) {
        realtimeSearcher.decref();
      }
      realtimeSearcher = newSearcher;
      searcherList.add(realtimeSearcher);
    }

    return newSearcher;

  } catch (Exception e) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Error opening new searcher", e);
  } finally {
    openSearcherLock.unlock();
    if (newestSearcher != null) {
      newestSearcher.decref();
    }
  }
}
```

For NRT mode, in the following line
```java
RefCounted<IndexWriter> writer = getUpdateHandler().getSolrCoreState().getIndexWriter(this);
```
the writer is obtained through the updateHandler. The getIndexWriter being called here belongs to the DefaultSolrCoreState instance obtained from DirectUpdateHandler2.
Let's look at the code of its main methods:
```java
@Override
public RefCounted<IndexWriter> getIndexWriter(SolrCore core) throws IOException {
  synchronized (writerPauseLock) {
    if (closed) {
      throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE, "SolrCoreState already closed");
    }

    while (pauseWriter) {
      try {
        writerPauseLock.wait(100);
      } catch (InterruptedException e) {}
      if (closed) {
        throw new SolrException(ErrorCode.SERVICE_UNAVAILABLE, "Already closed");
      }
    }

    if (core == null) {
      // core == null is a signal to just return the current writer, or null if none.
      initRefCntWriter();
      if (refCntWriter == null) return null;
      writerFree = false;
      writerPauseLock.notifyAll();
      if (refCntWriter != null) refCntWriter.incref();
      return refCntWriter;
    }

    if (indexWriter == null) {
      // create the IndexWriter
      indexWriter = createMainIndexWriter(core, "DirectUpdateHandler2");
    }
    initRefCntWriter();
    writerFree = false;
    writerPauseLock.notifyAll();
    refCntWriter.incref();
    return refCntWriter;
  }
}
```

The createMainIndexWriter method is what creates the indexWriter:

```java
protected SolrIndexWriter createMainIndexWriter(SolrCore core, String name) throws IOException {
  return SolrIndexWriter.create(name, core.getNewIndexDir(),
      core.getDirectoryFactory(), false, core.getLatestSchema(),
      core.getSolrConfig().indexConfig, core.getDeletionPolicy(), core.getCodec());
}
```

It calls the static create method of SolrIndexWriter:

```java
public static SolrIndexWriter create(String name, String path, DirectoryFactory directoryFactory,
    boolean create, IndexSchema schema, SolrIndexConfig config,
    IndexDeletionPolicy delPolicy, Codec codec) throws IOException {

  SolrIndexWriter w = null;
  final Directory d = directoryFactory.get(path, DirContext.DEFAULT, config.lockType);
  try {
    w = new SolrIndexWriter(name, path, d, create, schema, config, delPolicy, codec);
    w.setDirectoryFactory(directoryFactory);
    return w;
  } finally {
    if (null == w && null != d) {
      directoryFactory.doneWithDirectory(d);
      directoryFactory.release(d);
    }
  }
}
```

And the SolrIndexWriter constructor:

```java
private SolrIndexWriter(String name, String path, Directory directory, boolean create,
    IndexSchema schema, SolrIndexConfig config, IndexDeletionPolicy delPolicy, Codec codec)
    throws IOException {
  super(directory,
      // build an IndexWriterConfig from config
      config.toIndexWriterConfig(schema).
          // OpenMode: CREATE or APPEND (CREATE_OR_APPEND is not used here)
          setOpenMode(create ? IndexWriterConfig.OpenMode.CREATE : IndexWriterConfig.OpenMode.APPEND).
          setIndexDeletionPolicy(delPolicy).setCodec(codec)
  );
  log.debug("Opened Writer " + name);
  this.name = name;
  numOpens.incrementAndGet();
}
```

Now look at the method that builds the IndexWriterConfig, which holds all the configuration needed to create an IndexWriter:

```java
public IndexWriterConfig toIndexWriterConfig(IndexSchema schema) {
  // so that we can update the analyzer on core reload, we pass null
  // for the default analyzer, and explicitly pass an analyzer on
  // appropriate calls to IndexWriter
  IndexWriterConfig iwc = new IndexWriterConfig(luceneVersion, null);
  if (maxBufferedDocs != -1) iwc.setMaxBufferedDocs(maxBufferedDocs);
  if (ramBufferSizeMB != -1) iwc.setRAMBufferSizeMB(ramBufferSizeMB);
  if (termIndexInterval != -1) iwc.setTermIndexInterval(termIndexInterval);
  if (writeLockTimeout != -1) iwc.setWriteLockTimeout(writeLockTimeout);

  iwc.setSimilarity(schema.getSimilarity());
  iwc.setMergePolicy(buildMergePolicy(schema));
  iwc.setMergeScheduler(buildMergeScheduler(schema));
  iwc.setInfoStream(infoStream);

  // do this after buildMergePolicy since the backcompat logic
  // there may modify the effective useCompoundFile
  iwc.setUseCompoundFile(getUseCompoundFile());

  if (maxIndexingThreads != -1) {
    iwc.setMaxThreadStates(maxIndexingThreads);
  }

  if (mergedSegmentWarmerInfo != null) {
    // TODO: add infostream -> normal logging system (there is an issue somewhere)
    IndexReaderWarmer warmer = schema.getResourceLoader().newInstance(mergedSegmentWarmerInfo.className,
        IndexReaderWarmer.class, null,
        new Class[] { InfoStream.class },
        new Object[] { iwc.getInfoStream() });
    iwc.setMergedSegmentWarmer(warmer);
  }

  return iwc;
}
```

SolrIndexWriter is a subclass of Lucene's IndexWriter; we won't analyze it in detail for now. One thing worth noting, though: in the IndexWriter constructor, the write.lock file is locked via the obtain(long lockWaitTimeout) method of Lucene's Lock class, which keeps retrying the lock, once per second, until the timeout elapses:

```java
/**
 * Attempts to obtain an exclusive lock within amount of
 * time given. Polls once per {@link #LOCK_POLL_INTERVAL}
 * (currently 1000) milliseconds until lockWaitTimeout is passed.
 * @param lockWaitTimeout length of time to wait in milliseconds
 *        or {@link #LOCK_OBTAIN_WAIT_FOREVER} to retry forever
 * @return true if lock was obtained
 * @throws LockObtainFailedException if lock wait times out
 * @throws IllegalArgumentException if lockWaitTimeout is out of bounds
 * @throws IOException if obtain() throws IOException
 */
public boolean obtain(long lockWaitTimeout) throws IOException {
  failureReason = null;
  // try to take the lock
  boolean locked = obtain();
  if (lockWaitTimeout < 0 && lockWaitTimeout != LOCK_OBTAIN_WAIT_FOREVER)
    throw new IllegalArgumentException("lockWaitTimeout should be LOCK_OBTAIN_WAIT_FOREVER or a non-negative number (got " + lockWaitTimeout + ")");

  long maxSleepCount = lockWaitTimeout / LOCK_POLL_INTERVAL;
  long sleepCount = 0;
  // keep retrying, once per second, until the timeout, then throw
  while (!locked) {
    if (lockWaitTimeout != LOCK_OBTAIN_WAIT_FOREVER && sleepCount++ >= maxSleepCount) {
      String reason = "Lock obtain timed out: " + this.toString();
      if (failureReason != null) {
        reason += ": " + failureReason;
      }
      LockObtainFailedException e = new LockObtainFailedException(reason);
      if (failureReason != null) {
        e.initCause(failureReason);
      }
      throw e;
    }
    try {
      Thread.sleep(LOCK_POLL_INTERVAL);
    } catch (InterruptedException ie) {
      throw new ThreadInterruptedException(ie);
    }
    locked = obtain();
  }
  return locked;
}
```
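The retry loop above boils down to a simple poll-until-timeout pattern. Here is a condensed, standalone sketch of it; the `PollingLock` class name and the `tryObtain` supplier are illustrative stand-ins for the concrete Lock subclass's `obtain()`, and this version returns false on timeout where Lucene throws LockObtainFailedException.

```java
import java.util.function.BooleanSupplier;

// Condensed sketch of Lucene's Lock.obtain(long) polling strategy:
// try once, then retry every pollIntervalMs until timeoutMs has elapsed.
public class PollingLock {
  public static boolean obtain(BooleanSupplier tryObtain, long timeoutMs, long pollIntervalMs)
      throws InterruptedException {
    if (timeoutMs < 0) {
      throw new IllegalArgumentException("timeoutMs must be non-negative: " + timeoutMs);
    }
    boolean locked = tryObtain.getAsBoolean();    // first attempt, before any sleeping
    long maxSleepCount = timeoutMs / pollIntervalMs;
    long sleepCount = 0;
    while (!locked) {
      if (sleepCount++ >= maxSleepCount) {
        return false;                             // Lucene throws LockObtainFailedException here
      }
      Thread.sleep(pollIntervalMs);               // back off before retrying
      locked = tryObtain.getAsBoolean();
    }
    return true;
  }
}
```

Note that the timeout is enforced by counting sleeps rather than by reading the clock, which is exactly why Lucene's real poll interval of 1000 ms makes the effective timeout granularity one second.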
With the reader in hand, back in SolrCore's openNewSearcher method, a new SolrIndexSearcher instance is constructed from it, and control returns to SolrCore's getSearcher method. From there, the listeners and the registration run on the executor thread. First, look at listener.newSearcher(newSearcher, null):
```java
@Override
public void newSearcher(SolrIndexSearcher newSearcher, SolrIndexSearcher currentSearcher) {
  final SolrIndexSearcher searcher = newSearcher;
  log.info("QuerySenderListener sending requests to " + newSearcher);
  List<NamedList> allLists = (List<NamedList>) getArgs().get("queries");
  if (allLists == null) return;
  for (NamedList nlst : allLists) {
    SolrQueryRequest req = null;
    try {
      // bind the request to a particular searcher (the newSearcher)
      NamedList params = addEventParms(currentSearcher, nlst);
      // for this, we default to distrib = false
      if (params.get("distrib") == null) {
        params.add("distrib", false);
      }
      req = new LocalSolrQueryRequest(getCore(), params) {
        @Override
        public SolrIndexSearcher getSearcher() {
          return searcher;
        }
        @Override
        public void close() { }
      };

      // a request is issued here
      SolrQueryResponse rsp = new SolrQueryResponse();
      SolrRequestInfo.setRequestInfo(new SolrRequestInfo(req, rsp));
      getCore().execute(getCore().getRequestHandler(req.getParams().get(CommonParams.QT)), req, rsp);

      // Retrieve the Document instances (not just the ids) to warm
      // the OS disk cache, and any Solr document cache.  Only the top
      // level values in the NamedList are checked for DocLists.
      NamedList values = rsp.getValues();
      for (int i = 0; i < values.size(); i++) {
        Object o = values.getVal(i);
        if (o instanceof ResultContext) {
          o = ((ResultContext) o).docs;
        }
        if (o instanceof DocList) {
          DocList docs = (DocList) o;
          for (DocIterator iter = docs.iterator(); iter.hasNext();) {
            newSearcher.doc(iter.nextDoc());
          }
        }
      }
    } catch (Exception e) {
      // do nothing... we want to continue with the other requests.
      // the failure should have already been logged.
    } finally {
      if (req != null) req.close();
      SolrRequestInfo.clearRequestInfo();
    }
  }
  log.info("QuerySenderListener done.");
}
```

Then the searcher is registered, also on the executor thread:
```java
// Take control of newSearcherHolder (which should have a reference count of at
// least 1 already.  If the caller wishes to use the newSearcherHolder directly
// after registering it, then they should increment the reference count *before*
// calling this method.
//
// onDeckSearchers will also be decremented (it should have been incremented
// as a result of opening a new searcher).
private void registerSearcher(RefCounted<SolrIndexSearcher> newSearcherHolder) {
  synchronized (searcherLock) {
    try {
      if (_searcher != null) {
        _searcher.decref();   // dec refcount for this._searcher
        _searcher = null;
      }

      _searcher = newSearcherHolder;
      SolrIndexSearcher newSearcher = newSearcherHolder.get();

      /***
      // a searcher may have been warming asynchronously while the core was being closed.
      // if this happens, just close the searcher.
      if (isClosed()) {
        // NOTE: this should not happen now - see close() for details.
        // *BUT* if we left it enabled, this could still happen before
        // close() stopped the executor - so disable this test for now.
        log.error("Ignoring searcher register on closed core:" + newSearcher);
        _searcher.decref();
      }
      ***/

      newSearcher.register(); // register subitems (caches)
      log.info(logid + "Registered new searcher " + newSearcher);

    } catch (Exception e) {
      // an exception in register() shouldn't be fatal.
      log(e);
    } finally {
      // wake up anyone waiting for a searcher
      // even in the face of errors.
      onDeckSearchers--;
      searcherLock.notifyAll();
    }
  }
}
```

SolrIndexSearcher's register method registers the searcher and its caches into SolrCore's infoRegistry:

```java
/** Register sub-objects such as caches */
public void register() {
  // register self
  core.getInfoRegistry().put("searcher", this);
  core.getInfoRegistry().put(name, this);
  for (SolrCache cache : cacheList) {
    cache.setState(SolrCache.State.LIVE);
    core.getInfoRegistry().put(cache.name(), cache);
  }
  registerTime = System.currentTimeMillis();
}
```

After that, once CoreContainer's worker threads have created the core, registerCore(cd.isTransient(), name, c, false, false) is called to register the core into the solrCores field. At this point the SolrCore is fully created, and with the cores in place, Solr's startup process is essentially complete.
Finally, a summary of several Solr classes and the configuration they encapsulate:
ConfigSolr: solr.xml
SolrConfig: solrconfig.xml
SolrIndexConfig: the indexConfig section of solrconfig.xml
IndexSchema: schema.xml
CoreDescriptor: the core's properties (core.properties, solrcore.properties)
IndexWriterConfig: holds all of the configuration used to create an IndexWriter