[Extraction residue: this section was the A-G range of the alphabetical index of the Apache Lucene Javadoc (index-all.html). The entry names and the lead-in text of most descriptions were lost; only the self-contained class descriptions below are recoverable, listed in their original order.]

AndQueryNode - represents an AND boolean operation performed on a list of nodes.
AnyQueryNode - represents an ANY operator performed on a list of nodes.
BooleanModifierNode - has the same behaviour as ModifierQueryNode; it only indicates that this modifier was added by BooleanQuery2ModifierNodeProcessor and not by the user.
BooleanQueryNode - represents a list of elements which do not have an explicit boolean operator defined between them.
BoostQueryNode - boosts the QueryNode tree which is under this node.
BytesRefHash - a special-purpose, hash-map-like data structure optimized for BytesRef instances.
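A minimal usage sketch for BytesRefHash follows, assuming the org.apache.lucene.util.BytesRefHash API as of Lucene 6.x (add(BytesRef) returns the assigned id, or -(id + 1) when the value is already present; get(int, BytesRef) copies the stored bytes into a reusable scratch instance):

    import org.apache.lucene.util.BytesRef;
    import org.apache.lucene.util.BytesRefHash;

    public class BytesRefHashDemo {
      public static void main(String[] args) {
        BytesRefHash hash = new BytesRefHash();

        // add() returns the assigned id, or -(id + 1) if the value is already present.
        int first = hash.add(new BytesRef("lucene"));  // 0
        int dup = hash.add(new BytesRef("lucene"));    // -1, i.e. -(0 + 1)
        hash.add(new BytesRef("search"));              // 1

        System.out.println("distinct values: " + hash.size());  // 2

        // get() copies the stored bytes into a reusable scratch BytesRef.
        BytesRef scratch = new BytesRef();
        System.out.println(hash.get(first, scratch).utf8ToString());  // lucene
      }
    }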
CharacterUtils - provides a unified interface to Character-related operations to implement backwards-compatible character operations based on a Version instance.
ConstNumberSource - the base class for all constant numbers.
ConstValueSource - returns a constant for all documents.
TeeSinkTokenFilter - passes all tokens to the added sinks when itself is consumed.
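Several surrounding fragments describe the analysis chain (Analyzer.tokenStream(String, Reader), TokenFilters, CharFilters). A minimal sketch of consuming a TokenStream, assuming StandardAnalyzer and the standard reset/incrementToken/end/close contract:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class AnalysisDemo {
      public static void main(String[] args) throws Exception {
        try (Analyzer analyzer = new StandardAnalyzer()) {
          // The caller drives the reset() -> incrementToken() -> end() -> close() lifecycle.
          try (TokenStream ts = analyzer.tokenStream("body", "The Quick Brown Fox")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
              System.out.println(term.toString());  // quick, brown, fox
            }
            ts.end();
          }
        }
      }
    }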
DeletedQueryNode - represents a node that was deleted from the query node tree.
DocFreqValueSource - returns the number of documents containing the term.
DocumentsWriterDeleteQueue - a non-blocking linked pending-deletes queue.
DocumentsWriterPerThreadPool - controls DocumentsWriterPerThreadPool.ThreadState instances and their thread assignments during indexing.
DocumentsWriterPerThreadPool.ThreadState - references and guards a DocumentsWriterPerThread instance that is used during indexing to build an in-memory index segment.
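The DocumentsWriter* classes are internal, but the flushing behavior they implement is driven by public IndexWriterConfig settings. A minimal sketch of those knobs, using a RAMDirectory purely for brevity:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.RAMDirectory;

    public class FlushConfigDemo {
      public static void main(String[] args) throws Exception {
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        // Flush a new segment once the in-memory buffer reaches 32 MB ...
        config.setRAMBufferSizeMB(32.0);
        // ... or once 1000 documents are buffered, whichever comes first.
        config.setMaxBufferedDocs(1000);

        try (IndexWriter writer = new IndexWriter(new RAMDirectory(), config)) {
          Document doc = new Document();
          doc.add(new TextField("body", "hello lucene", Field.Store.NO));
          writer.addDocument(doc);
          writer.commit();
        }
      }
    }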
ExitableDirectoryReader - wraps a real index DirectoryReader and allows a QueryTimeout implementation object to be checked periodically to see if the thread should exit or not.
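A sketch of wrapping a reader this way, assuming the Lucene 6.x ExitableDirectoryReader.wrap(DirectoryReader, QueryTimeout) entry point and its QueryTimeoutImpl companion; a `directory` holding an existing index is assumed:

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.ExitableDirectoryReader;
    import org.apache.lucene.index.QueryTimeoutImpl;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.Directory;

    public class TimeoutReaderDemo {
      // Any reader operation that runs past the deadline throws
      // ExitableDirectoryReader.ExitingReaderException.
      static IndexSearcher timeLimitedSearcher(Directory directory, long millis) throws Exception {
        DirectoryReader reader = DirectoryReader.open(directory);
        DirectoryReader wrapped =
            ExitableDirectoryReader.wrap(reader, new QueryTimeoutImpl(millis));
        return new IndexSearcher(wrapped);
      }
    }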
ExtendableQueryParser - enables arbitrary query parser extension based on a customizable field naming scheme.
ExtensionQuery - holds all query components extracted from the original query string, such as the query field and the extension query string.
Extensions - represents an extension mapping that associates ParserExtension instances with extension keys.
FieldQueryNode - represents an element that contains a field/text tuple.
FieldTermStack - a stack that keeps query terms in the specified field of the document to be highlighted.
FilterCodecReader - contains another CodecReader, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
FilterLeafReader - contains another LeafReader, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
FilterScorer - contains another Scorer, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality.
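A sketch of wiring ExtendableQueryParser, Extensions and ParserExtension together, assuming the org.apache.lucene.queryparser.ext API; Extensions.buildExtensionField is used so the field/key delimiter is escaped correctly rather than hand-writing the query syntax:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.queryparser.classic.ParseException;
    import org.apache.lucene.queryparser.ext.ExtendableQueryParser;
    import org.apache.lucene.queryparser.ext.ExtensionQuery;
    import org.apache.lucene.queryparser.ext.Extensions;
    import org.apache.lucene.queryparser.ext.ParserExtension;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class ExtensionDemo {
      public static void main(String[] args) throws ParseException {
        Extensions extensions = new Extensions();  // default ':' extension field delimiter

        // A toy extension: turn the raw extension string into a lowercased TermQuery.
        extensions.add("lower", new ParserExtension() {
          @Override
          public Query parse(ExtensionQuery query) throws ParseException {
            return new TermQuery(new Term(query.getField(),
                query.getRawQueryString().toLowerCase(java.util.Locale.ROOT)));
          }
        });

        ExtendableQueryParser parser =
            new ExtendableQueryParser("defaultField", new StandardAnalyzer(), extensions);

        // buildExtensionField() encodes (and escapes) the extension key into the field name.
        String field = extensions.buildExtensionField("lower", "title");
        Query query = parser.parse(field + ":FOO");
        System.out.println(query);  // title:foo
      }
    }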
FilterSpans - a Spans implementation wrapping another spans instance, allowing spans matches to be filtered easily by implementing FilterSpans.accept(Spans).
FlushByRamOrCountsPolicy - a FlushPolicy implementation that flushes new segments based on RAM used and document count, depending on the IndexWriter's IndexWriterConfig.
FlushPolicy - controls when segments are flushed from a RAM-resident internal data structure to the IndexWriter's Directory.
FragmentsBuilder - an interface for fragments (snippets) builder classes.
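FieldTermStack and FragmentsBuilder are building blocks of the FastVectorHighlighter; a sketch of its typical entry point, assuming the target field was indexed with term vectors with positions and offsets (the field name "content" and the 100-character fragment size are illustrative):

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.vectorhighlight.FastVectorHighlighter;
    import org.apache.lucene.search.vectorhighlight.FieldQuery;

    public class HighlightDemo {
      // Returns one highlighted snippet of at most 100 characters, or null if the
      // document has no match in the field.
      static String snippet(Query query, IndexReader reader, int docId) throws IOException {
        FastVectorHighlighter highlighter = new FastVectorHighlighter();
        FieldQuery fieldQuery = highlighter.getFieldQuery(query);
        return highlighter.getBestFragment(fieldQuery, reader, docId, "content", 100);
      }
    }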
for French language.FrenchAnalyzer.getDefaultStopSet()
).TokenFilter
that applies FrenchLightStemmer
to stem French
words.FrenchLightStemFilter
.TokenFilter
that applies FrenchMinimalStemmer
to stem French
words.FrenchMinimalStemFilter
.IndexOptions.DOCS
.Fields
interface over the in-RAM buffered
fields/terms/postings, to flush postings through the
PostingsFormat.TermsEnum.postings(PostingsEnum, int)
if you require term frequencies in the returned enum.maxSize
items.SegGraph
Document
using an analyzerDocument
using an analyzerDocument
using an analyzerAbstractAllGroupHeadsCollector
for retrieving the most relevant groups when grouping
by ValueSource
.FunctionAllGroupHeadsCollector
instance.FunctionAllGroupsCollector
instance.AbstractDistinctValuesCollector
.AbstractFirstPassGroupingCollector
that groups based on
ValueSource
instances.ValueSource
that matches docs in which the values in the value source match a configured
range.AbstractSecondPassGroupingCollector
that groups based on
ValueSource
instances.FunctionSecondPassGroupingCollector
instance.MutableValue
.FuzzyConfig
used to create fuzzy queries.FUZZY
operators: (~) on single termsFuzzyQuery
sFuzzyLikeThisQuery
maxEdits
to term
.FuzzyQueryNode
represents a element that contains
field/text/similarity tupleFuzzyQuery
object from a FuzzyQueryNode
object.FuzzyQueryNode
, when this kind of node is found, it checks on the
query configuration for
StandardQueryConfigHandler.ConfigurationKeys.FUZZY_CONFIG
, gets the
fuzzy prefix length and default similarity from it and set to the fuzzy node.reader
which share a prefix of
length prefixLength
with term
and which have a fuzzy similarity >
minSimilarity
.Analyzer
for Galician.GalicianAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies GalicianMinimalStemmer
to stem
Galician words.GalicianMinimalStemFilter
.TokenFilter
that applies GalicianStemmer
to stem
Galician words.GalicianStemFilter
.a
and b
,
consistently with BigInteger.gcd(BigInteger)
.Analyzer
for German language.GermanAnalyzer.getDefaultStopSet()
.TokenFilter
that applies GermanLightStemmer
to stem German
words.GermanLightStemFilter
.TokenFilter
that applies GermanMinimalStemmer
to stem German
words.GermanMinimalStemFilter
.GermanNormalizationFilter
.TokenFilter
that stems German words.GermanStemFilter
instanceGermanStemFilter
.null
if the key is not in the FST dictionary.len
chars of text
starting at off
CharSequence
Similarity
for scoring a field.index
.BytesRefArray
BytesRef
that points to the internal content of this
builder.BytesRef
with the bytes for the given
bytesID.CharsRef
that points to the internal content of this
builder.IntsRef
that points to the internal content of this
builder.index
.len
longs starting
from index
into arr[off:off+len]
and return
the actual number of values that have been read.SetOnce.set(Object)
.clazz
using the context classloader.clazz
using the given classloader.BytesRef
for the given index.DocumentsWriterPerThreadPool.ThreadState
instances.seed.getCoreCacheKey()
GroupingSearch.setAllGroupHeads(boolean)
was set to true or an empty bit set.GroupingSearch.setAllGroups(boolean)
was set to true
then all matching groups are returned, otherwise
an empty collection is returned.AttributeSource
from the TokenStream
that provided the indexed tokens for this
field.b
parameterBinaryDocValues
for this field.DocValues.emptyBinary()
if it has none.BinaryDocValues
for this field, or
null if no BinaryDocValues
were indexed for
this field.BitSet
matching the expected documents on the given
segment.BKDReader
.1.0f
.BreakIterator
to use for
dividing text into passages.MSBRadixSorter.HISTOGRAM_SIZE
.Util.getByOutput(FST, long)
except reusing
BytesReader, initial and scratch Arc, and result.FST.BytesReader
to pass to the StemmerOverrideFilter.StemmerOverrideMap.get(char[], int, FST.Arc, FST.BytesReader)
method.FST.BytesReader
for this FST, positioned at
position 0.DocumentsWriterPerThread
c
parameter.c
parameter.DocIdSet
s which are currently stored
in the cache.CharType
constant of a given character.BooleanClause.Occur
.Codec
.Codec
that wrote this segment.IndexWriter.setCommitData(Map)
.Version
wrote this commit, or null if the
version this index was written with did not directly record the version.true
if IndexWriter.close()
should first commit before closing.FieldComparator
to use for
sorting.FieldComparatorSource
used for
custom sortingnull
of the term that triggered the boost change.LiveIndexWriterConfig
, which can be used to query the IndexWriter
current settings, as well as modify "live" ones.IndexReaderContext
for this
IndexReader
's sub-reader tree.SnowballProgram.getCurrentBuffer()
.CustomScoreProvider
that calculates the custom scores
for the given IndexReader
.DateTools.Resolution
used for certain field when
no DateTools.Resolution
is defined for this field.DateTools.Resolution
map used to normalize each date field.PackedInts.Decoder
.InputStream
in a reader using a CharsetDecoder
.CharsetDecoder
.IndexWriterConfig
s.InfoStream
used by a newly instantiated classes.StandardQueryConfigHandler.Operator.AND
or StandardQueryConfigHandler.Operator.OR
.SHOULD
or MUST
.QueryCache
or null
if the cache is disabled.QueryCachingPolicy
.DirectoryReader
.LeafReader
.DocIdSetIterator
.Directory
.Directory
for the index.Directory
of the index that hit
the exception.Directory
we use to create temp files.PackedInts.Reader
from an IndexInput
.PackedInts.Reader
from a stream without reading
metadata at the beginning of the stream.MultiLevelSkipListReader.skipTo(int)
has skipped.IndexReader
.MergePolicy.OneMerge.getMergeReaders()
reorders document IDs, this method
must be overridden to return a mapping from the natural doc ID
(the doc ID that would result from a natural merge) to the actual doc
ID.MultiLevelSkipListReader.skipTo(int)
has skipped.matchers
matches tokens.Bits
at the size of reader.maxDoc()
,
with turned on bits for each docid that does have a value for this field.Bits
matching nothing if it has none.Bits
at the size of reader.maxDoc()
,
with turned on bits for each docid that does have a value for this field,
or null if no DocValues were indexed for this field.Bits
field
and returns a bit set at the size of
reader.maxDoc()
, with turned on bits for each docid that
does have a value for this field.Bits
instance representing documents that have a value in this segment.field
and returns a DocTermOrds
instance, providing a method to retrieve
the terms (as ords) per document.field
.field
.DocValuesProducer
for the given generation.DocValuesType
of the docValues; this is
DocValuesType.NONE
if the field has no docvalues.PackedInts.Encoder
.EntityResolver
to be used by DocumentBuilder
.ErrorHandler
to be used by DocumentBuilder
.MergePolicy.OneMerge.setException(java.lang.Throwable)
.ParserExtension
instance for the given key or
null
if no extension can be found for the key.FieldConfig
for a specific field name.FieldInfos
describing all fields in
this reader.FieldInfos
describing all fields in
this reader.SegmentInfos
.QueryParserBase.getFieldQuery(String,String,boolean)
.FieldQuery
object.FieldQuery
object.IndexableField
s with the given name.Fields
instance for this
reader, merging fields/terms/docs/positions on the
fly.MultiDocValues.OrdinalMap.getFirstSegmentNumber(long)
).FixedBitSet
, returns it, otherwise returns null.Dictionary.FlagParsingStrategy
based on the FLAG definition line taken from the affix filePassageFormatter
to use for
formatting passages into highlighted snippets.Highlighter
has no more tokens for the current fragment -
the Scorer returns the weighting it has derived for the most recent
fragment, typically based on the results of Scorer.getTokenScore()
.IndexInput
.QueryParserBase.getWildcardQuery(java.lang.String, java.lang.String)
).Counter
LongValues
instance that maps
segment ordinals to global ordinals.TimeLimitingCollector.TimerThread
.BooleanClause.Occur
used for high frequency terms.Sorter
.ignoreCase
optionbaseClass
in which this method is overridden/implemented
in the inheritance path between baseClass
and the given subclass subclazz
.field
.IndexCommit
as specified in
IndexWriterConfig.setIndexCommit(IndexCommit)
or the default,
null
which specifies to open the latest index commit point.IndexCommit
from its generation;
returns null if this IndexCommit is not currently
snapshottedIndexDeletionPolicy
specified in
IndexWriterConfig.setIndexDeletionPolicy(IndexDeletionPolicy)
or
the default KeepOnlyLastCommitDeletionPolicy
/DocumentsWriterPerThreadPool
instance.IndexReader
this searches.IndexWriter
.IndexWriter
InfoStream
InfoStream
used for debugging.infoStream
.FieldCache.setInfoStream(PrintStream)
CharacterUtils
implementation.numValues
into monotonic
blocks of 2blockShift
values.bitsPerValue
for each valueoffset
of the given slice
decoding bitsPerValue
for each valuenumValues
using bitsPerValue
BKDReader.IntersectState
ConcurrentMergeScheduler.enableAutoIOThrottle()
was called, else Double.POSITIVE_INFINITY
.k1
parametercollector
to collect the given context.LeafFieldComparator
to collect the given
LeafReaderContext
.StandardQueryParser.getPointsConfigMap()
CharacterUtils.CharacterBuffer.getOffset()
Bits
representing live (not
deleted) docs.Bits
instance for this
reader, merging live Documents on the
fly.BooleanClause.Occur
used for low frequency terms.Passage.getMatchStarts()
.Bits
instance representing documents that match this
weight on the given context.slice
Passage.getMatchStarts()
.IndexReader
.PointValues.size(org.apache.lucene.index.IndexReader, java.lang.String)
is 0
trackMaxScores=true
was passed
on
construction
.maxThreadCount
.SegmentCommitInfo
for the merged segment,
or null if it hasn't been set yet.MergeScheduler
that was set by
IndexWriterConfig.setMergeScheduler(MergeScheduler)
.MergePolicy
to avoid
selecting merges for segments already being merged.IndexReader
.PointValues.size(org.apache.lucene.index.IndexReader, java.lang.String)
is 0
RateLimiter.pause(long)
.null
if an alternative IndexFormatTooOldException.getReason()
is provided.total
number of times that a query has
been looked up, return how many times this query was not contained in the
cache.μ
null
PackedInts.getMutable(int, int, float)
with a pre-computed number
of bits per value and format.IndexOutput
.MergeScheduler
calls this method to retrieve the next
merge requested by the MergePolicynoCFSRatio
.NumericDocValues
for this field.NumericDocValues
representing norms
for this field, or null if no NumericDocValues
were indexed.NumberFormat
used to parse a String
to
Number
NumberFormat
used to parse a String
to
Number
NumberFormat
used to convert the value to String
.NumberFormat
used to convert the value to String
.DocumentsWriterPerThread
NumericDocValues
for this field.DocValues.emptyNumeric()
if it has none.LegacyNumericConfig
associated with the lower and upper bounds.NumericDocValues
for this field, or
null if no NumericDocValues
were indexed for
this field.NumericDocValues
NumericDocValues
over the values found in documents in the given
field.Passage.getMatchStarts()
, Passage.getMatchEnds()
,
Passage.getMatchTerms()
positionIncrement == 0
.Analyzer.getPositionIncrementGap(java.lang.String)
, except for
Token offsets instead.IndexWriterConfig.OpenMode
set by IndexWriterConfig.setOpenMode(OpenMode)
.DefaultIndexingChain.PerField
,
absorbing the type information from FieldType
,
and creates a new DefaultIndexingChain.PerField
if this field name
wasn't seen yet.result
, to the byte[] slice holding this valueLeafReader
s that were passed on init.name
DefaultIndexingChain.PerField
, or null
if this field name wasn't seen yet.PointsConfig
associated with the lower and upper bounds.PointValues
used for numeric or
spatial searches, or null if there are no point fields.state
.field
.field
.int
.long
.QueryParserBase.getWildcardQuery(java.lang.String, java.lang.String)
).true
if mapped pages should be loaded.Query
.SpanQuery
.IndexSearcher
.IndexSearcher
.QueryConfigHandler
associated to the query tree if any,
otherwise it returns null
QueryNodeProcessor.getQueryConfigHandler()
.QueryNodeProcessor.getQueryConfigHandler()
.null
if no processor is used.LiveIndexWriterConfig.setRAMBufferSizeMB(double)
if enabled.DocumentsWriterPerThread
can
consume until forcefully flushed.Scorer
that matches documents with values between the specified range,
and that which produces scores equal to FunctionValues.floatVal(int)
.long
with all LegacyNumericTokenStream.LegacyNumericTermAttribute.getShift()
applied, undefined before first tokenSegmentReader
.PointReader
iterator to step through all previously added pointsPackedInts.Reader
from a stream.PackedInts.ReaderIterator
PackedInts.ReaderIterator
from a stream without reading
metadata at the beginning of the stream.PackedInts.Reader
from a stream without reading metadata at
the beginning of the stream.true
if IndexWriter
should pool readers even if
DirectoryReader.open(IndexWriter)
has not been called.Analyzer.ReuseStrategy
.Scorer
that matches all documents,
and that which produces scores equal to FunctionValues.floatVal(int)
.PassageScorer
to use for
ranking passages.IndexReader
using the provided SearcherFactory
.generation
the current searcher is guaranteed to include.SegmentInfos
for this reader.segments_N
) associated
with this commit point.PriorityQueue.PriorityQueue(int,boolean)
constructor to fill the queue, so that the code which uses that queue can always
assume it's full and only change the top without attempting to insert any new
object.PriorityQueue.lessThan(T, T)
should always favor the
non-sentinel values).Similarity
implementation used by this
IndexWriter
.Similarity
to use to compute scores.PhraseQuery
.AbstractAnalysisFactory.getWordSet(ResourceLoader, String, boolean)
,
except the input is in snowball format.Sort
order that is used to sort segments when merging.SortedDocValues
for this field.DocValues.emptySorted()
if it has none.SortedDocValues
for this field, or
null if no SortedDocValues
were indexed for
this field.SortedDocValues
SortedNumericDocValues
for this field.DocValues.emptySortedNumeric(int)
if it has none.SortedNumericDocValues
for this field, or
null if no SortedNumericDocValues
were indexed for
this field.SortedSetDocValues
for this field.DocValues.emptySortedSet()
if it has none.SortedSetDocValues
for this field, or
null if no SortedSetDocValues
were indexed for
this field.big
that contain at least one spans from little
.little
that are contained in a spans from big
.SparseFixedBitSet
, returns it, otherwise returns null.label
.clazz
for the
attributes it implements.MergeInfo
describing this merge.TermsEnum.docFreq()
for all terms in this field,
or -1 if this measure isn't stored by the codec.TermsEnum.docFreq()
for
all terms in this field, or -1 if this measure isn't
stored by the codec.TermsEnum.totalTermFreq()
for all terms in this field, or -1 if this measure isn't stored by the codec (or if this field omits term freq and positions).
TermsEnum.totalTermFreq() for all terms in this field, or -1 if this measure isn't stored by the codec (or if this field omits term freq and positions).
Directory.createTempOutput(java.lang.String, java.lang.String, org.apache.lucene.store.IOContext)
to generate temporary files.PostingsEnum
for the specified field and
term.PostingsEnum
for the specified field and
term, with control over whether freqs are required.String
and
escaped using the given EscapeQuerySyntax
.String
and
escaped using the given EscapeQuerySyntax
.PostingsEnum
for the specified
field and term.PostingsEnum
for the specified
field and term, with control over whether offsets and payloads are
required.field
and returns a BinaryDocValues
instance, providing a
method to retrieve the term (as a BytesRef) per document.FieldCache.getTerms(org.apache.lucene.index.LeafReader,String,boolean)
,
but you can specify whether more RAM should be consumed in exchange for
faster lookups (default is "true").getTermsEnum(terms, new AttributeSource())
MultiTermQuery
s TermsEnum
TermsEnum
positioned at this weights Term or null if
the term does not exist in the given contextfield
and returns a SortedDocValues
instance,
providing methods to retrieve sort ordinals and terms
(as a ByteRef) per document.FieldCache.getTermsIndex(org.apache.lucene.index.LeafReader,String)
, but you can specify
whether more RAM should be consumed in exchange for
faster lookups (default is "true").DocumentsWriterPerThreadPool.ThreadState
where i is the
given ord.System.nanoTime()
, to compare with the value returned by
nanoTime()
.SegTokenPair
entries in the table.Tokenizer
TokenStream
List
of all token pairs at this offset (index of the second token)ExtensionQuery
IndexReaderContext
.Query
has been looked up
in this QueryCache
.IndexWriter
was closed as a side-effect of a tragic exception,
e.g.Transition
with the index'th
transition leaving the specified state.true
iff the IndexWriter
packs
newly written segments in a compound file.IndexWriter.setCommitData(Map)
for this commit.userData
saved with this commit.true
, if the unmap workaround is enabled.Number
.Number
.float
, int
; 64 for double
, long
)X(x, V)
where V is substring(pos, end)
WeightedSpanTerm
for the specified token.WeightedSpanTerms
from the given Query
and TokenStream
.WeightedSpanTerms
from the given Query
and TokenStream
.WeightedSpanTerms
from the given Query
and TokenStream
.*
), but is not a prefix term (one that has
just a single *
character at the end).CharArraySet
from wordFiles, which
can be a comma-separated list of filenamesPostingsEnum
.PostingsEnum
.z
Analyzer.ReuseStrategy
that reuses the same components for
every field.StringHelper.murmurhash3_x86_32(byte[], int, int, int)
.Analyzer
for the Greek language.GreekLowerCaseFilter
.TokenFilter
that applies GreekStemmer
to stem Greek
words.GreekStemFilter
.
GroupingSearch instance that groups documents by index terms using DocValues.
GroupingSearch instance that groups documents by function using a ValueSource instance.
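For illustration, a minimal sketch of term grouping with the grouping module, assuming GroupingSearch's single-field constructor; the "author" and "body" field names and the query are illustrative:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.search.grouping.GroupingSearch;
    import org.apache.lucene.search.grouping.TopGroups;

    // Group matching documents by the doc-values field "author".
    static TopGroups<?> groupByAuthor(IndexSearcher searcher) throws Exception {
      GroupingSearch grouping = new GroupingSearch("author");
      grouping.setGroupDocsLimit(5); // keep the top 5 documents per group
      return grouping.search(searcher, new TermQuery(new Term("body", "lucene")), 0, 10);
    }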
GroupQueryNode represents a location where the original user typed real parenthesis on the query string.
Query
object set on the
GroupQueryNode
object using a
QueryTreeBuilder.QUERY_TREE_BUILDER_TAGID
tag.AbstractFirstPassGroupingCollector
.capacity
bytes
without resizing.BytesRefHash.BytesStartArray
DocIdSetBuilder.BulkAdder
object that can be used to
add up to numDocs
documents.ArrayUtil.grow(long[], int)
.ArrayUtil.grow(long[])
.DataOutput
that can be used to build a byte[].GrowableByteArrayDataOutput
with the given initial capacity.PackedInts.Mutable
, but grows the
bit count of the underlying packed ints on-demand.half-float
field for fast range filters.Directory.copyFrom(Directory, String, String, IOContext)
in order
to optionally use a hard-link instead of a full byte by byte file copy if applicable.true
if this state has any children (outgoing
transitions).PostingsEnum.freq()
).SrndQuery
within the package
org.apache.lucene.queryparser.surround.query
it is not necessary to override this method,SortField
instance.
Use SloppyMath.haversinSortKey(double, double, double, double) instead.
HighFreqTerms class extracts the top n most frequent terms (by document frequency) from an existing Lucene index and reports their document frequency.
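For illustration, a minimal sketch of using HighFreqTerms from the lucene-misc module; the index path and field name are illustrative, and the TermStats field names (termtext, docFreq) are assumed from that module:

    import java.nio.file.Paths;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.misc.HighFreqTerms;
    import org.apache.lucene.misc.TermStats;
    import org.apache.lucene.store.FSDirectory;

    // Report the 10 terms of field "body" with the highest document frequency.
    static void printTopTerms() throws Exception {
      try (DirectoryReader reader =
          DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
        TermStats[] stats = HighFreqTerms.getHighFreqTerms(
            reader, 10, "body", new HighFreqTerms.DocFreqComparator());
        for (TermStats t : stats) {
          System.out.println(t.termtext.utf8ToString() + " docFreq=" + t.docFreq);
        }
      }
    }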
Fragmenter, Scorer, Formatter, Encoder and tokenizers.
PassageFormatter
.HindiAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies HindiNormalizer
to normalize the
orthography.HindiNormalizationFilter
.TokenFilter
that applies HindiStemmer
to stem Hindi words.HindiStemFilter
.size
elements.HMMChineseTokenizer
HTMLStripCharFilter
.size
in human-readable units (GB, MB, KB or bytes).size
in human-readable units (GB, MB, KB or bytes).Analyzer
for Hungarian.HungarianAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies HungarianLightStemmer
to stem
Hungarian words.HungarianLightStemFilter
.HunspellStemFilter
outputting all possible stems.HunspellStemFilter
outputting all possible stems.HunspellStemFilter
.HyphenatedWordsFilter
.Hyphen
instancesTokenFilter
that decomposes compound words found in many Germanic languages.HyphenationCompoundWordTokenFilter
instance.HyphenationCompoundWordTokenFilter
instance.HyphenationCompoundWordTokenFilter
.log(1 + (docCount - docFreq + 0.5)/(docFreq + 0.5))
.log((docCount+1)/(docFreq+1)) + 1
.#idf(long, long)
for every document.IDVersionPostingsFormat.longToBytes(long, org.apache.lucene.util.BytesRef)
during indexing.IDVersionSegmentTermsEnum.seekExact(BytesRef, long)
for
optimistic-concurrency, and also IDVersionSegmentTermsEnum.getVersion()
to get the
version of the currently seek'd term.ifSource
function,
returns the value of the trueSource
or falseSource
function.MergePolicy
.true
if the lower endpoint is inclusivetrue
if the upper endpoint is inclusivetrue
if the lower endpoint is inclusivetrue
if the upper endpoint is inclusiveIndexCommit
.SegmentInfos
are still in use.IndexWriter
) use this method to advance the stream to
the next token.IndexWriter
) use this method to advance the stream to
the next token.IndexWriter
) use this method to advance the stream to
the next token.TokenStream
with a CharTermAttribute
without elisioned startIndexDeletionPolicy
or IndexReader
.index commits
.DocumentsWriterPerThreadPool
to control how
threads are allocated to DocumentsWriterPerThread
.true
if an index likely exists at
the specified directory.matchesExtension
), as well as generating file names from a segment name,
generation and extension (
fileNameFromGeneration
,
segmentFileName
).IndexFormatTooNewException
IndexFormatTooNewException
IndexFormatTooOldException
.IndexFormatTooOldException
.IndexFormatTooOldException
.IndexFormatTooOldException
.DocumentsWriterPerThread.IndexingChain
that determines how documents are
indexed.Directory
.IndexInput.toString()
.SegmentCommitInfo
.IndexOptions
of current field being
writtenIndexOptions
, describing what should be
recorded into the inverted indexIndexOptions
, describing what should be
recorded into the inverted indexIndexReader
instances.IndexReaderContext
.IndexReaderContext
.IndexSearcher
s leaf contexts to be
executed within a single thread.IndexWriter
using the given
matchVersion
.IndexWriter
using the given
matchVersion
.IndexWriter
using the given
config.IndexWriter
creates and maintains an index.conf
.DirectoryReader.open(IndexWriter)
has
been called (ie, this writer is in near real-time
mode), then after a merge completes, this class can be
invoked to warm the reader on the newly merged
segment, before the merge commits.IndexWriter
.Analyzer
.IndexWriter
.TokenFilter
that applies IndicNormalizer
to normalize text
in Indian Languages.IndicNormalizationFilter
.IndonesianAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies IndonesianStemmer
to stem Indonesian words.IndonesianStemFilter
.InetAddress
field.SegmentInfo
that we wrap.SegmentCommitInfo
at the provided
index.InfoStream
for debugging messages.InfoStream
used for debugging messages.IndexWriter
and SegmentInfos
.IndexInput
.TokenStream
.PersianCharFilter
Sorter
implementation based on the merge-sort algorithm that merges
in place (no extra memory will be allocated).InPlaceMergeSorter
DataInput
wrapping a plain InputStream
.NoMergeScheduler
FieldCache.getDocTermOrds(org.apache.lucene.index.LeafReader, java.lang.String, org.apache.lucene.util.BytesRef)
to filter for 32-bit numeric termsFieldCache.getDocTermOrds(org.apache.lucene.index.LeafReader, java.lang.String, org.apache.lucene.util.BytesRef)
to filter for 64-bit numeric termsNumericUtils
, e.g.ByteBlockPool
IntBlockPool
with a default IntBlockPool.Allocator
.IntBlockPool
with the given IntBlockPool.Allocator
.IntBlockPool.Allocator
that never recycles.IntBlockPool.SliceReader
that can read int slices written by a IntBlockPool.SliceWriter
IntBlockPool.SliceWriter
that allows to write multiple integer slices into a given IntBlockPool
.Integer.compare(int, int)
for numHits
.FunctionValues
implementation which supports retrieving int values.BytesRef
.CompiledAutomaton
.Scorer
s, taking advantage
of TwoPhaseIterator
.Scorer
s, taking advantage
of TwoPhaseIterator
.Terms.intersect(org.apache.lucene.util.automaton.CompiledAutomaton, org.apache.lucene.util.BytesRef)
for
block-tree.LeafReader.getNumericDocValues(java.lang.String)
and makes those
values available as other numeric types, casting as needed.int
field for fast range filters.Comparator
.Comparator
.List
using the Comparator
.List
in natural order.IntroSorter
.Outputs
implementation where each output
is a sequence of ints.IntsRef.EMPTY_INTS
capacity
.IntsRef
instances.shift
bits.value
such that unsigned byte order comparison
is consistent with Integer.compare(int, int)
ToChildBlockJoinQuery.ToChildBlockJoinScorer.validateParentDoc()
on mis-use,
when the parent query incorrectly returns child docs.IOContext
instance with a new value for the readOnce variable.Analyzer
for Irish.IrishAnalyzer.DEFAULT_STOPWORD_FILE
.IrishLowerCaseFilter
.true
if this state corresponds to the end of at least one
input sequence.state
in any Levenshtein DFA is an accept state (final state).WordDelimiterFilter.ALPHA
true
iff this is a complete result ie.Sorter.DocMap
, useful for assertions.Similarity.coord(int,int)
is disabled in scoring
for the high and low frequency query instance.IndexWriter
after invoking the
IndexDeletionPolicy
.WordDelimiterFilter.DIGIT
true
if no terms were indexed.true
if this map contains no key-value mappings.InfoStream.message(java.lang.String, java.lang.String)
.arc
's target state is in expanded (or vector) format.true
iff this DocumentsWriterPerThreadPool.ThreadState
is marked as flush
pending otherwise false
true
if a full flush is currently runningtrue
if the current token is a keyword, otherwise
false
maxNumSegments
.true
if this IndexWriter
is still open.baseClass
and the given subclass subclazz
.BlockTreeTermsReader
sets this).true
if no changes have occurred since this searcher
ie.true
if the given reader
is sorted by the
sort
given.WordDelimiterFilter.SUBWORD_DELIM
true
iff the given item is identical to the item
hold by the slices tail, otherwise false
.Character.isLetter(int)
.Character.isWhitespace(int)
.true
if this context struct represents the top level reader within the hierarchical contextWordDelimiterFilter.UPPER
MemoryIndex
at each position so that queries can access them.Analyzer
for Italian.ItalianAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies ItalianLightStemmer
to stem Italian
words.ItalianLightStemFilter
.Iterator
for char[]
instances in this set.DocValuesFieldUpdates.Iterator
over the updated documents and their
values.PrefixCodedTerms
.Iterator
of contained segments in order.DocIdSetIterator
to access the set.DocIdSetIterator
over matching documents.BytesRefArray.iterator(Comparator)
with a null
comparatorBytesRefIterator
with point in time semantics.BytesRefIterator
with point in time semantics.SynonymMap.WORD_SEPARATOR
.IndexDeletionPolicy
implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.KeepWordFilter
.KeepWordFilter
.CharArraySet
view on the map's keys.KeywordAttribute
.KeywordAttribute
.KeywordMarkerFilter
KeywordMarkerFilter
.KeywordAttribute.setKeyword(boolean)
set to true
and once set to false
.KeywordRepeatFilter
.KeywordTokenizer
.KStemFilter
.docFreq+1 / numberOfDocuments+1
.totalTermFreq+1 / numberOfDocuments+1
.label
.IndexCommit
.PointRangeQuery
.LatLonPoint
.Analyzer
for Latvian.LatvianAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies LatvianStemmer
to stem Latvian
words.LatvianStemFilter
.FieldComparator
instance.LeafReader
is an abstract class, providing an interface for accessing an
index.LeafReader
is closed.IndexReaderContext
for LeafReader
instances.LeafReaderContext
FieldCache.DOUBLE_POINT_PARSER
instead.FieldCache.FLOAT_POINT_PARSER
instead.FieldCache.INT_POINT_PARSER
instead.FieldCache.LONG_POINT_PARSER
instead.StandardQueryConfigHandler.ConfigurationKeys.POINTS_CONFIG
StandardQueryConfigHandler.ConfigurationKeys.POINTS_CONFIG_MAP
DoublePoint
insteadprecisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT
(16).FieldType
.FloatPoint
insteadprecisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT_32
(8).FieldType
.IntPoint
insteadprecisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT_32
(8).FieldType
.LongPoint
insteadprecisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT
(16).FieldType
.PointsConfig
LegacyNumericConfig
object.PointsConfigListener
LegacyNumericFieldConfigListener
object using the given QueryConfigHandler
.PointQueryNode
instead.LegacyNumericQueryNode
object using the given field,
Number
value and NumberFormat
used to convert the value to
String
.PointQueryNodeProcessor
instead.LegacyNumericQueryNodeProcessor
object.
IntPoint, LongPoint, FloatPoint, DoublePoint, and create range queries with IntPoint.newRangeQuery(), LongPoint.newRangeQuery(), FloatPoint.newRangeQuery(), DoublePoint.newRangeQuery() respectively. See PointValues for background information on Points.
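For illustration, a minimal sketch of the replacement API: index an IntPoint and query it with IntPoint.newRangeQuery(); the field name and bounds are illustrative, and both bounds are inclusive:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.IntPoint;
    import org.apache.lucene.search.Query;

    // At index time: add the point field to a document.
    static Document makeDoc() {
      Document doc = new Document();
      doc.add(new IntPoint("year", 1999));
      return doc;
    }

    // At search time: match documents with 1990 <= year <= 2000.
    static Query yearRange() {
      return IntPoint.newRangeQuery("year", 1990, 2000);
    }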
PointRangeQueryBuilder instead.
PointRangeQueryNode instead.
LegacyNumericRangeQueryNode
object using the given
LegacyNumericQueryNode
as its bounds and LegacyNumericConfig
.PointRangeQueryNodeBuilder
instead.LegacyNumericRangeQueryNodeBuilder
object.PointRangeQueryNodeProcessor
instead.LegacyNumericRangeQueryNode
object.PointValues
insteadprecisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT
(16).precisionStep
.precisionStep
using the given
AttributeFactory
.LegacyNumericTokenStream.LegacyNumericTermAttribute
.PointValues
instead.LegacyNumericUtils.filterPrefixCodedInts(org.apache.lucene.index.TermsEnum)
and LegacyNumericUtils.filterPrefixCodedLongs(org.apache.lucene.index.TermsEnum)
.LengthFilter
.LengthFilter
. state.getBoost() *
computeLengthNorm(numTokens)
where
numTokens does not count overlap tokens if
discountOverlaps is true by default or true for this
specific field.state.getBoost()*lengthNorm(numTerms)
, where
numTerms
is FieldInvertState.getLength()
if ClassicSimilarity.setDiscountOverlaps(boolean)
is false, else it's FieldInvertState.getLength()
- FieldInvertState.getNumOverlap()
.hitA
is less relevant than hitB
.AttributeFactory
.LetterTokenizer
.MoreLikeThisQuery
FiniteStringsIterator
which limits the number of iterated accepted strings.LimitTokenCountFilter
.maxStartOffset
which won't pass and ends the stream.LimitTokenOffsetFilter
.LimitTokenPositionFilter
.LinearFloatFunction
implements a linear function over
another ValueSource
.reader
which share a prefix of
length prefixLength
with term
and which have a fuzzy similarity >
minSimilarity
.Analyzer
for Lithuanian.LithuanianAnalyzer.DEFAULT_STOPWORD_FILE
.MutableBits
recording live documents; this is
only set if there is one or more deleted documents.IndexWriter
with few setters for
settings that can be changed on an IndexWriter
instance "live".p(w|C)
as the number of occurrences of the term in the
collection, divided by the total number of tokens + 1
.IndexInput
.Directory
.SynonymMap.Parser
class.Locale
used when parsing the querywrite.lock
could not be acquired.write.lock
could not be released.Lock
is valid before any destructive filesystem operation.VerifyingLockFactory
.x <= 0 ? 0 : Math.floor(Math.log(x) / Math.log(base))
x
.log2(Math.E)
, precomputed.LogMergePolicy
that measures size of a
segment as the total byte size of the segment's files.LogMergePolicy
that measures size of a
segment as the number of documents (not taking deletions
into account).MergePolicy
that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.NumericUtils
, e.g.i64
index.LongBitSet.getBits()
)
long[], accessed with a long index.Long.compare(long, long)
for numHits
.values
values of size bitsPerValue
.FunctionValues
implementation which supports retrieving long values.LeafReader.getNumericDocValues(java.lang.String)
and makes those
values available as other numeric types, casting as needed.long
field for fast range filters.LongsRef.EMPTY_LONGS
capacity
.shift
bits.value
such that unsigned byte order comparison
is consistent with Long.compare(long, long)
PackedInts.Decoder.longBlockCount()
long
blocks.PackedInts.Encoder.longBlockCount()
long
blocks.key
exists, returns its ordinal, else
returns -insertionPoint-1
, like Arrays.binarySearch
.key
exists, returns its ordinal, else
returns -insertionPoint-1
, like Arrays.binarySearch
.BytesRef
) corresponding to
the provided ordinal.StandardQueryConfigHandler.ConfigurationKeys.LOWERCASE_EXPANDED_TERMS
is defined in the
QueryConfigHandler.
LowerCaseFilter.
AttributeFactory.
LowerCaseTokenizer.
QueryCache that evicts queries using a LRU (least-recently-used) eviction policy in order to remain under a given maximum size and number of bytes used.
maxSize queries with at most maxRamBytesUsed bytes of memory, only on leaves that satisfy leavesToCache.
maxSize queries with at most maxRamBytesUsed bytes of memory.
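For illustration, a minimal sketch of plugging an LRU cache into a searcher; the sizes are illustrative, not defaults, and UsageTrackingQueryCachingPolicy is one possible caching policy:

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.LRUQueryCache;
    import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;

    // Keep at most 1000 cached queries using at most 64 MB of RAM.
    static void configureCache(IndexSearcher searcher) {
      LRUQueryCache cache = new LRUQueryCache(1000, 64L * 1024 * 1024);
      searcher.setQueryCache(cache);
      searcher.setQueryCachingPolicy(new UsageTrackingQueryCachingPolicy());
    }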
Lucene50PostingsFormat with default settings.
Lucene50PostingsFormat with custom values for minBlockSize and maxBlockSize passed to block terms dictionary.
term vectors format
.Lucene53NormsFormat
Lucene53NormsFormat
Lucene54DocValuesFormat
Lucene54DocValuesFormat
Lucene60PointsWriter
maxPointsInLeafNode
(1024) and maxMBSortInHeap
(16.0)Version.LATEST
Version.LATEST
Version.LATEST
SegToken.index
for each token, based upon its order by startOffset.BytesRef
s representing UTF-8 encoded
strings.Fields
implementation that merges multiple
Fields into one, and maps around deleted documents.values
to global ord spacevalues
to global ord spaceCharFilter
that applies the mappings
contained in a NormalizeCharMap
to the character
stream, and correcting the resulting changes to the
offsets.Reader
.MappingCharFilter
.DocumentsWriterPerThread
flush
pendingcount
ords, marking them in the provided ordBitSet
.Packed64.get(int)
.MatchAllDocsQuery
MatchAllDocsQueryNode
indicates that a query node tree or subtree
will match all documents if executed in the index.MatchAllDocsQuery
object from a
MatchAllDocsQueryNode
object.WildcardQueryNode
that is "*:*" to
MatchAllDocsQueryNode
.TwoPhaseIterator.matches()
.TwoPhaseIterator.approximation()
is on matches.TwoPhaseIterator
matches on the current document.SegmentReader
s that have identical field
name/number mapping, so their stored fields and term
vectors may be bulk merged.MatchNoDocsQueryNode
indicates that a query node tree or subtree
will not match any documents if executed in the index.MatchNoDocsQuery
object from a
MatchNoDocsQueryNode
object.ForUtil.readBlock(IndexInput, byte[], int[])
.BLOCK_SIZE
encoded values.IndexReader.maxDoc()
for every document.MaxFloatFunction
returns the max of its components.IndexWriter.forceMerge(int)
.Attribute
to a fresh AttributeSource
before calling
MultiTermQuery.getTermsEnum(Terms,AttributeSource)
.MaxNonCompetitiveBoostAttribute
.Float.NaN
if scores were not computed.ReferenceManager.maybeRefreshBlocking()
), periodically, if
you want that ReferenceManager.acquire()
will return refreshed instances.ReferenceManager.maybeRefresh()
), periodically, if you want
that ReferenceManager.acquire()
will return refreshed instances.ConcurrentMergeScheduler.merge(org.apache.lucene.index.IndexWriter, org.apache.lucene.index.MergeTrigger, boolean)
to possibly stall the incoming
thread when there are too many merges running or pending.OfflineSorter.BufferSize
in MB.MemoryIndex.reset()
.mergeState
.mergeState
.mergeState
.mergeState
.mergeState
.DocValuesFieldUpdates
.IndexWriter.getNextMerge()
.TopDocs.merge(int, TopDocs[])
but also ignores the top
start
top docs.Sort
.TopDocs.merge(Sort, int, TopFieldDocs[])
but also ignores the top
start
top docs.BKDReader
s.MergePolicy.MergeAbortedException
.MergePolicy.MergeAbortedException
with a
specified message.TopDocs.merge(int, org.apache.lucene.search.TopDocs[])
impls.toMerge
.FieldTermIterator
sMergeException
.MergeException
.FieldInfos
of the newly merged segment.IndexWriter
after the merge is done and all readers have been closed.toMerge
.toMerge
.maxTempFile
partitions into a new partition.MergePolicy
for selecting merges.IndexWriter.abortMerges()
was called.MergePolicy
instances.RateLimiter
that IndexWriter
assigns to each running merge, to
give MergeScheduler
s ionice like control.MergeRateLimiter.maybePause(long, long)
.MergeScheduler
to use for running merges.IndexWriter
uses an instance
implementing this interface to execute the merges
selected by a MergePolicy
.toMerge
.toMerge
.toMerge
.ConcurrentMergeScheduler.MergeThread
s have kicked off (this is use
to name them).ConcurrentMergeScheduler.MergeThread
s.MergePolicy.findMerges(MergeTrigger, org.apache.lucene.index.SegmentInfos, IndexWriter)
to indicate the
event that triggered the merge.IndexWriter
's infoStream
.MergeScheduler.verbose()
was
called and returned true.null
if no DocumentsWriterPerThreadPool.ThreadState
is yet visible to the calling thread.MinFloatFunction
returns the min of its components.length
.a1
and the complement of the language of
a2
.
Directory implementation that uses mmap for reading, and FSDirectory.FSIndexOutput for writing.
FSLockFactory.getDefault().
FSLockFactory.getDefault().
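For illustration, a minimal sketch of opening such a directory; the path is illustrative:

    import java.nio.file.Paths;
    import org.apache.lucene.store.MMapDirectory;

    // mmap-backed Directory using the default (native FS) lock factory.
    static MMapDirectory openMmap() throws Exception {
      MMapDirectory dir = new MMapDirectory(Paths.get("/path/to/index"));
      dir.setPreload(true); // ask mapped pages to be loaded into physical memory on init
      return dir;
    }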
ModifierQueryNode
indicates the modifier value (+,-,?,NONE) for
each term on the query string.Query
object set on the
ModifierQueryNode
object using a
QueryTreeBuilder.QUERY_TREE_BUILDER_TAGID
tag.MonotonicBlockPackedWriter
.PackedLongValues.Builder
that will compress efficiently integers that
would be a monotonic function of their index.null
MultiTermQuery.RewriteMethod
used when creating queriesgetMatchingSub()
.ValueSource
implementation which wraps multiple ValueSources
and applies an extendible boolean function to their values.ValueSource
implementation which wraps multiple ValueSources
and applies an extendible float function to their values.ValueSource
implementations that wrap multiple
ValueSources and apply their own logic.MultiLevelSkipListReader
.MultiLevelSkipListReader
, where
skipInterval
and skipMultiplier
are
the same.MultiLevelSkipListWriter
.MultiLevelSkipListWriter
, where
skipInterval
and skipMultiplier
are
the same.PhraseQuery
, with the possibility of
adding more than one term at the same position that are treated as a disjunction (OR).MultiPhraseQueryNode
indicates that its children should be used to
build a MultiPhraseQuery
instead of PhraseQuery
.MultiPhraseQuery
object from a MultiPhraseQueryNode
object.PostingsEnum
, merged from PostingsEnum
API of sub-segments.PostingsEnum
along with the
corresponding ReaderSlice
.CompositeReader
which reads multiple indexes, appending
their content.Multiset
is a set that allows for duplicate elements.Multiset
.sims
.values
values
Query
that matches documents
containing a subset of terms provided by a FilteredTermsEnum
enumeration.BooleanClause.Occur.SHOULD
clause in a BooleanQuery, but adjusts
the frequencies used for scoring to be blended across the terms, otherwise
the rarest term typically ranks highest (often not useful eg in the set of
expanded terms in a FuzzyQuery).BooleanClause.Occur.SHOULD
clause in a BooleanQuery, but the scores
are only computed as the boost.BooleanClause.Occur.SHOULD
clause in a BooleanQuery, and keeps the
scores as computed by the query.MultiTermQuery.CONSTANT_SCORE_REWRITE
.MultiTermQuery
as a Filter.MultiTermQuery
as a Filter.MultiTermQuery.RewriteMethod
,
MultiTermQuery.CONSTANT_SCORE_REWRITE
, for multi-term
query nodes.ValueSource
that abstractly represents ValueSource
s for
poly fields, and other things.MutableValue
implementation of type boolean
.MutableValue
implementation of type Date
.MutableValue
implementation of type double
.MutableValue
implementation of type float
.MutableValue
implementation of type int
.MutableValue
implementation of type long
.MutableValue
implementation of type String
.CustomScoreQuery.toString(String)
.resourceDescription
NamedSPILoader.lookup(String)
by name.ThreadFactory
implementation that accepts the name prefix
of the created threads as a constructor argument.NamedThreadFactory
instanceLockFactory
using native OS file
locks.NativeUnixDirectory
Directory
implementation for all Unixes that uses
DIRECT I/O to bypass OS level IO caching during
merging.FSLockFactory.getDefault()
.NEAR
operators: (~) on phrasesn
nearest indexed points to the provided point, according to Haversine distance.NearSpansOrdered
, but for the unordered case.CharacterUtils.CharacterBuffer
and allocates a char[]
of the given bufferSize.Collector
.Compressor
instance.WeakIdentityMap
based on a ConcurrentHashMap
.WeakIdentityMap
based on a ConcurrentHashMap
.Decompressor
instance.LegacyNumericRangeQuery
, that queries a double
range using the given precisionStep
.LegacyNumericRangeQuery
, that queries a double
range using the default precisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT
(16).AbstractAnalysisFactory
by invoking the constructor, passing the given argument map.LegacyNumericRangeQuery
, that queries a float
range using the given precisionStep
.LegacyNumericRangeQuery
, that queries a float
range using the default precisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT_32
(8).WeakIdentityMap
based on a non-synchronized HashMap
.WeakIdentityMap
based on a non-synchronized HashMap
.LegacyNumericRangeQuery
, that queries a int
range using the given precisionStep
.LegacyNumericRangeQuery
, that queries a int
range using the default precisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT_32
(8).LegacyNumericRangeQuery
, that queries a long
range using the given precisionStep
.LegacyNumericRangeQuery
, that queries a long
range using the default precisionStep
LegacyNumericUtils.PRECISION_STEP_DEFAULT
(16).SpanNearQuery.Builder
for an ordered query on a particular fieldRAMFile
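For illustration, a minimal sketch of building an ordered near query; the field, terms and slop are illustrative:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.spans.SpanNearQuery;
    import org.apache.lucene.search.spans.SpanQuery;
    import org.apache.lucene.search.spans.SpanTermQuery;

    // "quick" followed by "fox" within two positions, in order.
    static SpanQuery quickFox() {
      return SpanNearQuery.newOrderedNearQuery("body")
          .addClause(new SpanTermQuery(new Term("body", "quick")))
          .addClause(new SpanTermQuery(new Term("body", "fox")))
          .setSlop(2)
          .build();
    }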
RAMFile for storing data.
TermRangeQuery instance.
CheckIndex.exorciseIndex(org.apache.lucene.index.CheckIndex.Status) method to repair the index.
BreakIterator.getSentenceInstance()
TeeSinkTokenFilter.SinkTokenStream
that receives all tokens consumed by this stream.label
and return
the newly created target state for this transition.Thread
DocumentsWriterPerThreadPool.ThreadState
iff any new state is available otherwise
null
.TopDocs
instance containing the given results.SpanNearQuery.Builder
for an unordered query on a particular fieldBreakIterator.getWordInstance()
BytesRef
in the iterator.BytesRef
.count
values.count
next values,
the returned ref MUST NOT be modifiedByteBlockPool.LEVEL_SIZE_ARRAY
to quickly navigate to the next slice level.IntBlockPool.LEVEL_SIZE_ARRAY
to quickly navigate to the next slice level.DocIdSetIterator.NO_MORE_DOCS
if there are no more documents to
return.DocIdSetIterator.NO_MORE_DOCS
if there are no more docs in the
set.v
.InetAddress
that compares immediately less than
address
.SortedSetDocValues.setDocument(int)
.position
as location - offset
, so that a
matching exact phrase is easily identified when all PhrasePositions
have exactly the same position
.FilteredTermsEnum.next()
or if FilteredTermsEnum.accept(org.apache.lucene.util.BytesRef)
returns
FilteredTermsEnum.AcceptStatus.YES_AND_SEEK
or FilteredTermsEnum.AcceptStatus.NO_AND_SEEK
,
this method will be called to eventually seek the underlying TermsEnum
to a new position.FSDirectory.createTempOutput(java.lang.String, java.lang.String, org.apache.lucene.store.IOContext)
.RAMDirectory.createTempOutput(java.lang.String, java.lang.String, org.apache.lucene.store.IOContext)
.SegmentResult
in order to retrieve the grouped facet counts.v
.InetAddress
that compares immediately greater than
address
.NGramTokenFilter
.PhraseQuery
which is optimized for n-gram phrase query.NGramTokenizer
.FSDirectory
implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.FSLockFactory.getDefault()
.FileChannel.read(ByteBuffer, long)
DocIdSetIterator.nextDoc()
, DocIdSetIterator.advance(int)
and
DocIdSetIterator.docID()
it means there are no more docs in the iterator.SortedSetDocValues.nextOrd()
it means there are no more
ordinals for the document.NoChildOptimizationQueryNodeProcessor
removes every
BooleanQueryNode, BoostQueryNode, TokenizedPhraseQueryNode or
ModifierQueryNode that do not have valid children.
IndexDeletionPolicy
which keeps all index commits around, never
deleting them.LockFactory
to disable locking entirely.MergePolicy
which never returns merges to execute.MergeScheduler
which never executes any merges.Similarity
that does not make use of scoring factors
and may be used when scores are not needed.TermsEnum.postings(PostingsEnum, int)
if you don't
require per-document postings in the returned enum.Outputs
implementation; use this if
you just want to build an FSA.c
.NormalizationH1(1)
c
.NormalizationH2(1)
NormalizationH3(800)
μ
.NormalizationZ(0.3)
z
.Character.toLowerCase(int)
.MappingCharFilter
.NormsConsumer
to write norms to the
index.NormsProducer
to read norms from the index.TFIDFSimilarity.decodeNormValue(long)
for every document.Analyzer
for Norwegian.NorwegianAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies NorwegianLightStemmer
to stem Norwegian
words.NorwegianLightStemFilter
.TokenFilter
that applies NorwegianMinimalStemmer
to stem Norwegian
words.NorwegianMinimalStemFilter
.NOT
operator (-)NoTokenFoundQueryNode
is used if a term is converted into no tokens
by the tokenizer/lemmatizer/analyzer (null).
RAMDirectory around any provided delegate directory, to be used during NRT search.
<= maxMergeSizeMB, and 2) the total cached bytes is <= maxCachedMB.
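For illustration, a minimal sketch of wrapping an FSDirectory for NRT search; the size thresholds are illustrative:

    import java.nio.file.Paths;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.NRTCachingDirectory;

    // Cache newly flushed/merged segments up to 5 MB each, 60 MB in total.
    static Directory openNrtDir() throws Exception {
      return new NRTCachingDirectory(FSDirectory.open(Paths.get("/path/to/index")), 5.0, 60.0);
    }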
Fragmenter
implementation which does not fragment the text.
1 instead.
1 instead.
Character.BYTES instead.
Double.BYTES instead.
Float.BYTES instead.
Integer.BYTES instead.
Long.BYTES instead.
Short.BYTES instead.
Format
parses Long
into date strings and vice-versa.NumberDateFormat
object using the given DateFormat
.size
values on
blockSize
.DocumentsWriterPerThread
IndexReader.numDocs()
for every document.long
value for scoring,
sorting or value retrieval.DocValuesFieldUpdates
which holds updates of documents, of a single
NumericDocValuesField
.PackedTokenAttributeImpl.type()
NumericPayloadTokenFilter
.PointValues
insteadPointValues
insteadLock
.BitSet
from the content of the provided DocIdSetIterator
.OfflinePointWriter
.OfflinePointWriter.getReader(long, long)
to read the file.OffsetAttribute
.buf[start:start+count]
which is by offset
code points from index
.TermsEnum.postings(PostingsEnum, int)
if you require offsets in the returned enum.IndexReader
is closed.SegmentReader
has closed.DocIdSet
is added to this cache.DocIdSet
s are removed from this
cache.DocumentsWriterPerThreadPool.ThreadState
s
DocumentsWriterPerThread
.DocumentsWriterPerThreadPool.ThreadState
's
DocumentsWriterPerThread
.OpaqueQueryNode
is used for specify values that are not supposed to
be parsed by the parser.IndexWriter
.IndexWriter
,
controlling whether past deletions should be applied.IndexCommit
.StandardDirectoryReader.doOpenIfChanged(SegmentInfos)
, as well as NRT replication.FSDirectory.open(Path)
, but allows you to
also specify a custom LockFactory
.IndexWriter
versus what the provided reader is
searching, then open and return a new
IndexReader searching both committed and uncommitted
changes from the writer; else, return null (though, the
current implementation never returns null).IndexWriterConfig.OpenMode
that IndexWriter
is opened
with.TermRangeQuery
s with open ranges.OR
operator (|)SortedSetDocValues.setDocument(int)
at the specified index.PostingsFormat
does not implement TermsEnum.ord()
.SparseFixedBitSet.or(DocIdSetIterator)
impl that works best when it
is denseTermState
TokenFilter
and Analyzer
implementations that use Snowball
stemmers.org.apache.lucene.codecs.lucene53
for an overview
of the index format.org.apache.lucene.codecs.lucene54
for an overview
of the index format.AttributeImpl
for indexing collation keys as index terms.Document
for indexing and searching.ValueSource
.DocValues
.OrQueryNode
represents an OR boolean operation performed on a list
of nodes.DataOutput
wrapping a plain OutputStream
.IndexOutput
that writes to an OutputStream
.OutputStreamIndexOutput
with the given buffer size.overhead per value / bits per value
).Packed64
except that it trades space for
speed by ensuring that a single block needs to be read/written in order to
read/write a value.PackedLongValues.Builder
that will compress efficiently positive integers.DataInput
wrapper to read unaligned, variable-length packed
integers.in
.DataOutput
wrapper to write unaligned, variable-length packed
integers.out
.PackedInts.Reader
which has all its values equal to 0 (bitsPerValue = 0).LongValues
instance.PackedLongValues
instance.CharTermAttribute
TypeAttribute
PositionIncrementAttribute
PositionLengthAttribute
OffsetAttribute
PagedGrowableWriter
instance.PagedMutable
.PagedMutable
instance.Outputs
implementation, holding two other outputs.CompositeReader
which reads multiple, parallel indexes.IndexReader.close()
.LeafReader
which reads multiple, parallel indexes.IndexReader.close()
.SynonymMap.Builder
.Query
.ExtensionQuery
and returns a corresponding
Query
instance.QueryParserHelper.getSyntaxParser()
, the result is a query
node tree QueryParserHelper.getQueryNodeProcessor()
QueryParserHelper.getQueryBuilder()
QueryNode
.QueryParserHelper.parse(String, String)
so it casts the
return object to Query
.Query
."major.minor.bugfix.prerelease"
.QueryParser
.CheckIndex.checkIndex(List)
) was called with non-null
argument).PostingsHighlighter
.
k1 = 1.2, b = 0.75.
PathHierarchyTokenizer.
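For illustration, a minimal sketch of setting BM25 with these parameters on a searcher; new BM25Similarity() uses the same defaults:

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.similarities.BM25Similarity;

    // Explicitly pass k1 = 1.2 and b = 0.75.
    static void useBM25(IndexSearcher searcher) {
      searcher.setSimilarity(new BM25Similarity(1.2f, 0.75f));
    }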
PathQueryNode
is used to store queries like
/company/USA/California /product/shoes/brown.PatternCaptureGroupTokenFilter
.KeywordAttribute
.PatternKeywordMarkerFilter
, that marks the current
token as a keyword if the tokens term buffer matches the provided
Pattern
via the KeywordAttribute
.PatternReplaceCharFilter
.PatternReplaceFilter
.PatternTokenizer
.PayloadAttribute
.BytesRef
.TermsEnum.postings(PostingsEnum, int)
if you require payloads in the returned enum.PayloadFunction
to modify the score of a
wrapped SpanQuery
NOTE: In order to take advantage of this with the default scoring implementation
(ClassicSimilarity
), you must override ClassicSimilarity.scorePayload(int, int, int, BytesRef)
,
which returns 1 by default.FieldInfo
attribute name used to store the
format name for each field.FieldInfo
attribute name used to store the
format name for each field.PostingsFormat
.PostingsFormat
.Analyzer.ReuseStrategy
that reuses components per-field by
maintaining a Map of TokenStreamComponent per field name.FieldInfo
attribute name used to store the
segment suffix name for each field.FieldInfo
attribute name used to store the
segment suffix name for each field.Similarity
for different fields.Analyzer
for Persian.PersianAnalyzer.DEFAULT_STOPWORD_FILE
.PersianCharFilter
.TokenFilter
that applies PersianNormalizer
to normalize the
orthography.PersianNormalizationFilter
.SnapshotDeletionPolicy
which adds a persistence layer so that
snapshots can be maintained across the life of an application.PersistentSnapshotDeletionPolicy
wraps another
IndexDeletionPolicy
to enable flexible
snapshotting, passing IndexWriterConfig.OpenMode.CREATE_OR_APPEND
by default.PersistentSnapshotDeletionPolicy
wraps another
IndexDeletionPolicy
to enable flexible snapshotting.PHRASE
operator (")field
, and at a
maximum edit distance of slop
.field
.field
, and at a
maximum edit distance of slop
.field
.
PhraseQuery object from a TokenizedPhraseQueryNode object.
PhraseQuery's slop factor.
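For illustration, a minimal sketch of a phrase query with a slop factor; the field, terms and slop are illustrative:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.PhraseQuery;

    // "new york" with slop 2: the terms may be up to two positions apart.
    static PhraseQuery newYork() {
      return new PhraseQuery.Builder()
          .add(new Term("body", "new"))
          .add(new Term("body", "york"))
          .setSlop(2)
          .build();
    }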
SlopQueryNode objects in the query node tree.
Query
.Query
.PointsFormat
).packedPoints
iterator must be in sorted order.PointQueryNode
object using the given field,
Number
value and NumberFormat
used to convert the value to
String
.FieldQueryNode
s to
PointRangeQueryNode
s.PointQueryNodeProcessor
object.IntPoint
.PointValues
.PointQueryNode
bounds, which means the bound values are Number
s.PointRangeQueryNode
object using the given
PointQueryNode
as its bounds and PointsConfig
.PointValues
range queries out of PointRangeQueryNode
s.PointRangeQueryNodeBuilder
object.TermRangeQueryNode
s to
PointRangeQueryNode
s.PointRangeQueryNodeProcessor
object.PointWriter
, abstracting away whether points a read
from (offline) disk or simple arrays in heap.PointsConfig
.PointsConfig
in FieldConfig
for point fields.PointValues
queries.PointsConfig
object.FieldConfig
requests in
QueryConfigHandler
and add StandardQueryConfigHandler.ConfigurationKeys.POINTS_CONFIG
based on the StandardQueryConfigHandler.ConfigurationKeys.POINTS_CONFIG_MAP
set in the
QueryConfigHandler
.PointsConfigListener
object using the given QueryConfigHandler
.PointValues.intersect(java.lang.String, org.apache.lucene.index.PointValues.IntersectVisitor)
to check how each recursive cell corresponds to the query.PointReader
to iterate
those points.A & ~B
.PorterStemFilter
.Analyzer
for Portuguese.PortugueseAnalyzer.DEFAULT_STOPWORD_FILE
.TokenFilter
that applies PortugueseLightStemmer
to stem
Portuguese words.PortugueseLightStemFilter
.TokenFilter
that applies PortugueseMinimalStemmer
to stem
Portuguese words.PortugueseMinimalStemFilter
.TokenFilter
that applies PortugueseStemmer
to stem
Portuguese words.PortugueseStemFilter
.PositionIncrementAttribute
.PositionLengthAttribute
.TermsEnum.postings(PostingsEnum, int)
if you require term positions in the returned enum.Spans
for any single document, but only after
Spans.asTwoPhaseIterator()
returned null
.Outputs
implementation where each output
is a non-negative long value.PostingsEnum
for the specified term.PostingsEnum
for the specified term
with PostingsEnum.FREQS
.PostingsEnum
for the current term.PostingsEnum
for the current term, with
control over whether freqs, positions, offsets or payloads
are required.PostingsEnum
for this sub-reader.PostingsHighlighter.DEFAULT_MAX_LENGTH
.PostingsEnum
and
PostingsEnum
instances.BlockTreeTermsWriter
, and handles writing postings.PRECEDENCE
operators: (
and )
StandardQueryNodeProcessorPipeline
and enables
boolean precedence on it.StandardQueryParser
),
except that it respect the boolean precedence, so <a AND b OR c AND d> is parsed to <(+a +b) (+c +d)>
instead of <+a +b +c +d>.LegacyLongField
,
LegacyDoubleField
, LegacyNumericTokenStream
, LegacyNumericRangeQuery
.LegacyIntField
and
LegacyFloatField
.FilterIterator.next()
.PREFIX
operator (*)PrefixAwareTokenFilter
.PrefixCodedTerms
.prefix
.PrefixWildcardQueryNode
represents wildcardquery that matches abc*
or *.PrefixQuery
object from a PrefixWildcardQueryNode
object.DaciukMihovAutomatonBuilder.add(CharsRef)
.IndexDeletionPolicy
PrintStream
such as System.out
.SegToken
representing the best segmentation of a sentenceQueryNodeProcessor.process(QueryNode)
.ProductFloatFunction
returns the product of its components.ProximityQueryNode
represents a query where the terms should meet
specific distance conditions.SearcherLifetimeManager.Pruner
to prune entries.IndexReader.getCoreCacheKey()
.PostingsWriterBase
, adding a push
API for writing each element of the postings.Analyzer
used primarily at query time to wrap another analyzer and provide a layer of protection
which prevents very common words from being passed into queries.QueryAutoStopWordAnalyzer.defaultMaxDocFreqPercent
BitSetProducer
that wraps a query and caches matching
BitSet
s per segment.Analyzer
chain.QueryBuilder
QueryNode
is a interface implemented by all nodes on a QueryNode
tree.QueryNode
s.QueryNodeImpl
is the default implementation of the interface
QueryNode
QueryNodeProcessor
is an interface for classes that process a
QueryNode
tree.QueryNodeProcessor
interface, it's an abstract class, so it should be extended by classes that
want to process a QueryNode
tree.QueryNodeProcessorPipeline
class should be used to build a query
node processor pipeline.QueryConfigHandler
object.1/sqrt(sumOfSquaredWeights)
.Similarity.SimWeight.getValueForNormalization()
of
each of the query terms.Query
objects.Rescorer
that uses a provided Query to assign
scores to the first-pass hits.Scorer
implementation which scores text fragments by the number of
unique query terms found.Scorer
implementation which scores text fragments by the number of
unique query terms found.shouldExit()
method,
used with ExitableDirectoryReader
.QueryTimeout
that can be used by
the ExitableDirectoryReader
class to time out and exit out
when a query takes a long time to rewrite.QueryTreeBuilder
constructor.QueryValueSource
returns the relevance score of the queryQuotedFieldQueryNode
represents phrase query.FSDirectory
using java.io.RandomAccessFile.FSLockFactory.getDefault()
.RandomAccessFile.seek(long)
followed by
RandomAccessFile.read(byte[], int, int)
.Directory
implementation.Directory
.Directory
with the given LockFactory
.RAMDirectory
instance from a different
Directory
implementation.IndexInput
implementation.IndexOutput
implementation.SortedSetDocValues
that supports random access
to the ordinals of a document.Weight
s that are based on random-access
structures such as live docs or doc values.RangeMapFloatFunction
implements a map function over
another ValueSource
whose values fall within min and max inclusive to target.TermRangeQuery
QueryNode
that represents
some kind of range query.RateLimiter
for this merge, used to rate limit writes and abort.SegmentInfo
data from a directory.Outputs.write(Object, DataOutput)
.For
format).null
.ChecksumIndexInput
.IndexReader
, this context represents.DirectoryReader
instances across
multiple threads, while periodically reopening.IndexWriter
.IndexWriter
, controlling whether past deletions should be applied.Directory
.DirectoryReader
, stealing
the incoming reference.IndexReader
s and IndexReaderContext
s.Outputs.writeFinalOutput(Object, DataOutput)
.follow
arc and read the first arc of its target;
this changes the provided arc
(2nd arg) in-place and returns
it.IndexInput
.IndexInput
.DataInput
.follow
arc and reads the last
arc of its target; this changes the provided
arc
(2nd arg) in-place and returns it.segments_N file
) and
load all SegmentCommitInfo
s.bitsPerValue
bits.DataOutput.writeMapOfStrings(Map)
.DataOutput.writeSetOfStrings(Set)
.DataOutput.writeSetOfStrings(Set)
instead.DataOutput.writeMapOfStrings(Map)
instead.zig-zag
-encoded
variable-length
integer.zig-zag
-encoded
variable-length
integer.ReciprocalFloatFunction
implements a reciprocal function f(x) = a/(mx+b), based on
the float value of a field or function as exported by ValueSource
.ByteBlockPool.Allocator
implementation that recycles unused byte
blocks in a buffer and reuses them in subsequent calls to
RecyclingByteBlockAllocator.getByteBlock()
.RecyclingByteBlockAllocator
RecyclingByteBlockAllocator
.RecyclingByteBlockAllocator
with a block size of
ByteBlockPool.BYTE_BLOCK_SIZE
, upper buffered docs limit of
RecyclingByteBlockAllocator.DEFAULT_BUFFERED_BLOCKS
(64).IntBlockPool.Allocator
implementation that recycles unused int
blocks in a buffer and reuses them in subsequent calls to
RecyclingIntBlockAllocator.getIntBlock()
.RecyclingIntBlockAllocator
RecyclingIntBlockAllocator
.RecyclingIntBlockAllocator
with a block size of
IntBlockPool.INT_BLOCK_SIZE
, upper buffered docs limit of
RecyclingIntBlockAllocator.DEFAULT_BUFFERED_BLOCKS
(64).AttributeImpl
/AttributeSource
passing the class name of the Attribute
, a key and the actual value.AttributeImpl.reflectWith(AttributeReflector)
method:
iff prependAttClass=true
: "AttributeClass#key=value,AttributeClass#key=value"
iff prependAttClass=false
: "key=value,key=value"
AttributeSource.reflectWith(AttributeReflector)
method:
iff prependAttClass=true
: "AttributeClass#key=value,AttributeClass#key=value"
iff prependAttClass=false
: "key=value,key=value"
AttributeReflector
.AttributeReflector
.Automaton
.
RegExp from a string.
RegExp from a string.
org.apache.lucene.util.automaton package.
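For illustration, a minimal sketch of compiling a RegExp into an Automaton; the pattern is illustrative:

    import org.apache.lucene.util.automaton.Automaton;
    import org.apache.lucene.util.automaton.RegExp;

    // Parse the regular expression and construct the equivalent automaton.
    static Automaton compile() {
      return new RegExp("ab.*c").toAutomaton();
    }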
term.
term.
term.
term.
RegexpQueryNode represents RegexpQuery query. Examples: /[a-z]|[0-9]/
RegexpQuery object from a RegexpQueryNode
object.TermState
with a leaf ordinal.
TermState with a leaf ordinal.
IndexReader
s which wrap other readers
(e.g. > 50% occupied) or too large (< 20% occupied).
int[] key
and filling with the old values.BytesRefHash
after a previous BytesRefHash.clear()
call.ReferenceManager.acquire()
.SearcherLifetimeManager.acquire(long)
.numHits
.ClassLoader
.ClassLoader
.ClassLoader
.ClassLoader
.ClassLoader
.ClassLoader
.ClassLoader
.ClassLoader
.SegmentCommitInfo
.SegmentCommitInfo
at the
provided index.LeafReader.CoreClosedListener
which has been added with
LeafReader.addCoreClosedListenerAsReaderClosedListener(IndexReader, CoreClosedListener)
.QueryNodeProcessorPipeline
class removes every instance of
DeletedQueryNode
from a query node tree.RemoveDuplicatesTokenFilter
.QueryNode
that is not a leaf and has not
children.ReferenceManager.addListener(RefreshListener)
.IndexReader.ReaderClosedListener
.source
to dest
as an atomic operation,
where dest
does not yet exist in the directory.min
or more concatenated
repetitions of the language of the given automaton.min
and
max
(including both) concatenated repetitions of the language
of the given automaton.state
with an already registered state
or stateRegistry the last child state.Scorer
.ReqExclScorer
.ReqOptScorer
.TopDocs
.TopDocs
) from an original
query.TokenStream.incrementToken()
.TokenStream.incrementToken()
.TokenStream.incrementToken()
.TokenStream.incrementToken()
.TokenStream.incrementToken()
.TokenStream.incrementToken()
.TokenStream.incrementToken()
.MemoryIndex
to its initial state and recycles all internal buffers.BytesRefIterator.next()
has not yet been called.ByteBlockPool.Allocator.recycleByteBlocks(byte[][], int, int)
.out
.valueCount
values contained in in
.newSize
based on the content of
this buffer.ResourceLoader
.j
from the temporary storage into slot i
.Throwable
and rethrows either IOException
or an unchecked exception.Throwable
and rethrows it as an unchecked exception.CodecUtil.checkFooter(org.apache.lucene.store.ChecksumIndexInput)
.null
if no group could be retrieved.AbstractAllGroupHeadsCollector.temporalResult
.Analyzer.tokenStream(String, String)
.Analyzer.tokenStream(String,String)
TokenStream
.TokenStream
.ReverseStringFilter
.IndexDeletionPolicy
by calling its
IndexDeletionPolicy.onCommit(List)
again with the known commits.Query
object.MultiTermQuery.getTermsEnum(Terms, AttributeSource)
.DocIdSet
implementation inspired from http://roaringbitmap.org/
The space is divided into blocks of 2^16 bits and each block is encoded
independently.RoaringDocIdSet
s.DocIdSet
implementation that can store documents up to 2^16-1 in a short[].IndexWriter
without committing
any changes that have occurred since the last commit
(or since it was opened, if commit hasn't been called).Analyzer
for Romanian.RomanianAnalyzer.DEFAULT_STOPWORD_FILE
.segments
file and
run SegmentInfos.FindSegmentsFile.doBody(java.lang.String)
on it.SegmentInfos.FindSegmentsFile.doBody(java.lang.String)
on the provided commit.RunAutomaton
from a deterministic
Automaton
.RunAutomaton
from a deterministic
Automaton
.Analyzer
for Russian language.TokenFilter
that applies RussianLightStemmer
to stem Russian
words.RussianLightStemFilter
.other
is not null and is exactly
of the same class as this object's class.out
.i
and i+len
into the temporary storage.ScandinavianFoldingFilter
.ScandinavianNormalizationFilter
.max
.Weight
.doc
.Weight.DefaultBulkScorer.scoreRange(org.apache.lucene.search.LeafCollector, org.apache.lucene.search.DocIdSetIterator, org.apache.lucene.search.TwoPhaseIterator, org.apache.lucene.util.Bits, int, int)
to help out
hotspot.Scorer
which wraps another scorer and caches the score of the
current document.TopDocs
.FieldDoc
instances if the
withinGroupSort sorted by fields.FieldFragList.WeightedFragInfo
by boost, breaking ties
by offset.1
1
Scorer
which can iterate in order over all matching
documents and assign them a score.Weight.DefaultBulkScorer.scoreAll(org.apache.lucene.search.LeafCollector, org.apache.lucene.search.DocIdSetIterator, org.apache.lucene.search.TwoPhaseIterator, org.apache.lucene.util.Bits)
to help out
hotspot.BooleanClause.Occur.SHOULD
clause in a
BooleanQuery, and keeps the scores as computed by the
query.BooleanClause.Occur.SHOULD
clause in a
BooleanQuery, and keeps the scores as computed by the
query.BooleanClause.Occur.SHOULD
clause in a BooleanQuery, and keeps the
scores as computed by the query.SegmentInfo
for the new flushed segment and persists
the deleted documents MutableBits
.n
hits for query
.n
hits for query
where all results are after a previous
result (after
).n
hits for query
where all results are after a previous
result (after
).n
hits for query
where all results are after a previous
result (after
), allowing control over
whether hit scores and max score should be computed.
SearcherManager to create new IndexSearchers.
IndexSearcher instances across multiple threads, while periodically reopening.
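For illustration, a minimal sketch of the acquire/release discipline with SearcherManager; the writer is assumed to exist:

    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.SearcherFactory;
    import org.apache.lucene.search.SearcherManager;

    static void searchOnce(IndexWriter writer) throws Exception {
      SearcherManager mgr = new SearcherManager(writer, new SearcherFactory());
      IndexSearcher searcher = mgr.acquire();
      try {
        // ... run searches against this point-in-time view ...
      } finally {
        mgr.release(searcher); // never use the searcher after release
      }
      mgr.maybeRefresh(); // call periodically so acquire() returns fresh views
    }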
IndexWriter.
IndexWriter, controlling whether past deletions should be applied.
Directory.
DirectoryReader.
input
to the directory offset.input
to the directory offset.IDVersionSegmentTermsEnum.seekExact(BytesRef)
that can
sometimes fail-fast if the version indexed with the requested ID
is less than the specified minIDVersion.TermsEnum.ord()
.TermState
previously obtained
from TermsEnum.termState()
.DocValuesProducer
held by SegmentReader
and
keeps track of their reference counting.SegmentInfo
of the newly merged segment.SegmentInfo
describing this segment.SegmentInfo
describing this segment.SegmentInfo
(segment metadata file).CheckIndex.Status.SegmentInfoStatus
instances, detailing status of each segment.BreakIterator
and
allows subclasses to decompose these sentences into words.SegmentReadState
.SegmentReadState
.SegmentReadState
.HHMMSegmenter
SegmentWriteState
with a new segment suffix.SegToken
by converting full-width latin to half-width, then lowercasing latin.SegGraph
arr[from:to[
so that the element at offset k is at the
same position as if arr[from:to[
was sorted, and all elements on
its left are less than or equal to it, and all elements on its right are
greater than or equal to it.SerbianNormalizationFilter
.MergeScheduler
that simply does each merge
sequentially, using the current thread.i
.index
.len
longs starting
at off
in arr
into this mutable, starting at
index
.DocTermOrds.uninvert(org.apache.lucene.index.LeafReader,Bits,BytesRef)
to record the document frequency for each uninverted
term.true
to allow leading wildcard characters.true
to allow leading wildcard characters.true
to allow leading wildcard characters.TermRangeQuery
s.TimeLimitingCollector.setBaseline(long)
using Counter.get()
on the clock passed to the constructor.LegacyNumericConfig
associated with these bounds.PointsConfig
associated with these bounds.BytesRef
of the termBytesRef
with the bytes at the specified offset/length slice.BytesRef
with the specified slice, avoiding copying bytes in the common case when the slice
is contained in a single block in the byte block pool.Codec
.SegmentInfos.getVersion()
.IndexWriter.close()
should first commit
before closing.null
of the term that triggered the boost change.DateTools.Resolution
used for certain field when
no DateTools.Resolution
is defined for this field.DateTools.Resolution
used for certain field when
no DateTools.Resolution
is defined for this field.DateTools.Resolution
used for each fieldIndexWriterConfig
s.InfoStream
used
by a newly instantiated classes.SHOULD
or MUST
.QueryCache
instance.QueryCachingPolicy
instance.Similarity.coord(int,int)
may be disabled in scoring, as
appropriate.double
value.true
to enable position increments in result query.true
to enable position increments in result query.true
to enable position increments in result query.MemoryIndex
IndexReader
.float
value.DocumentsWriterPerThreadPool.ThreadState
.Double.POSITIVE_INFINITY
).SpanScorer.freq
and SpanScorer.numMatches
for the current document.IndexDeletionPolicy
implementation to be
specified.DocumentsWriterPerThreadPool
instance used by the
IndexWriter to assign thread-states to incoming indexing threads.IndexOptions
to use with this field.IndexWriter
this config is attached to.PrintStreamInfoStream
.FieldCacheSanityChecker
.BytesRef
to seek before iterating.inOrder
is true, the search terms must
exists in the documents as the same order as in query.int
value.true
.KeywordAttribute
.KeywordAttribute
.StandardQueryParser.setPointsConfigMap(Map)
long
value.true
to allow leading wildcard characters.SegmentCommitInfo
of the merged segment.MergePolicy
is invoked whenever there are changes to the
segments in the index.null
MultiTermQuery.CONSTANT_SCORE_REWRITE
when creating a PrefixQuery
, WildcardQuery
or TermRangeQuery
.MultiTermQuery.CONSTANT_SCORE_REWRITE
when creating a
prefix, wildcard and range queries.MultiTermQuery.CONSTANT_SCORE_REWRITE
when creating a
prefix, wildcard and range queries.TermsEnum
that is used to collect termsNumberFormat
used to parse a String
to
Number
NumberFormat
used to parse a String
to
Number
NumberFormat
used to convert the value to String
.NumberFormat
used to convert the value to String
.PointValues
insteadPointValues
insteadpositionIncrement ==
0
.true
to omit normalization values for the field.SetOnce.set(Object)
.SetOnce.set(Object)
is called more than once.IndexWriterConfig.OpenMode
of the index.i
so that it can later be used as a
pivot, see IntroSorter.comparePivot(int)
.PagedBytes.PagedBytesDataInput.getPosition()
.true
to ask mapped pages to be loaded
into physical memory on init.true
by default.true
by default.current
into an internal buffer.QueryCache
to use when scores are not needed.QueryCachingPolicy
to use for query caching.QueryConfigHandler
associated to the query tree.QueryNodeProcessor.setQueryConfigHandler(QueryConfigHandler)
.QueryNodeProcessor.setQueryConfigHandler(QueryConfigHandler)
.DirectoryReader.open(IndexWriter)
.BlendedTermQuery.RewriteMethod
.LeafCollector.collect(int)
.Similarity
implementation used by this IndexWriter.GermanStemmer
for this filter.true
to store this field.true
to also store token character offsets into the term
vector for this field.true
to also store token payloads into the term
vector for this field.true
to also store token positions into the term
vector for this field.true
if this field's indexed form should be also stored
into term vectors.null
.true
to tokenize this field's contents via the
configured Analyzer
.LeafFieldComparator.compareTop(int)
.false
by defaultIndexWriter
should pack newly written segments in a
compound file.IndexInput
, that is
mentioned in the bug report.TokenStream
s that are not of the type
CachingTokenFilter
are wrapped in a CachingTokenFilter
to
ensure an efficient reset - if you are already using a different caching
TokenStream
impl and you don't want it to be wrapped, set this to
false.TokenStream
s that are not of the type
CachingTokenFilter
are wrapped in a CachingTokenFilter
to
ensure an efficient reset - if you are already using a different caching
TokenStream
impl and you don't want it to be wrapped, set this to
false.array
.ShingleFilter.inputWindow
with input stream tokens, if available,
shifting to the right if the window was previously full.ShingleFilter
around another Analyzer
.StandardAnalyzer
.StandardAnalyzer
.TokenStream
input
TokenStream
input
ShingleFilter.gramSize
.ShingleFilter
Query is worth caching.
ExitableDirectoryReader.ExitableTermsEnum.next() to determine whether to stop processing a query.
QueryTimeoutImpl.reset() has not been called and the elapsed time has exceeded the time allowed.
FST.FIXED_ARRAY_NUM_ARCS_SHALLOW.
SegmentInfo.
Similarity to use when encoding norms.
Similarity that provides a simplified API for its descendants.
SimpleAnalyzer
BoolFunction implementation which applies an extendible boolean function to the values of a single wrapped ValueSource.
Collector implementation that is used to collect all contexts.
FieldComparator implementation that is used for all contexts.
FieldFragList.
FragListBuilder.
Fragmenter implementation which breaks text up into same-size fragments with no concerns over spotting sentence boundaries.
FSDirectory using Files.newByteChannel(Path, java.nio.file.OpenOption...).
FSLockFactory.getDefault().
SeekableByteChannel.read(ByteBuffer)
LockFactory using Files.createFile(java.nio.file.Path, java.nio.file.attribute.FileAttribute<?>...).
Encoder implementation to escape text for HTML output
Formatter implementation to highlight terms with a pre and post tag.
Fragmenter implementation which breaks text up into same-size fragments but does not split up Spans.
SimpleTerm.MatchingTermVisitor.visitMatchingTerm(Term)
Similarity.SimScorer to score matching documents from a segment of the inverted index.
FragListBuilder that generates one FieldFragList.WeightedFragInfo object.
LockFactory for a single in-process instance, meaning all locking will take place through this one instance.
SingleTermsEnum.
missingValue when count is zero
SegmentCommitInfo, pro-rated by percentage of non-deleted documents if LogMergePolicy.setCalibrateSizeByDeletes(boolean) is set.
IndexReader.
PrefixCodedTerms.
SegmentCommitInfo objects.
BytesRefArray
BytesRef values in this BytesRefHash.
FixedLengthBytesRefArray
SegmentCommitInfo, pro-rated by percentage of non-deleted documents if LogMergePolicy.setCalibrateSizeByDeletes(boolean) is set.
SegmentCommitInfo, pro-rated by percentage of non-deleted documents if LogMergePolicy.setCalibrateSizeByDeletes(boolean) is set.
Long object, returning 0 if it is cached by the JVM and its shallow size otherwise.
Accountable objects by summing up the shallow size of the array and the memory usage reported by each Accountable.
count values.
numBytes bytes.
Outputs.writeFinalOutput(T, org.apache.lucene.store.DataOutput); defaults to just calling Outputs.readFinalOutput(org.apache.lucene.store.DataInput) and discarding the result.
Outputs.read(org.apache.lucene.store.DataInput) and discarding the result.
ReaderSlice describing how this sub-reader fits into the composite reader.
IntBlockPool.SliceReader on the given pool
1 / (distance + 1).
1 / (distance + 1).
SlopQueryNode represents a phrase query with a slop.
Query object set on the SlopQueryNode child using QueryTreeBuilder.QUERY_TREE_BUILDER_TAGID and applies the slop value defined in the SlopQueryNode.
DocIdSetIterator.advance(int) relying on DocIdSetIterator.nextDoc() to advance beyond the target position.
FuzzyQuery instead.
minimumSimilarity to term.
FuzzyTermsEnum instead.
Set of stopwords.
SnapshotCommitPoint wrapping the provided IndexCommit.
IndexDeletionPolicy that wraps any other IndexDeletionPolicy and adds the ability to hold and later release snapshots of an index.
IndexDeletionPolicy to wrap.
IndexCommit and prevents it from being deleted.
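The last few entries describe SnapshotDeletionPolicy, which wraps another IndexDeletionPolicy and pins a commit until it is released. A minimal sketch of that usage (the index path and analyzer are illustrative):

```java
// Sketch: holding a snapshot of the current commit with SnapshotDeletionPolicy
// so its files survive until release(...) is called.
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SnapshotDemo {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(Paths.get("/tmp/index"));
    SnapshotDeletionPolicy sdp =
        new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer())
        .setIndexDeletionPolicy(sdp);
    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
      writer.commit();                         // snapshot() needs at least one commit
      IndexCommit snapshot = sdp.snapshot();   // pin this commit point
      try {
        // ... copy snapshot.getFileNames() somewhere for a backup ...
      } finally {
        sdp.release(snapshot);                 // allow the commit to be deleted again
      }
    }
  }
}
```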
SnowballFilter, with configurable language
Analyzer for Sorani Kurdish.
SoraniAnalyzer.DEFAULT_STOPWORD_FILE.
TokenFilter that applies SoraniNormalizer to normalize the orthography.
SoraniNormalizationFilter.
TokenFilter that applies SoraniStemmer to stem Sorani words.
SoraniStemFilter.
array[0:len] in place.
from (inclusive) and ends at to (exclusive).
NumericUtils.bigIntToSortableBytes(java.math.BigInteger, int, byte[], int)
NumericUtils.intToSortableBytes(int, byte[], int)
NumericUtils.longToSortableBytes(long, byte[], int)
int back to a float.
long back to a double.
docId -> setId -> ords
docId -> ord.
docId -> address -> ord.
BytesRef value, indexed for sorting.
TermsEnum wrapping a provided SortedDocValues.
Long.compare(long, long).
long values for scoring, sorting or value retrieval.
SortedNumericDocValues.
BytesRef values, indexed for faceting, grouping and joining.
TermsEnum wrapping a provided SortedSetDocValues.
FunctionValues instances for multi-valued string based fields.
SortedSetDocValues.
sort diagnostics to denote that this segment is sorted.
LeafReader which supports sorting documents by a given Sort.
TimSorter which sorts two parallel arrays of doc IDs and offsets in one go.
MergePolicy that reorders documents according to a Sort before merging them.
MergePolicy that sorts documents with the given sort.
currentTerm needed for use as a sort key.
Rescorer that re-sorts according to a provided Sort.
diagnostics.
IndexWriter.addIndexes(CodecReader...).
BoostQuery for spans.
query in such a way that the produced scores will be boosted by boost.
SpanQuery objects
Spans
big that contain at least one spans from little.
SpanFirstQuery
match whose end position is less than or equal to end.
Analyzer for Spanish.
SpanishAnalyzer.DEFAULT_STOPWORD_FILE.
TokenFilter that applies SpanishLightStemmer to stem Spanish words.
SpanishLightStemFilter.
MultiTermQuery as a SpanQuery, so it can be nested within other SpanQuery classes.
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the scores as computed by the query.
SpanNearQuery
SpanOrQuery
SpanNotQuery
include which have no overlap with spans from exclude.
include which have no overlap with spans from exclude within dist tokens of include.
include which have no overlap with spans from exclude within pre tokens before or post tokens of include.
SpanOrQuery
SpanOrQuery
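The span-query entries above (SpanNearQuery, SpanFirstQuery, SpanNotQuery) compose as nested queries. A small sketch of that composition (the field, terms, slop and end position are illustrative):

```java
// Sketch: composing span queries — a SpanNearQuery over two terms, restricted
// by SpanFirstQuery, then filtered with SpanNotQuery.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.*;

public class SpanDemo {
  public static void main(String[] args) {
    SpanTermQuery lucene = new SpanTermQuery(new Term("body", "lucene"));
    SpanTermQuery search = new SpanTermQuery(new Term("body", "search"));

    // "lucene" within 3 positions of "search", in order
    SpanNearQuery near = new SpanNearQuery(new SpanQuery[] {lucene, search}, 3, true);

    // match only when the pair ends at position <= 10
    SpanFirstQuery first = new SpanFirstQuery(near, 10);

    // drop matches whose spans overlap spans of "apache"
    SpanNotQuery not =
        new SpanNotQuery(first, new SpanTermQuery(new Term("body", "apache")));
    System.out.println(not);
  }
}
```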
SpanPositionCheckQuery.getMatch() lies between a start and end position. See SpanFirstQuery for a derivation that is optimized for the case where the start position is 0.
SpanQuery.
SpanQueryBuilder objects
SpanTermQuery
little that are inside of big.
FSDirectory or wraps one via possibly nested FilterDirectory or FileSwitchDirectory, this returns IOUtils.spins(Path) for the wrapped directory; otherwise true.
Path is backed by spinning storage.
Extensions.Pair.
StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words.
StandardAnalyzer.STOP_WORDS_SET).
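The entry above describes the StandardAnalyzer chain (StandardTokenizer + StandardFilter + LowerCaseFilter + StopFilter with the default English stop words). A minimal sketch of that behavior (field name and sample text are illustrative):

```java
// Sketch: StandardAnalyzer lowercases tokens and removes English stop words
// (StandardAnalyzer.STOP_WORDS_SET by default).
import java.io.IOException;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class StandardAnalyzerDemo {
  public static void main(String[] args) throws IOException {
    try (StandardAnalyzer analyzer = new StandardAnalyzer();
         TokenStream ts = analyzer.tokenStream("body", "The Quick Brown Fox")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        System.out.println(term); // "quick", "brown", "fox" — "the" is stopped
      }
      ts.end();
    }
  }
}
```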
StandardBooleanQueryNode has the same behavior as BooleanQueryNode.
BooleanQueryNodeBuilder, but this considers if the built BooleanQuery should have its coord disabled or not.
DirectoryReader.
StandardTokenizer.
StandardFilter.
StandardQueryNodeProcessorPipeline processor pipeline.
StandardSyntaxParser, already assembled.
StandardQueryParser object.
StandardQueryParser object and sets an Analyzer to it.
Query tree object.
StandardTokenizer.
AttributeFactory
StandardTokenizer.
Spans.nextStartPosition() was not yet called on the current doc.
true iff the ref starts with the given prefix.
true iff the ref starts with the given prefix.
freq.
clazz as instance for the attributes it implements and for all other attributes calls the given delegate factory.
FieldReader.getStats().
BrazilianStemmer in use by this filter.
KeywordAttribute-aware stemmer with custom dictionary-based stemming.
dictionary.
FST for the StemmerOverrideFilter
StemmerOverrideFilter
StemmerOverrideFilter.
StemmerOverrideFilter.StemmerOverrideMap
StopAnalyzer.ENGLISH_STOP_WORDS_SET.
StopFilter.
IndexSearcher.doc(int) and IndexReader.document() will return the field and its value.
FieldType.
FieldType.
FieldType.
StoredFieldsWriter.startDocument() is called, informing the Codec that a new document has started.
StoredFieldVisitor.needsField(org.apache.lucene.index.FieldInfo).
FunctionValues implementation which supports retrieving String values.
SortField.setMissingValue(java.lang.Object) to have missing string values sort first.
SortField.setMissingValue(java.lang.Object) to have missing string values sort last.
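The two SortField.setMissingValue entries above control where documents lacking the sort field land. A small sketch of that setting (the field name is illustrative):

```java
// Sketch: sorting by a string field while placing documents with no value
// last, via SortField.setMissingValue(SortField.STRING_LAST).
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class MissingValueDemo {
  public static void main(String[] args) {
    SortField byTitle = new SortField("title", SortField.Type.STRING);
    byTitle.setMissingValue(SortField.STRING_LAST); // docs without "title" sort last
    Sort sort = new Sort(byTitle);
    System.out.println(sort);
    // usage: searcher.search(query, 10, sort)
  }
}
```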
timeToString or dateToString back to a time, represented as a Date object.
timeToString or dateToString back to a time, represented as the number of milliseconds since January 1, 1970, 00:00:00 GMT.
n in the array used to construct this searcher/reader.
n in the array used to construct this searcher/reader.
BaseCompositeReader.getSequentialSubReaders(); for efficiency the array is used internally.
a1 is a subset of the language of a2.
äöü -> aou, "ß" is substituted by "ss"; substitute a second char of a pair of equal characters with an asterisk: ?? -> ?*; substitute some common character combinations with a token: sch/ch/ei/ie/ig/st -> $/§/%/&/#/!
IllegalArgumentException is thrown.
SumFloatFunction returns the sum of its components.
SumTotalTermFreqValueSource returns the number of tokens.
i and j
i and j.
Analyzer for Swedish.
SwedishAnalyzer.DEFAULT_STOPWORD_FILE.
TokenFilter that applies SwedishLightStemmer to stem Swedish words.
SwedishLightStemFilter.
SynonymFilter.
SyntaxParser interface
TermsEnum.
CompiledAutomaton.AUTOMATON_TYPE.SINGLE this is the singleton term.
Lucene50PostingsReader.BlockPostingsEnum#nextPosition() when no seek or buffer refill is done.
AbstractAllGroupHeadsCollector for retrieving the most relevant groups when grouping on a string based group field.
AbstractAllGroupsCollector
AbstractAllGroupsCollector.
TermContext from an IndexReaderContext
AbstractDistinctValuesCollector that relies on SortedDocValues to count the distinct values per group.
TermDistinctValuesCollector instance.
AbstractFirstPassGroupingCollector that groups based on field values and more specifically uses SortedDocValues to collect groups.
PostingsEnum.freq() for the supplied term in every document.
AbstractGroupFacetCollector that computes grouped facets based on the indexed terms from DocValues.
t.
TermQuery
lowerTerm but less/equal than upperTerm.
FieldQueryNode bounds, which means the bound values are strings.
TermRangeQueryNode object using the given FieldQueryNode as its bounds.
TermRangeQuery object from a TermRangeQueryNode object.
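The range entries above ("greater/equal than lowerTerm but less/equal than upperTerm") describe TermRangeQuery. A minimal sketch of building one (field and terms are illustrative):

```java
// Sketch: a lexicographic range over indexed terms, matching terms
// >= lowerTerm and <= upperTerm when both endpoints are included.
import org.apache.lucene.search.TermRangeQuery;

public class RangeDemo {
  public static void main(String[] args) {
    // terms in ["apple", "banana"], both endpoints included
    TermRangeQuery q =
        TermRangeQuery.newStringRange("title", "apple", "banana", true, true);
    System.out.println(q);
  }
}
```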
TermRangeQueryNode objects.
Terms for this field.
Scorer for documents matching a Term.
TermScorer.
AbstractSecondPassGroupingCollector that groups based on field values and more specifically uses SortedDocValues to collect grouped docs.
TermsEnum over the values.
TermsEnum over the values.
TermsEnum.seekCeil(BytesRef), TermsEnum.seekExact(BytesRef)) or step through (BytesRefIterator.next()) terms to obtain frequency information (TermsEnum.docFreq()), PostingsEnum or PostingsEnum for the current term (TermsEnum.postings(org.apache.lucene.index.PostingsEnum)).
.ConstantScoreQuery
over a BooleanQuery
containing only
BooleanClause.Occur.SHOULD
clauses.TermsQuery
from the given collection.TermsQuery
from the given collection for
a single field.TermsQuery
from the given BytesRef
array for
a single field.TermsQuery
from the given array.TermsEnum
without re-seeking.TermStatistics
for a term.TermStats.docFreq
and TermStats.totalTermFreq
).LeafReader
, typically from term vectors.TermVectorsWriter.startDocument(int)
is called,
informing the Codec how many fields will be written.CharSequence
sqrt(freq)
.Similarity
with the Vector Space Model.TFIDFSimilarity.tf(float)
for every document.Analyzer
for Thai language.BreakIterator
to tokenize Thai text.ThaiTokenizer
.TimeLimitingCollector
is used to timeout search requests that
take longer than the maximum allowed search time limit.Collector
with a specified timeout.Comparator
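The TimeLimitingCollector entries above describe wrapping a Collector with a tick budget. A minimal sketch of that pattern (the ~1000-tick budget and hit count are illustrative; the global counter ticks roughly in milliseconds):

```java
// Sketch: aborting collection with TimeLimitingCollector once the allowed
// ticks are spent; partial results gathered before the timeout remain usable.
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.*;
import org.apache.lucene.util.Counter;

public class TimeoutDemo {
  static TopDocs searchWithTimeout(DirectoryReader reader, Query query) throws Exception {
    IndexSearcher searcher = new IndexSearcher(reader);
    TopScoreDocCollector topDocs = TopScoreDocCollector.create(10);
    Counter clock = TimeLimitingCollector.getGlobalCounter();
    Collector limited = new TimeLimitingCollector(topDocs, clock, 1000); // ~1s budget
    try {
      searcher.search(query, limited);
    } catch (TimeLimitingCollector.TimeExceededException e) {
      // timed out: fall through and return whatever was collected so far
    }
    return topDocs.topDocs();
  }
}
```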
Comparator.
List using the Comparator.
List in natural order.
TimSorter.
SegGraph
PositionLengthAttribute) from the provided TokenStream, and creates the corresponding automaton where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term.
n.
n, matching the specified exact prefix.
Automaton from this RegExp.
Automaton from this RegExp.
Automaton from this RegExp.
Automaton from this RegExp.
BytesRef that has the same content as this buffer.
CharsRef that has the same content as this builder.
ToParentBlockJoinQuery, except this query joins in reverse: you provide a Query matching parent documents and it joins down to child documents.
FST to a GraphViz dot-language description for visualization.
CharsRef that has the same content as this builder.
Token as implementation for the basic attributes and return the default impl (with "Impl" appended) for all other attributes.
TokenFilter instances.
Analyzer.
Analyzer.
TokenizedPhraseQueryNode represents a node created by a code that tokenizes/lemmatizes/analyzes.
Tokenizer.setReader(java.io.Reader) to provide input.
Tokenizer.setReader(java.io.Reader) to provide input.
Tokenizer instances.
OffsetAttribute.startOffset() and OffsetAttribute.endOffset() First 4 bytes are the start
TokenOffsetPayloadTokenFilter.
TokenStream for use with the Highlighter - can obtain from term vectors with offsets and positions or from an Analyzer re-parsing the stored content.
fieldName, tokenizing the contents of reader.
fieldName, tokenizing the contents of text.
Attribute instances.
Analyzer.TokenStreamComponents instance.
Analyzer.TokenStreamComponents instance.
Automaton where the transition labels are UTF8 bytes (or Unicode code points if unicodeArcs is true) from the TermToBytesRefAttribute.
TermAutomatonQuery where the transition labels are tokens from the TermToBytesRefAttribute.
Character.toLowerCase(int) starting at the given offset.
BasicQueryFactory would exceed the limit of query clauses.
IndexSearcher to use in conjunction with ToParentBlockJoinCollector.
IndexWriter.addDocuments() or IndexWriter.updateDocuments() API.
ToParentBlockJoinQuery's scorer.
IndexSearcher.search(Query,int).
TopDocs output.
IndexSearcher.search(Query,int,Sort).
IndexReaderContext of the top-level IndexReader, used internally only for asserting.
size terms.
size terms.
size terms.
size terms.
size terms.
PositionLengthAttribute) from the provided TokenStream, and creates the corresponding automaton where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term.
CharSequence interface.
SrndQuery.hashCode() and SrndQuery.equals(Object), see LUCENE-2945.
field assumed to be the default field and omitted.
IB <distribution> <lambda><normalization>.
TermScorer.
Object.toString().
PostingsReaderBase, plus the other few vInts stored in the frame.
PostingsReaderBase stores.
term across all documents (the sum of the freq() for each doc that has this term).
t.
TermState instances passed to TermContext.register(TermState, int, int, long).
TotalTermFreqValueSource returns the total term freq (sum of term freqs across all documents).
List of all tokens in the map, ordered by startOffset.
Character.toUpperCase(int) starting at the given offset.
ControlledRealTimeReopenThread to ensure specific changes are visible.
TrackingIndexWriter wrapping the provided IndexWriter.
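The two entries above describe the near-real-time pattern built from TrackingIndexWriter and ControlledRealTimeReopenThread. A sketch of that flow, assuming the 5.x-era API in which TrackingIndexWriter still exists (stale-time values are illustrative):

```java
// Sketch: NRT visibility — wait for the generation returned by an update
// before searching, so the change is guaranteed to be visible.
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.TrackingIndexWriter;
import org.apache.lucene.search.ControlledRealTimeReopenThread;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.SearcherManager;

public class NrtDemo {
  static void indexAndSearch(IndexWriter writer, SearcherManager manager, Document doc)
      throws Exception {
    TrackingIndexWriter tracking = new TrackingIndexWriter(writer);
    ControlledRealTimeReopenThread<IndexSearcher> reopener =
        new ControlledRealTimeReopenThread<>(tracking, manager, 1.0, 0.025);
    reopener.start();

    long gen = tracking.addDocument(doc);  // returns the generation of this change
    reopener.waitForGeneration(gen);       // block until a reopened reader sees it

    IndexSearcher searcher = manager.acquire();
    try {
      // search here: the new document is now visible
    } finally {
      manager.release(searcher);
      reopener.close();
    }
  }
}
```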
state, assuming position and characteristic vector vector
Automaton.
TrimFilter.
TrimFilter.
TruncateTokenFilter.
DirectoryReader.open(IndexWriter)).
IndexWriter.tryDeleteDocument(IndexReader,int) and returns the generation that reflects this change.
true iff the refCount was successfully incremented, otherwise false.
Analyzer for Turkish.
TurkishAnalyzer.DEFAULT_STOPWORD_FILE.
TurkishLowerCaseFilter.
TwoPhaseCommitTool.execute(TwoPhaseCommit...) when an object fails to commit().
TwoPhaseCommitTool.execute(TwoPhaseCommit...) when an object fails to prepareCommit().
TwoPhaseIterator view of this Scorer.
Scorer.twoPhaseIterator() to expose an approximation of a DocIdSetIterator.
TwoPhaseIterator.approximation.
PackedTokenAttributeImpl.type() a payload.
TypeAsPayloadTokenFilter.
TypeAttribute.
TypeAttribute.DEFAULT_TYPE
type
TypeTokenFilter.
TypeTokenFilter that filters tokens out (useWhiteList=false).
TypeTokenFilter.
UAX29URLEmailTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words.
UAX29URLEmailAnalyzer.STOP_WORDS_SET).
AttributeFactory
UAX29URLEmailTokenizer.
CharTokenizer instances.
UnicodeWhitespaceTokenizer.
UnicodeWhitespaceAnalyzer
AttributeFactory.
MMapDirectory.UNMAP_SUPPORTED is false, this contains the reason why unmapping is not supported.
true, if this platform supports unmapping mmapped files.
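The last two entries describe MMapDirectory.UNMAP_SUPPORTED and the companion constant holding the reason when unmapping is unavailable. A small sketch of checking it before enabling unmap (the index path is illustrative):

```java
// Sketch: only enable eager unmapping when the platform supports it,
// otherwise print the reason reported by the library.
import java.nio.file.Paths;
import org.apache.lucene.store.MMapDirectory;

public class UnmapDemo {
  public static void main(String[] args) throws Exception {
    if (MMapDirectory.UNMAP_SUPPORTED) {
      MMapDirectory dir = new MMapDirectory(Paths.get("/tmp/index"));
      dir.setUseUnmap(true); // eagerly unmap mapped buffers on close
      dir.close();
    } else {
      System.out.println(MMapDirectory.UNMAP_NOT_SUPPORTED_REASON);
    }
  }
}
```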
CharArrayMap.
CharArraySet.
bits, interpreted as an unsigned value.
WikipediaTokenizer.TOKENS_ONLY was used, produce multiple tokens.
reader as long as this reader is an instance of FilterDirectoryReader.
reader as long as this reader is an instance of FilterLeafReader.
dir as long as this reader is an instance of FilterDirectory.
DocValues.singleton(SortedDocValues), or null.
DocValues.singleton(NumericDocValues, Bits), or null.
DocValues.singleton(NumericDocValues, Bits), or null.
term and then adding the new document.
IndexWriter.updateDocument(Term,Iterable) and returns the generation that reflects this change.
IndexWriter.updateDocuments(Term,Iterable) and returns the generation that reflects this change.
newTop and run PriorityQueue.updateTop().
MergePolicy is used for upgrading all existing segments of an index when calling IndexWriter.forceMerge(int).
MergePolicy and intercept forceMerge requests to only upgrade segments written with previous Lucene versions.
UpperCaseFilter.
Outputs implementation where each output is one or two non-negative long values.
QueryCachingPolicy that tracks usage statistics of recently-used filters in order to decide on which filters are worth caching.
UnicodeUtil.UTF8toUTF16(byte[], int, int, char[])
shortestPaths().
Util.TopNSearcher
IllegalArgumentException if any of these settings is invalid.
IllegalArgumentException if any of these settings is invalid.
DocValuesFieldUpdates.Iterator.nextDoc().
QueryNode that holds an arbitrary value.
FunctionValues for a particular reader.
FieldComparator that works off of the FunctionValues for a ValueSource instead of the normal Lucene FieldComparator that works off of a FieldCache.
Scorer which returns the result of FunctionValues.floatVal(int) as the score for a document, and which filters out documents that don't match ValueSourceScorer.matches(int).
TermVectorsReader to read term vectors.
TermVectorsWriter to write term vectors.
LockFactory that wraps another LockFactory and verifies that each lock obtain/release is "correct" (never results in two processes holding the lock at the same time).
BlockTreeTermsWriter, except it also stores a version per term, and adds a method to its TermsEnum implementation to seekExact only if the version is >= the specified version.
Terms.
baseClass and method declaration.
docID
LeafReader for the newly merged segment, before that segment is made visible to near-real-time readers.
WeakHashMap and IdentityHashMap.
FieldFragList.
FragListBuilder.
WeightedSpanTerm objects from a Query based on whether Terms from the Query are contained in a supplied TokenStream.
WHITESPACE operators: ' ' '\n' '\r' '\t'
WhitespaceTokenizer.
WhitespaceAnalyzer
Character.isWhitespace(int).
AttributeFactory.
WhitespaceTokenizer.
slop factor.
WikipediaTokenizer.
WikipediaTokenizer.
WikipediaTokenizer.
WikipediaTokenizer.
term.
term.
WildcardQueryNode represents a wildcard query; this does not apply to phrases.
WildcardQuery object from a WildcardQueryNode object.
StandardSyntaxParser creates PrefixWildcardQueryNode nodes which have values containing the prefixed wildcard.
Directory implementation for Microsoft Windows.
FSLockFactory.getDefault().
WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE as its charTypeTable
WordDelimiterFilter.
WordType of the text
CodecReader view of reader.
LeafReader from an IndexReader of any kind.
reader according to the order defined by sort.
SortingLeafReader.wrap(org.apache.lucene.index.LeafReader, Sort) but operates directly on a Sorter.DocMap.
Bits instance that returns true if, and only if, any of the children of the given parent document has a value.
SortedSetDocValues in order to only select one value per parent among its children using the configured selection type.
SortedDocValues in order to only select one value per parent among its children using the configured selection type.
SortedNumericDocValues in order to only select one value per parent among its children using the configured selection type.
NumericDocValues in order to only select one value per parent among its children using the configured selection type.
Collector instances with a MultiCollector.
IndexCommit as a SnapshotDeletionPolicy.SnapshotCommitPoint.
FieldInfos to the directory.
SegmentInfo data.
IndexOutput
DataOutput.
For format).
DefaultIndexingChain.flush(org.apache.lucene.index.SegmentWriteState)).
DataOutput.
bitsPerValue bits.
DataOutput.writeMapOfStrings(Map) instead.
DataOutput.writeMapOfStrings(Map) instead.
TermsEnum to pull a PostingsEnum.
DataOutput.
DataOutput.
zig-zag-encoded variable-length integer.
zig-zag-encoded variable-length long.
BitUtil.zigZagEncode(int).
BitUtil.zigZagEncode(long).
BitUtil.zigZagEncode(long) but on integers.
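The zig-zag entries above refer to the BitUtil.zigZagEncode mapping, which folds small negative and positive integers onto small unsigned values so variable-length encodings stay short. A self-contained re-derivation of that mapping (not the library source itself):

```java
// Sketch: zig-zag encoding as in BitUtil.zigZagEncode/zigZagDecode.
public class ZigZagDemo {
  static int zigZagEncode(int i) {
    return (i >> 31) ^ (i << 1);   // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4 ...
  }

  static int zigZagDecode(int i) {
    return (i >>> 1) ^ -(i & 1);   // inverse mapping
  }

  public static void main(String[] args) {
    for (int v : new int[] {0, -1, 1, -2, 2, 123, -123}) {
      int enc = zigZagEncode(v);
      System.out.println(v + " -> " + enc + " -> " + zigZagDecode(enc));
    }
  }
}
```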