Package | Description |
---|---|
org.apache.lucene.analysis | API and code to convert text into indexable/searchable tokens. |
org.apache.lucene.analysis.standard | A fast grammar-based tokenizer constructed with JFlex. |
org.apache.lucene.document | The logical representation of a Document for indexing and searching. |
Modifier and Type | Class and Description |
---|---|
class | CachingTokenFilter: This class can be used if the Tokens of a TokenStream are intended to be consumed more than once. |
class | CharTokenizer: An abstract base class for simple, character-oriented tokenizers. |
class | ISOLatin1AccentFilter: A filter that replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) by their unaccented equivalents. |
class | KeywordTokenizer: Emits the entire input as a single token. |
class | LengthFilter: Removes words that are too long or too short from the stream. |
class | LetterTokenizer: A LetterTokenizer is a tokenizer that divides text at non-letters. |
class | LowerCaseFilter: Normalizes token text to lower case. |
class | LowerCaseTokenizer: LowerCaseTokenizer performs the functions of LetterTokenizer and LowerCaseFilter together. |
class | PorterStemFilter: Transforms the token stream as per the Porter stemming algorithm. |
class | SinkTokenizer: A SinkTokenizer can be used to cache Tokens for use in an Analyzer. |
class | StopFilter: Removes stop words from a token stream. |
class | TeeTokenFilter: Works in conjunction with the SinkTokenizer to provide the ability to set aside tokens that have already been analyzed. |
class | TokenFilter: A TokenFilter is a TokenStream whose input is another token stream. |
class | Tokenizer: A Tokenizer is a TokenStream whose input is a Reader. |
class | WhitespaceTokenizer: A WhitespaceTokenizer is a tokenizer that divides text at whitespace. |
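The classes above compose into a pipeline: a Tokenizer produces the initial stream from a Reader, and each TokenFilter wraps another TokenStream. Below is a minimal sketch of such a chain, assuming the pre-2.9 Lucene API in which TokenStream.next() returns Token objects and Token.termText() exposes the text; exact signatures vary between releases.

```java
import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class FilterChainSketch {
    public static void main(String[] args) throws Exception {
        Reader reader = new StringReader("The Quick Brown Fox");
        // Each TokenFilter wraps another TokenStream, so filters compose into a chain.
        TokenStream stream = new WhitespaceTokenizer(reader);
        stream = new LowerCaseFilter(stream);
        stream = new StopFilter(stream, new String[] { "the" });
        // In the pre-2.9 API, next() returns Tokens until the stream is exhausted.
        Token token;
        while ((token = stream.next()) != null) {
            System.out.println(token.termText());
        }
        stream.close();
    }
}
```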
Modifier and Type | Field and Description |
---|---|
protected TokenStream | TokenFilter.input: The source of tokens for this filter. |
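Since TokenFilter.input holds the wrapped stream, a subclass typically pulls tokens from it and decides which ones to pass through. A hypothetical example (SingleCharDropFilter is not part of Lucene), again assuming the Token-based next() API:

```java
import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical filter that drops one-character tokens.
public class SingleCharDropFilter extends TokenFilter {
    public SingleCharDropFilter(TokenStream in) {
        super(in);
    }

    public Token next() throws IOException {
        Token token;
        // input is the wrapped TokenStream inherited from TokenFilter.
        while ((token = input.next()) != null) {
            if (token.termText().length() > 1) {
                return token;
            }
        }
        return null;
    }
}
```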
Modifier and Type | Method and Description |
---|---|
TokenStream | SimpleAnalyzer.reusableTokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | PerFieldAnalyzerWrapper.reusableTokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | KeywordAnalyzer.reusableTokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | StopAnalyzer.reusableTokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | WhitespaceAnalyzer.reusableTokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | Analyzer.reusableTokenStream(java.lang.String fieldName, java.io.Reader reader): Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method. |
TokenStream | SimpleAnalyzer.tokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | PerFieldAnalyzerWrapper.tokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | KeywordAnalyzer.tokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | StopAnalyzer.tokenStream(java.lang.String fieldName, java.io.Reader reader): Filters LowerCaseTokenizer with StopFilter. |
TokenStream | WhitespaceAnalyzer.tokenStream(java.lang.String fieldName, java.io.Reader reader) |
abstract TokenStream | Analyzer.tokenStream(java.lang.String fieldName, java.io.Reader reader): Creates a TokenStream which tokenizes all the text in the provided Reader. |
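Both method families return a TokenStream for a field's text: tokenStream() builds a fresh chain on every call, while reusableTokenStream() may hand back a thread-private stream from a previous call, reset to the new Reader. A minimal sketch, assuming a Lucene 2.x SimpleAnalyzer and the Token-based next() API:

```java
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

public class AnalyzerCallSketch {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new SimpleAnalyzer();
        // tokenStream() always builds a fresh chain for the given field and reader.
        TokenStream fresh = analyzer.tokenStream("body", new StringReader("Hello World"));
        Token token;
        while ((token = fresh.next()) != null) {
            System.out.println(token.termText());
        }
        fresh.close();
        // reusableTokenStream() may return a thread-private stream reset to the new reader.
        TokenStream reused = analyzer.reusableTokenStream("body", new StringReader("Hello Again"));
        while ((token = reused.next()) != null) {
            System.out.println(token.termText());
        }
    }
}
```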
Constructor and Description |
---|
CachingTokenFilter(TokenStream input) |
ISOLatin1AccentFilter(TokenStream input) |
LengthFilter(TokenStream in, int min, int max): Build a filter that removes words that are too long or too short from the text. |
LowerCaseFilter(TokenStream in) |
PorterStemFilter(TokenStream in) |
StopFilter(TokenStream in, java.util.Set stopWords): Constructs a filter which removes words from the input TokenStream that are named in the Set. |
StopFilter(TokenStream input, java.util.Set stopWords, boolean ignoreCase): Construct a token stream filtering the given input. |
StopFilter(TokenStream input, java.lang.String[] stopWords): Construct a token stream filtering the given input. |
StopFilter(TokenStream in, java.lang.String[] stopWords, boolean ignoreCase): Constructs a filter which removes words from the input TokenStream that are named in the array of words. |
TeeTokenFilter(TokenStream input, SinkTokenizer sink) |
TokenFilter(TokenStream input): Construct a token stream filtering the given input. |
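As a rough illustration of the constructors listed above, the sketch below chains a LengthFilter and the Set-based StopFilter over a WhitespaceTokenizer. How the stop-word Set is interpreted (for example, case folding) differs between releases, so treat this as an assumption rather than a guarantee.

```java
import java.io.StringReader;
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.analysis.LengthFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class ConstructorSketch {
    public static void main(String[] args) {
        TokenStream source = new WhitespaceTokenizer(new StringReader("a modest Example SENTENCE"));
        // Drop tokens shorter than 2 or longer than 10 characters.
        TokenStream lengthFiltered = new LengthFilter(source, 2, 10);
        // Remove stop words from the stream, ignoring case.
        Set stopWords = new HashSet();
        stopWords.add("example");
        TokenStream chain = new StopFilter(lengthFiltered, stopWords, true);
        // The resulting chain would then be consumed via next() or handed to an IndexWriter.
    }
}
```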
Modifier and Type | Class and Description |
---|---|
class | StandardFilter: Normalizes tokens extracted with StandardTokenizer. |
class | StandardTokenizer: A grammar-based tokenizer constructed with JFlex. |
Modifier and Type | Method and Description |
---|---|
TokenStream | StandardAnalyzer.reusableTokenStream(java.lang.String fieldName, java.io.Reader reader) |
TokenStream | StandardAnalyzer.tokenStream(java.lang.String fieldName, java.io.Reader reader) |
Constructor and Description |
---|
StandardFilter(TokenStream in): Construct a StandardFilter that filters the given input. |
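StandardTokenizer performs the grammar-based splitting and StandardFilter normalizes its output (for example, stripping trailing possessive 's and dots from acronyms). A minimal sketch of that pairing, again assuming the pre-2.9 Token-based API:

```java
import java.io.StringReader;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class StandardChainSketch {
    public static void main(String[] args) throws Exception {
        // StandardTokenizer does the grammar-based splitting; StandardFilter normalizes its output.
        TokenStream stream = new StandardTokenizer(new StringReader("XY&Z Corp's I.B.M. deal"));
        stream = new StandardFilter(stream);
        Token token;
        while ((token = stream.next()) != null) {
            System.out.println(token.termText());
        }
        stream.close();
    }
}
```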
Modifier and Type | Method and Description |
---|---|
TokenStream | Fieldable.tokenStreamValue(): The value of the field as a TokenStream, or null. |
TokenStream | Field.tokenStreamValue(): The value of the field as a TokenStream, or null. |
Modifier and Type | Method and Description |
---|---|
void | Field.setValue(TokenStream value): Expert: change the value of this field. |
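Field.setValue(TokenStream) is aimed at reusing a single Field instance across documents rather than allocating a new one per document. A hypothetical fragment of that pattern, with the actual indexing calls omitted:

```java
import java.io.StringReader;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.document.Field;

public class FieldReuseSketch {
    public static void main(String[] args) {
        // Build the field once with an initial TokenStream value...
        Field body = new Field("body", new WhitespaceTokenizer(new StringReader("first document")));
        // ...index the first document here, then swap in the next document's stream.
        body.setValue(new WhitespaceTokenizer(new StringReader("second document")));
        // The same Field object can now be added to the next Document before indexing.
    }
}
```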
Constructor and Description |
---|
Field(java.lang.String name, TokenStream tokenStream): Create a tokenized and indexed field that is not stored. |
Field(java.lang.String name, TokenStream tokenStream, Field.TermVector termVector): Create a tokenized and indexed field that is not stored, optionally with storing term vectors. |
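These constructors allow indexing text that has already been analyzed: instead of handing the Analyzer a String, the caller supplies the TokenStream directly, and the resulting field is indexed but not stored. A minimal sketch, assuming the Lucene 2.x document API:

```java
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class PreAnalyzedFieldSketch {
    public static void main(String[] args) {
        // Supply an already-built TokenStream instead of letting the Analyzer tokenize a String.
        TokenStream stream = new WhitespaceTokenizer(new StringReader("pre analyzed content"));
        Field field = new Field("body", stream, Field.TermVector.YES);
        Document doc = new Document();
        doc.add(field);
        // doc can now be passed to IndexWriter.addDocument(); the field is indexed but not stored.
    }
}
```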
Copyright © 2000-2014 Apache Software Foundation. All Rights Reserved.