Packages that use Tokenizer

| Package | Description |
|---|---|
| org.apache.lucene.analysis | API and code to convert text into indexable tokens. |
| org.apache.lucene.analysis.ru | Support for indexing and searching Russian text. |
| org.apache.lucene.analysis.standard | A grammar-based tokenizer constructed with JavaCC. |
Uses of Tokenizer in org.apache.lucene.analysis

Subclasses of Tokenizer in org.apache.lucene.analysis:

| Class | Description |
|---|---|
| CharTokenizer | An abstract base class for simple, character-oriented tokenizers. |
| LetterTokenizer | A tokenizer that divides text at non-letters. |
| LowerCaseTokenizer | Performs the function of LetterTokenizer and LowerCaseFilter together. |
| WhitespaceTokenizer | A tokenizer that divides text at whitespace. |
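The sketch below shows how one of these character-oriented subclasses might be driven directly. It assumes the pre-2.9 TokenStream API (where next() returns a Token, or null at end of stream, and termText() yields the token text); the demo class name is made up for illustration.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.Tokenizer;

public class CharTokenizerDemo {
  public static void main(String[] args) throws Exception {
    // LowerCaseTokenizer splits at non-letters and lower-cases each token,
    // i.e. LetterTokenizer plus LowerCaseFilter in a single pass.
    Tokenizer tokenizer =
        new LowerCaseTokenizer(new StringReader("The Quick-Brown Fox!"));

    // Assumed pre-2.9 loop: next() returns null once the stream is exhausted.
    for (Token t = tokenizer.next(); t != null; t = tokenizer.next()) {
      System.out.println(t.termText());   // prints: the, quick, brown, fox
    }
    tokenizer.close();
  }
}
```

Swapping in WhitespaceTokenizer instead would keep the same loop but split only at whitespace, leaving case and punctuation untouched.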
Uses of Tokenizer in org.apache.lucene.analysis.ru

Subclasses of Tokenizer in org.apache.lucene.analysis.ru:

| Class | Description |
|---|---|
| RussianLetterTokenizer | Extends LetterTokenizer by additionally looking up letters in a given "russian charset". |
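A minimal sketch of the Russian tokenizer follows. The constructor signature (a Reader plus a char[] "russian charset" table such as RussianCharsets.UnicodeRussian) and the demo class name are assumptions for this Lucene version; check the constructors shipped with your release before relying on them. The token loop again assumes the pre-2.9 TokenStream API.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.ru.RussianCharsets;
import org.apache.lucene.analysis.ru.RussianLetterTokenizer;

public class RussianTokenizerDemo {
  public static void main(String[] args) throws Exception {
    // Assumed constructor: input Reader plus the charset lookup table that
    // tells the tokenizer which additional characters count as letters.
    RussianLetterTokenizer tokenizer = new RussianLetterTokenizer(
        new StringReader("Быстрая лисица 123"),
        RussianCharsets.UnicodeRussian);

    for (Token t = tokenizer.next(); t != null; t = tokenizer.next()) {
      System.out.println(t.termText());   // Cyrillic words survive; digits are split off
    }
    tokenizer.close();
  }
}
```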
Uses of Tokenizer in org.apache.lucene.analysis.standard

Subclasses of Tokenizer in org.apache.lucene.analysis.standard:

| Class | Description |
|---|---|
| StandardTokenizer | A grammar-based tokenizer constructed with JavaCC. |
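Because StandardTokenizer is grammar-based, it also assigns a lexical type to each token. The sketch below, again assuming the pre-2.9 TokenStream API and a Reader-taking constructor, prints each token's text alongside its type; the class name and sample input are illustrative only.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class StandardTokenizerDemo {
  public static void main(String[] args) throws Exception {
    // The JavaCC grammar recognizes alphanumerics, numbers, e-mail
    // addresses, host names and similar constructs as distinct token types.
    StandardTokenizer tokenizer = new StandardTokenizer(
        new StringReader("Contact lucene@apache.org or visit example.com in 2004"));

    for (Token t = tokenizer.next(); t != null; t = tokenizer.next()) {
      // type() returns the lexical type the grammar assigned to this token.
      System.out.println(t.termText() + "\t" + t.type());
    }
    tokenizer.close();
  }
}
```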