Class LuceneStandardTokenizer

java.lang.Object
    com.azure.search.documents.indexes.models.LexicalTokenizer
        com.azure.search.documents.indexes.models.LuceneStandardTokenizer
- All Implemented Interfaces:
com.azure.json.JsonSerializable<LexicalTokenizer>
Breaks text following the Unicode Text Segmentation rules. This tokenizer is
implemented using Apache Lucene.
Constructor Summary

Constructors:
LuceneStandardTokenizer(String name)
    Constructor of LuceneStandardTokenizer.

Method Summary

Integer getMaxTokenLength()
    Get the maxTokenLength property: The maximum token length.
LuceneStandardTokenizer setMaxTokenLength(Integer maxTokenLength)
    Set the maxTokenLength property: The maximum token length.
com.azure.json.JsonWriter toJson(com.azure.json.JsonWriter jsonWriter)

Methods inherited from class com.azure.search.documents.indexes.models.LexicalTokenizer:
fromJson, getName, getOdataType

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface com.azure.json.JsonSerializable:
toJson, toJson, toJsonBytes, toJsonString
Constructor Details

LuceneStandardTokenizer

LuceneStandardTokenizer(String name)

Constructor of LuceneStandardTokenizer.

Parameters:
name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
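The naming rule above can be sketched as a standalone check. The regex and helper class below are illustrative only, not part of the Azure SDK (the service enforces this rule server-side):

```java
import java.util.regex.Pattern;

public class TokenizerNameCheck {
    // Illustrative regex for the documented rule: letters, digits, spaces,
    // dashes or underscores only; must start and end with an alphanumeric
    // character. The 128-character limit is checked separately below.
    private static final Pattern NAME_PATTERN =
            Pattern.compile("[a-zA-Z0-9]([a-zA-Z0-9 _-]*[a-zA-Z0-9])?");

    public static boolean isValidTokenizerName(String name) {
        return name != null
                && name.length() <= 128
                && NAME_PATTERN.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidTokenizerName("my-tokenizer_1")); // true
        System.out.println(isValidTokenizerName("-leading-dash"));  // false
    }
}
```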
Method Details

getMaxTokenLength

Get the maxTokenLength property: The maximum token length. Default is 255. Tokens longer than the maximum length are split.

Returns:
the maxTokenLength value.
setMaxTokenLength

Set the maxTokenLength property: The maximum token length. Default is 255. Tokens longer than the maximum length are split.

Parameters:
maxTokenLength - the maxTokenLength value to set.

Returns:
the LuceneStandardTokenizer object itself.
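The splitting behavior described above (a token longer than maxTokenLength is broken into pieces of at most that length) can be sketched in plain Java. This is a simplified stand-in for illustration; the real tokenization happens inside Apache Lucene, not in the SDK model class:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxTokenLengthDemo {
    // Simplified illustration of the documented behavior: any token longer
    // than maxTokenLength is split into chunks of at most maxTokenLength
    // characters.
    public static List<String> splitToken(String token, int maxTokenLength) {
        List<String> pieces = new ArrayList<>();
        for (int i = 0; i < token.length(); i += maxTokenLength) {
            pieces.add(token.substring(i, Math.min(i + maxTokenLength, token.length())));
        }
        return pieces;
    }

    public static void main(String[] args) {
        // A 7-character token with maxTokenLength 3 yields [abc, def, g].
        System.out.println(splitToken("abcdefg", 3));
    }
}
```

With the default maxTokenLength of 255, most natural-language tokens are left intact; lowering the value mainly affects long identifiers, URLs, and similar strings.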
toJson

Description copied from class: LexicalTokenizer

Specified by:
toJson in interface com.azure.json.JsonSerializable<LexicalTokenizer>
Overrides:
toJson in class LexicalTokenizer
Throws:
IOException