MicrosoftLanguageStemmingTokenizer Class
- java.lang.Object
  - com.azure.search.documents.indexes.models.LexicalTokenizer
    - com.azure.search.documents.indexes.models.MicrosoftLanguageStemmingTokenizer
public final class MicrosoftLanguageStemmingTokenizer
extends LexicalTokenizer
Divides text using language-specific rules and reduces words to their base forms.
Constructor Summary
| Constructor | Description |
|---|---|
| MicrosoftLanguageStemmingTokenizer(String name) | Creates an instance of MicrosoftLanguageStemmingTokenizer class. |
Method Summary
| Modifier and Type | Method and Description |
|---|---|
| static MicrosoftLanguageStemmingTokenizer | fromJson(JsonReader jsonReader) Reads an instance of MicrosoftLanguageStemmingTokenizer from the JsonReader. |
| MicrosoftStemmingTokenizerLanguage | getLanguage() Get the language property: The language to use. |
| Integer | getMaxTokenLength() Get the maxTokenLength property: The maximum token length. |
| String | getOdataType() Get the odataType property: A URI fragment specifying the type of tokenizer. |
| Boolean | isSearchTokenizer() Get the isSearchTokenizerUsed property: A value indicating how the tokenizer is used. |
| MicrosoftLanguageStemmingTokenizer | setIsSearchTokenizerUsed(Boolean isSearchTokenizerUsed) Set the isSearchTokenizerUsed property: A value indicating how the tokenizer is used. |
| MicrosoftLanguageStemmingTokenizer | setLanguage(MicrosoftStemmingTokenizerLanguage language) Set the language property: The language to use. |
| MicrosoftLanguageStemmingTokenizer | setMaxTokenLength(Integer maxTokenLength) Set the maxTokenLength property: The maximum token length. |
| JsonWriter | toJson(JsonWriter jsonWriter) |
Methods inherited from LexicalTokenizer
Methods inherited from java.lang.Object
Constructor Details
MicrosoftLanguageStemmingTokenizer
public MicrosoftLanguageStemmingTokenizer(String name)
Creates an instance of MicrosoftLanguageStemmingTokenizer class.
Parameters:
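A typical construction uses the constructor together with the fluent setters listed above. This is an illustrative sketch assuming azure-search-documents is on the classpath; the tokenizer name "my-stemmer" and the chosen values are examples, not API requirements:

```java
import com.azure.search.documents.indexes.models.MicrosoftLanguageStemmingTokenizer;
import com.azure.search.documents.indexes.models.MicrosoftStemmingTokenizerLanguage;

public class TokenizerSetup {
    public static void main(String[] args) {
        // "my-stemmer" is an illustrative name for this custom tokenizer.
        MicrosoftLanguageStemmingTokenizer tokenizer =
            new MicrosoftLanguageStemmingTokenizer("my-stemmer")
                .setLanguage(MicrosoftStemmingTokenizerLanguage.ENGLISH) // default is English
                .setMaxTokenLength(255)           // default is 255
                .setIsSearchTokenizerUsed(false); // indexing tokenizer (the default)
        System.out.println(tokenizer.getOdataType());
    }
}
```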
Method Details
fromJson
public static MicrosoftLanguageStemmingTokenizer fromJson(JsonReader jsonReader)
Reads an instance of MicrosoftLanguageStemmingTokenizer from the JsonReader.
Parameters:
Returns:
Throws:
getLanguage
public MicrosoftStemmingTokenizerLanguage getLanguage()
Get the language property: The language to use. The default is English.
Returns:
getMaxTokenLength
public Integer getMaxTokenLength()
Get the maxTokenLength property: The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
Returns:
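The chunk-then-split rule described above can be sketched in plain Java. This is an illustration of the documented length arithmetic, not the SDK's internal implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class TokenSplitDemo {
    // Tokens longer than 300 characters are first chunked at 300;
    // each chunk is then split at the configured maxTokenLength.
    static List<Integer> splitLengths(int tokenLength, int maxTokenLength) {
        List<Integer> lengths = new ArrayList<>();
        for (int i = 0; i < tokenLength; i += 300) {
            int chunk = Math.min(300, tokenLength - i);
            for (int j = 0; j < chunk; j += maxTokenLength) {
                lengths.add(Math.min(maxTokenLength, chunk - j));
            }
        }
        return lengths;
    }

    public static void main(String[] args) {
        // A 650-character token with maxTokenLength 100:
        // 300-char chunks -> 300, 300, 50; each chunk then split at 100.
        System.out.println(splitLengths(650, 100)); // [100, 100, 100, 100, 100, 100, 50]
    }
}
```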
getOdataType
public String getOdataType()
Get the odataType property: A URI fragment specifying the type of tokenizer.
Overrides:
LexicalTokenizer.getOdataType()
Returns:
isSearchTokenizer
public Boolean isSearchTokenizer()
Get the isSearchTokenizerUsed property: A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
Returns:
setIsSearchTokenizerUsed
public MicrosoftLanguageStemmingTokenizer setIsSearchTokenizerUsed(Boolean isSearchTokenizerUsed)
Set the isSearchTokenizerUsed property: A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
Parameters:
Returns:
setLanguage
public MicrosoftLanguageStemmingTokenizer setLanguage(MicrosoftStemmingTokenizerLanguage language)
Set the language property: The language to use. The default is English.
Parameters:
Returns:
setMaxTokenLength
public MicrosoftLanguageStemmingTokenizer setMaxTokenLength(Integer maxTokenLength)
Set the maxTokenLength property: The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
Parameters:
Returns:
toJson
public JsonWriter toJson(JsonWriter jsonWriter)
Overrides:
LexicalTokenizer.toJson(JsonWriter jsonWriter)
Parameters:
Throws:
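The toJson/fromJson pair supports a round trip through azure-json. A minimal sketch, assuming azure-search-documents and azure-json are on the classpath; "demo-tokenizer" is an illustrative name:

```java
import com.azure.json.JsonProviders;
import com.azure.json.JsonReader;
import com.azure.json.JsonWriter;
import com.azure.search.documents.indexes.models.MicrosoftLanguageStemmingTokenizer;
import java.io.IOException;
import java.io.StringWriter;

public class TokenizerJsonRoundTrip {
    public static void main(String[] args) throws IOException {
        StringWriter out = new StringWriter();
        // Serialize: toJson writes this tokenizer's state to the JsonWriter.
        try (JsonWriter writer = JsonProviders.createWriter(out)) {
            new MicrosoftLanguageStemmingTokenizer("demo-tokenizer").toJson(writer);
        }
        // Deserialize: fromJson reads an equivalent instance back.
        try (JsonReader reader = JsonProviders.createReader(out.toString())) {
            MicrosoftLanguageStemmingTokenizer copy =
                MicrosoftLanguageStemmingTokenizer.fromJson(reader);
            System.out.println(copy.getName());
        }
    }
}
```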