MicrosoftLanguageTokenizer Class
- java.lang.Object
- com.azure.search.documents.indexes.models.LexicalTokenizer
- com.azure.search.documents.indexes.models.MicrosoftLanguageTokenizer

public final class MicrosoftLanguageTokenizer
extends LexicalTokenizer
Divides text using language-specific rules.
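For orientation, here is a minimal sketch of constructing and configuring this tokenizer with the fluent setters documented below; the tokenizer name and language choice are illustrative, not required values.

```java
import com.azure.search.documents.indexes.models.MicrosoftLanguageTokenizer;
import com.azure.search.documents.indexes.models.MicrosoftTokenizerLanguage;

public class TokenizerExample {
    public static void main(String[] args) {
        // Create a tokenizer named "my-tokenizer" (name is illustrative) and
        // configure it via the fluent setters; each setter returns the object itself.
        MicrosoftLanguageTokenizer tokenizer = new MicrosoftLanguageTokenizer("my-tokenizer")
                .setLanguage(MicrosoftTokenizerLanguage.FRENCH) // default is English
                .setMaxTokenLength(300)                         // default is 255, maximum is 300
                .setIsSearchTokenizer(false);                   // false = indexing tokenizer (default)

        System.out.println(tokenizer.getLanguage());       // the configured language
        System.out.println(tokenizer.getMaxTokenLength()); // 300
    }
}
```

Because each setter returns the object itself, configuration can be chained in a single expression.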
Constructor Summary
| Constructor | Description |
|---|---|
| MicrosoftLanguageTokenizer(String name) | Creates an instance of MicrosoftLanguageTokenizer class. |
Method Summary
| Modifier and Type | Method and Description |
|---|---|
| static MicrosoftLanguageTokenizer | fromJson(JsonReader jsonReader) Reads an instance of MicrosoftLanguageTokenizer from the JsonReader. |
| MicrosoftTokenizerLanguage | getLanguage() Get the language property: The language to use. |
| Integer | getMaxTokenLength() Get the maxTokenLength property: The maximum token length. |
| String | getOdataType() Get the odataType property: A URI fragment specifying the type of tokenizer. |
| Boolean | isSearchTokenizer() Get the isSearchTokenizer property: A value indicating how the tokenizer is used. |
| MicrosoftLanguageTokenizer | setIsSearchTokenizer(Boolean isSearchTokenizer) Set the isSearchTokenizer property: A value indicating how the tokenizer is used. |
| MicrosoftLanguageTokenizer | setLanguage(MicrosoftTokenizerLanguage language) Set the language property: The language to use. |
| MicrosoftLanguageTokenizer | setMaxTokenLength(Integer maxTokenLength) Set the maxTokenLength property: The maximum token length. |
| JsonWriter | toJson(JsonWriter jsonWriter) |
Methods inherited from LexicalTokenizer
Methods inherited from java.lang.Object
Constructor Details
MicrosoftLanguageTokenizer
public MicrosoftLanguageTokenizer(String name)
Creates an instance of MicrosoftLanguageTokenizer class.
Parameters:
name - the name of the tokenizer.
Method Details
fromJson
public static MicrosoftLanguageTokenizer fromJson(JsonReader jsonReader)
Reads an instance of MicrosoftLanguageTokenizer from the JsonReader.
Parameters:
jsonReader - The JsonReader being read.
Returns:
An instance of MicrosoftLanguageTokenizer if the JsonReader was pointing to an instance of it, or null if it was pointing to JSON null.
Throws:
IOException - If an error occurs while reading the MicrosoftLanguageTokenizer.
getLanguage
public MicrosoftTokenizerLanguage getLanguage()
Get the language property: The language to use. The default is English.
Returns:
the language value.
getMaxTokenLength
public Integer getMaxTokenLength()
Get the maxTokenLength property: The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
Returns:
the maxTokenLength value.
getOdataType
public String getOdataType()
Get the odataType property: A URI fragment specifying the type of tokenizer.
Overrides:
LexicalTokenizer.getOdataType()
Returns:
the odataType value.
isSearchTokenizer
public Boolean isSearchTokenizer()
Get the isSearchTokenizer property: A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
Returns:
the isSearchTokenizer value.
setIsSearchTokenizer
public MicrosoftLanguageTokenizer setIsSearchTokenizer(Boolean isSearchTokenizer)
Set the isSearchTokenizer property: A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
Parameters:
isSearchTokenizer - the isSearchTokenizer value to set.
Returns:
the MicrosoftLanguageTokenizer object itself.
setLanguage
public MicrosoftLanguageTokenizer setLanguage(MicrosoftTokenizerLanguage language)
Set the language property: The language to use. The default is English.
Parameters:
language - the language value to set.
Returns:
the MicrosoftLanguageTokenizer object itself.
setMaxTokenLength
public MicrosoftLanguageTokenizer setMaxTokenLength(Integer maxTokenLength)
Set the maxTokenLength property: The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
Parameters:
maxTokenLength - the maxTokenLength value to set.
Returns:
the MicrosoftLanguageTokenizer object itself.
toJson
public JsonWriter toJson(JsonWriter jsonWriter)
Overrides:
LexicalTokenizer.toJson(JsonWriter jsonWriter)
Parameters:
jsonWriter - The JsonWriter being written to.
Returns:
The JsonWriter where the JSON was written.
Throws:
IOException - If an error occurs while writing.
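The toJson and fromJson methods above allow a round trip through JSON. A minimal sketch, assuming the JsonProviders factory from the com.azure:azure-json dependency is available; the tokenizer name and max token length are illustrative values.

```java
import com.azure.json.JsonProviders;
import com.azure.json.JsonReader;
import com.azure.json.JsonWriter;
import com.azure.search.documents.indexes.models.MicrosoftLanguageTokenizer;
import java.io.IOException;
import java.io.StringWriter;

public class SerializationExample {
    public static void main(String[] args) throws IOException {
        MicrosoftLanguageTokenizer original =
                new MicrosoftLanguageTokenizer("my-tokenizer").setMaxTokenLength(200);

        // Serialize the tokenizer to a JSON string with toJson.
        StringWriter out = new StringWriter();
        try (JsonWriter writer = JsonProviders.createWriter(out)) {
            original.toJson(writer);
        } // closing the writer flushes the JSON to the StringWriter
        String json = out.toString();

        // Deserialize it back with the static fromJson.
        try (JsonReader reader = JsonProviders.createReader(json)) {
            MicrosoftLanguageTokenizer roundTripped = MicrosoftLanguageTokenizer.fromJson(reader);
            System.out.println(roundTripped.getMaxTokenLength()); // 200
        }
    }
}
```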