Interface BaseLexicalTokenizer

Base type for tokenizers.

interface BaseLexicalTokenizer {
    name: string;
    odatatype:
        | "#Microsoft.Azure.Search.ClassicTokenizer"
        | "#Microsoft.Azure.Search.EdgeNGramTokenizer"
        | "#Microsoft.Azure.Search.KeywordTokenizer"
        | "#Microsoft.Azure.Search.KeywordTokenizerV2"
        | "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer"
        | "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer"
        | "#Microsoft.Azure.Search.NGramTokenizer"
        | "#Microsoft.Azure.Search.PathHierarchyTokenizerV2"
        | "#Microsoft.Azure.Search.PatternTokenizer"
        | "#Microsoft.Azure.Search.StandardTokenizer"
        | "#Microsoft.Azure.Search.StandardTokenizerV2"
        | "#Microsoft.Azure.Search.UaxUrlEmailTokenizer";
}
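
For illustration, a minimal sketch of an object that satisfies this interface, assuming the type is imported from the @azure/search-documents package. Only the two required base properties are set; concrete tokenizer types add their own configuration properties.

import type { BaseLexicalTokenizer } from "@azure/search-documents";

// A tokenizer definition carrying just the base properties:
// the discriminator and the tokenizer name.
const myTokenizer: BaseLexicalTokenizer = {
  odatatype: "#Microsoft.Azure.Search.StandardTokenizerV2",
  name: "my-standard-tokenizer",
};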

Properties

name: string

The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it must start and end with an alphanumeric character; and it is limited to 128 characters.
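
These naming rules can be checked client-side before creating an index. The helper below is a hypothetical sketch, not part of the SDK, and interprets "letters and digits" as ASCII alphanumerics.

// Letters, digits, spaces, dashes, or underscores; starts and ends with an
// alphanumeric character; at most 128 characters (1 + up to 126 + 1).
const TOKENIZER_NAME_PATTERN = /^[A-Za-z0-9](?:[A-Za-z0-9 _-]{0,126}[A-Za-z0-9])?$/;

function isValidTokenizerName(name: string): boolean {
  return TOKENIZER_NAME_PATTERN.test(name);
}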

odatatype:
    | "#Microsoft.Azure.Search.ClassicTokenizer"
    | "#Microsoft.Azure.Search.EdgeNGramTokenizer"
    | "#Microsoft.Azure.Search.KeywordTokenizer"
    | "#Microsoft.Azure.Search.KeywordTokenizerV2"
    | "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer"
    | "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer"
    | "#Microsoft.Azure.Search.NGramTokenizer"
    | "#Microsoft.Azure.Search.PathHierarchyTokenizerV2"
    | "#Microsoft.Azure.Search.PatternTokenizer"
    | "#Microsoft.Azure.Search.StandardTokenizer"
    | "#Microsoft.Azure.Search.StandardTokenizerV2"
    | "#Microsoft.Azure.Search.UaxUrlEmailTokenizer"

A polymorphic discriminator that specifies which concrete tokenizer type this object represents.
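
Because odatatype is a union of string literals, code can branch on it directly. A small sketch, using only the base interface documented here:

import type { BaseLexicalTokenizer } from "@azure/search-documents";

// Produce a short human-readable label based on the discriminator value.
function describeTokenizer(tokenizer: BaseLexicalTokenizer): string {
  switch (tokenizer.odatatype) {
    case "#Microsoft.Azure.Search.EdgeNGramTokenizer":
      return `${tokenizer.name}: edge n-gram tokenizer`;
    case "#Microsoft.Azure.Search.PatternTokenizer":
      return `${tokenizer.name}: regex pattern tokenizer`;
    default:
      return `${tokenizer.name}: ${tokenizer.odatatype}`;
  }
}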