Interface PatternTokenizer

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

interface PatternTokenizer {
    flags?: (
        | "CANON_EQ"
        | "CASE_INSENSITIVE"
        | "COMMENTS"
        | "DOTALL"
        | "LITERAL"
        | "MULTILINE"
        | "UNICODE_CASE"
        | "UNIX_LINES")[];
    group?: number;
    name: string;
    odatatype: "#Microsoft.Azure.Search.PatternTokenizer";
    pattern?: string;
}
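For illustration, a minimal sketch of a tokenizer definition that treats commas as token separators. The property names follow the interface above; the analyzer and index definitions this would be embedded in are assumed and not shown.

```typescript
// Sketch of a PatternTokenizer definition. The name "comma-tokenizer"
// and the choice of pattern are illustrative, not from the SDK.
const commaTokenizer = {
  odatatype: "#Microsoft.Azure.Search.PatternTokenizer" as const,
  name: "comma-tokenizer",
  pattern: ",",   // split the input on each comma
  group: -1       // -1: use the whole pattern as a separator
};
```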

Properties

flags?: (
    | "CANON_EQ"
    | "CASE_INSENSITIVE"
    | "COMMENTS"
    | "DOTALL"
    | "LITERAL"
    | "MULTILINE"
    | "UNICODE_CASE"
    | "UNIX_LINES")[]

Regular expression flags. Possible values include: 'CANON_EQ', 'CASE_INSENSITIVE', 'COMMENTS', 'DOTALL', 'LITERAL', 'MULTILINE', 'UNICODE_CASE', 'UNIX_LINES'

group?: number

The zero-based ordinal of the matching group in the regular expression pattern to extract into tokens. Use -1 if you want to use the entire pattern to split the input into tokens, irrespective of matching groups. Default value: -1.
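The two modes of group can be sketched with a JavaScript RegExp (the service itself evaluates Java regexes, but the semantics are analogous): with -1 the pattern marks separators and the input is split; with a non-negative value, each match's capturing group of that ordinal becomes a token.

```typescript
// group = -1: the pattern is a separator, so the input is split on it.
const splitTokens = "one,two,three".split(/,/);
// → ["one", "two", "three"]

// group = 1: the first capturing group of each match becomes a token.
const groupTokens = Array.from(
  "id=42;id=77".matchAll(/id=(\d+)/g),
  m => m[1]
);
// → ["42", "77"]
```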

name: string

The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
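These naming rules can be checked client-side before sending a definition to the service. The helper below is hypothetical (not part of the SDK) and simply mirrors the rules stated above.

```typescript
// Hypothetical validator for the naming rules: letters, digits,
// spaces, dashes, or underscores only; alphanumeric first and last
// characters; at most 128 characters total.
function isValidTokenizerName(name: string): boolean {
  return /^[a-zA-Z0-9]([a-zA-Z0-9 _-]{0,126}[a-zA-Z0-9])?$/.test(name);
}
```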

odatatype: "#Microsoft.Azure.Search.PatternTokenizer"

Polymorphic discriminator, which identifies this object as a pattern tokenizer.

pattern?: string

A regular expression pattern to match token separators. Default is an expression that matches one or more non-word characters. Default value: \W+.
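The effect of the default pattern can be sketched with a JavaScript RegExp, where \W has the same meaning as in Java regexes: any character that is not a letter, digit, or underscore.

```typescript
// The default pattern \W+ treats each run of non-word characters
// as a separator, so punctuation and spaces are stripped while
// underscores are kept inside tokens.
const tokens = "Hello, world! foo_bar".split(/\W+/);
// → ["Hello", "world", "foo_bar"]
```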