Specifies the text to analyze and the analysis components used to break that text into tokens.

interface AnalyzeRequest {
    analyzerName?: string;
    charFilters?: string[];
    normalizerName?: string;
    text: string;
    tokenFilters?: string[];
    tokenizerName?: string;
}

Properties

analyzerName?: string

The name of the analyzer to use to break the given text. If this parameter is not specified, you must specify a tokenizer instead. The tokenizer and analyzer parameters are mutually exclusive. KnownAnalyzerNames is an enum containing built-in analyzer names. NOTE: Either analyzerName or tokenizerName is required in an AnalyzeRequest.
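A request built around a named analyzer might look like the following sketch. The import path (@azure/search-documents) and the KnownAnalyzerNames.StandardLucene member are assumptions based on the Azure Search JavaScript SDK; substitute whatever your package actually exports.

import { AnalyzeRequest, KnownAnalyzerNames } from "@azure/search-documents";

// Break the text with a built-in analyzer; tokenizerName is omitted because
// analyzerName and tokenizerName are mutually exclusive.
const analyzerRequest: AnalyzeRequest = {
    text: "The quick brown fox jumps over the lazy dog.",
    analyzerName: KnownAnalyzerNames.StandardLucene, // assumed member, resolves to "standard.lucene"
};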

charFilters?: string[]

An optional list of character filters to use when breaking the given text. This parameter can only be set when using the tokenizer parameter.

normalizerName?: string

The name of the normalizer to use to normalize the given text. KnownNormalizerNames is an enum containing built-in normalizer names.

text: string

The text to break into tokens.

tokenFilters?: string[]

An optional list of token filters to use when breaking the given text. This parameter can only be set when using the tokenizer parameter.

tokenizerName?: string

The name of the tokenizer to use to break the given text. If this parameter is not specified, you must specify an analyzer instead. The tokenizer and analyzer parameters are mutually exclusive. KnownTokenizerNames is an enum containing built-in tokenizer names. NOTE: Either analyzerName or tokenizerName is required in an AnalyzeRequest.
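Conversely, a request that names a tokenizer can also attach character and token filters, as in the sketch below. The import path and the specific built-in names ("whitespace", "html_strip", "lowercase") are assumptions for illustration; use the components your index actually defines.

import { AnalyzeRequest } from "@azure/search-documents";

// Break the text with an explicit tokenizer; charFilters and tokenFilters
// may only be combined with tokenizerName, not with analyzerName.
const tokenizerRequest: AnalyzeRequest = {
    text: "<p>The quick brown fox</p>",
    tokenizerName: "whitespace",   // assumed built-in whitespace tokenizer
    charFilters: ["html_strip"],   // assumed built-in filter: strip HTML markup before tokenizing
    tokenFilters: ["lowercase"],   // assumed built-in filter: lowercase each resulting token
};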