Interface DictionaryDecompounderTokenFilter

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

interface DictionaryDecompounderTokenFilter {
    maxSubwordSize?: number;
    minSubwordSize?: number;
    minWordSize?: number;
    name: string;
    odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter";
    onlyLongestMatch?: boolean;
    wordList: string[];
}

Properties

maxSubwordSize?: number

The maximum subword size. Only subwords shorter than this are output. Default is 15. Maximum is 300.

minSubwordSize?: number

The minimum subword size. Only subwords longer than this are output. Default is 2. Maximum is 300.

minWordSize?: number

The minimum word size. Only words longer than this are processed. Default is 5. Maximum is 300.

name: string

The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter"

Polymorphic discriminator that identifies the concrete token filter type. For this interface the value is always "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter".

onlyLongestMatch?: boolean

A value indicating whether to add only the longest matching subword to the output. Default is false.

wordList: string[]

The list of words to match against.
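
Example

A minimal sketch of how a filter satisfying this interface might be defined. The filter name "german-decompounder" and the word list are illustrative, and the interface is reproduced locally so the snippet is self-contained; in real code it is imported from @azure/search-documents and the filter would be added to an index's tokenFilters array.

```typescript
// The interface as documented above, declared locally for a self-contained
// example (normally imported from @azure/search-documents).
interface DictionaryDecompounderTokenFilter {
    maxSubwordSize?: number;
    minSubwordSize?: number;
    minWordSize?: number;
    name: string;
    odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter";
    onlyLongestMatch?: boolean;
    wordList: string[];
}

// A hypothetical filter intended to split a German compound such as
// "fussballschuh" into subwords found in the dictionary below.
const germanDecompounder: DictionaryDecompounderTokenFilter = {
    name: "german-decompounder", // illustrative name: letters, digits, dashes only
    odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter",
    wordList: ["fussball", "ball", "schuh"], // dictionary of subwords to match
    minWordSize: 5,      // only tokens longer than 5 characters are processed
    minSubwordSize: 2,   // only subwords longer than 2 characters are output
    maxSubwordSize: 15,  // only subwords shorter than 15 characters are output
    onlyLongestMatch: false,
};

console.log(germanDecompounder.name, germanDecompounder.wordList.length);
```

Because odatatype is a string literal type, assigning any other value to it is a compile-time error, which is how the SDK distinguishes this filter from the other token filter variants in the union.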