All Classes and Interfaces
The name of the embedding model from the Azure AI Foundry Catalog that will be called.
The multi-region account of an Azure AI service resource that's attached to a skillset.
The account key of an Azure AI service resource that's attached to a skillset, to be used with the resource's
subdomain.
Specifies the AI Services Vision parameters for vectorizing a query image or text.
Specifies the AI Services Vision parameters for vectorizing a query image or text.
The name of the embedding model from the Azure AI Studio Catalog that will be called.
Information about a token returned by an analyzer.
Specifies some text and analysis components used to break that text into
tokens.
Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the
"Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist.
The result of Autocomplete requests.
Specifies the mode for Autocomplete.
Parameter group.
Implementation of PagedFluxBase where the element type is AutocompleteItem and the page type is AutocompletePagedResponse.
Implementation of PagedIterableBase where the element type is AutocompleteItem and the page type is AutocompletePagedResponse.
This class represents a response from the autocomplete API.
The result of Autocomplete query.
Specifies the properties for connecting to an AML vectorizer.
The AML skill allows you to extend AI enrichment with a custom Azure Machine Learning (AML) model.
Specifies an Azure Machine Learning endpoint deployed via the Azure AI Foundry Model Catalog for generating the
vector embedding of a query string.
Allows you to generate a vector embedding for a given text input using the Azure OpenAI resource.
The Azure OpenAI model name that will be called.
The AzureOpenAITokenizerParameters model.
Specifies the Azure OpenAI resource used to vectorize a query string.
Specifies the parameters for connecting to the Azure OpenAI resource.
Contains configuration options specific to the binary quantization compression method used during indexing and
querying.
Specifies the data to extract from Azure blob storage and tells the indexer which data to extract from image content
when "imageAction" is set to a value other than "none".
Determines how to process embedded images and image files in Azure blob storage.
Represents the parsing mode for indexing from an Azure blob data source.
Determines the algorithm for text extraction from PDF files in Azure blob storage.
Ranking function based on the Okapi BM25 similarity algorithm.
Base type for character filters.
Defines the names of all character filters supported by the search engine.
Forms bigrams of CJK terms that are generated from the standard tokenizer.
Scripts that can be ignored by CjkBigramTokenFilter.
Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF.
Grammar-based tokenizer that is suitable for processing most European-language documents.
Base type for describing any Azure AI service resource attached to a skillset.
The multi-region account key of an Azure AI service resource that's attached to a skillset.
Constructs bigrams for frequently occurring terms while indexing.
A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.
Defines options to control Cross-Origin Resource Sharing (CORS) for an index.
Allows you to take control over the process of converting text into indexable/searchable tokens.
An object that contains information about the matches that were found, and related metadata.
A complex object that can be used to specify alternative spellings or synonyms to the root entity name.
A skill that looks for text from a custom, user-defined list of words and phrases.
The language codes supported for input text by CustomEntityLookupSkill.
Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with
strict matching.
Base type for data change detection policies.
Base type for data deletion detection policies.
Contains debugging information that can be used to further explore your search results.
An empty object that represents the default Azure AI service resource for a skillset.
Decomposes compound words found in many Germanic languages.
Defines a function that boosts scores based on distance from a geographic location.
Provides parameter values to a distance scoring function.
Contains debugging information that can be used to further explore your search results.
A skill that extracts content from a file within the enrichment pipeline.
A skill that extracts content and layout information (as markdown), via Azure AI Services, from files within the
enrichment pipeline.
The depth of headers in the markdown output.
Controls the cardinality of the output produced by the skill.
Generates n-grams of the given size(s) starting from the front or the back
of an input token.
Specifies which side of the input an n-gram should be generated from.
Tokenizes the input from an edge into n-grams of the given size(s).
Removes elisions.
A string indicating what entity categories to return.
Using the Text Analytics API, extracts linked entities from text.
Text analytics entity recognition.
Deprecated.
Represents the version of EntityRecognitionSkill.
Contains configuration options specific to the exhaustive KNN algorithm used during querying, which will perform brute-force search across the entire vector index.
Contains the parameters specific to exhaustive KNN algorithm.
A single bucket of a facet query result.
Marker annotation that indicates the field or method is to be ignored by converting to SearchField.
Additional parameters to build SearchField.
Defines a mapping between a field in a data source and a target field in an index.
Represents a function that transforms a value from a data source before indexing.
Defines a function that boosts scores based on the value of a date-time field.
Provides parameter values to a freshness scoring function.
Defines a data change detection policy that captures changes based on the value of a high water mark column.
Contains configuration options specific to the HNSW approximate nearest neighbors algorithm used during indexing and
querying.
Contains the parameters specific to the HNSW algorithm.
Determines whether the count and facets should include all documents that matched the search query, or only the documents that are retrieved within the 'maxTextRecallSize' window.
The query parameters to configure hybrid search behaviors.
A skill that analyzes image files.
The language codes supported for input by ImageAnalysisSkill.
A string indicating which domain-specific details to return.
Represents an index action that operates on a document.
The operation to perform on a document in an indexing batch.
Contains a batch of document write actions to send to the index.
An IndexBatchException is thrown whenever an Azure AI Search index call was only partially successful.
Contains a batch of document write actions to send to the index.
Options for document index operations.
Response containing the status of operations for all documents in the indexing request.
Represents all of the state that defines and dictates the indexer's current execution.
Specifies the environment in which the indexer should execute.
Represents the result of an individual indexer execution.
Represents the status of an individual indexer execution.
Details the status of an individual indexer execution.
Represents the overall indexer status.
Represents the mode the indexer is executing in.
Represents parameters for indexer execution.
A dictionary of indexer-specific configuration properties.
Status of an indexing operation for a single document.
Represents a schedule for indexer execution.
Defines behavior of the index projections in relation to the rest of the indexer.
Statistics for a given index.
Input field mapping for a skill.
A token filter that only keeps tokens with text contained in a specified list of words.
A skill that uses text analytics for key phrase extraction.
The language codes supported for input text by KeyPhraseExtractionSkill.
Marks terms as keywords.
Emits the entire input as a single token.
A skill that detects the language of input text and reports a single language code for every document submitted on
the request.
Removes words that are too long or too short.
Base type for analyzers.
Defines the names of all text analyzers supported by the search engine.
Base type for normalizers.
Defines the names of all text normalizers supported by the search engine.
Base type for tokenizers.
Defines the names of all tokenizers supported by the search engine.
Limits the number of tokens while indexing.
Response from a request to retrieve stats summary of all indexes.
Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter.
Breaks text following the Unicode Text Segmentation rules.
Defines a function that boosts scores based on the magnitude of a numeric field.
Provides parameter values to a magnitude scoring function.
A character filter that applies mappings defined with the mappings option.
Specifies the max header depth that will be considered while grouping markdown content.
Specifies the submode that will determine whether a markdown file will be parsed into exactly one search document or
multiple search documents.
A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter
separating each component part.
Divides text using language-specific rules and reduces words to their base forms.
Divides text using language-specific rules.
Lists the languages supported by the Microsoft language stemming tokenizer.
Lists the languages supported by the Microsoft language tokenizer.
Defines a data deletion detection policy utilizing Azure Blob Storage's native soft delete feature for deletion
detection.
Generates n-grams of the given size(s).
Tokenizes the input into n-grams of the given size(s).
Defines the sequence of characters to use between the lines of text recognized by the OCR skill.
A skill that extracts text from image files.
The language codes supported for input by OcrSkill.
Options passed when SearchClientBuilder.SearchIndexingBufferedSenderBuilder.onActionAdded(Consumer) is called.
Options passed when SearchClientBuilder.SearchIndexingBufferedSenderBuilder.onActionError(Consumer) is called.
Options passed when SearchClientBuilder.SearchIndexingBufferedSenderBuilder.onActionSent(Consumer) is called.
Options passed when SearchClientBuilder.SearchIndexingBufferedSenderBuilder.onActionSucceeded(Consumer) is called.
Output field mapping for a skill.
Tokenizer for path-like hierarchies.
Flexibly separates text into terms via a regular expression pattern.
Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns.
A character filter that replaces characters in the input string.
A character filter that replaces characters in the input string.
Tokenizer that uses regex pattern matching to construct distinct tokens.
Identifies the type of phonetic encoder to use with a PhoneticTokenFilter.
Create tokens for phonetic matches.
Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking
it.
A string indicating what maskingMode to use to mask the personal information detected in the input text.
Configuration for how semantic search returns answers to the search.
An answer is a text passage extracted from the contents of the most relevant documents that matched the query.
This parameter is only valid if the query type is `semantic`.
Configuration for how semantic search captions search results.
Captions are the most representative passages from the document relative to the search query.
This parameter is only valid if the query type is `semantic`.
Enables a debugging tool that can be used to further explore your search results.
The language of the query.
The raw concatenated strings that were sent to the semantic enrichment process.
Description of fields that were sent to the semantic enrichment process, as well as how they were used.
The breakdown of subscores between the text and vector query components of the search query for this document.
Configuration for how semantic search rewrites a query.
Contains debugging information specific to query rewrites.
This parameter is only valid if the query type is `semantic`.
Contains debugging information specific to query rewrites.
Improve search recall by spell-correcting individual search query terms.
Specifies the syntax of the search query.
A single bucket of a range facet query result that reports the number of documents with a field value falling within
a particular range.
Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern
tokenizer.
Contains the options for rescoring.
Represents a resource's usage and quota.
Contains configuration options specific to the scalar quantization compression method used during indexing and
querying.
Contains the parameters specific to Scalar Quantization.
Base type for functions that can modify document scores during ranking.
Defines the aggregation function used to combine the results of all the scoring functions in a scoring profile.
Defines the function used to interpolate score boosting across a range of documents.
Represents a parameter value to be used in scoring functions (for example, referencePointParameter).
Defines parameters for a search index that influence scoring in search queries.
A value that specifies whether we want to calculate scoring statistics (such as document frequency) globally for more
consistent scoring, or locally, for lower latency.
An annotation that directs SearchIndexAsyncClient.buildSearchFields(Class, FieldBuilderOptions) to turn the field or method into a searchable field.
Represents an index alias, which describes a mapping from the alias name to an index.
This class provides a client that contains the operations for querying an index and uploading, merging, or deleting
documents in an Azure AI Search service.
Cloud audiences available for Search.
This class provides a client that contains the operations for querying an index and uploading, merging, or deleting
documents in an Azure AI Search service.
This class provides a fluent builder API to aid the configuration and instantiation of SearchClients and SearchAsyncClients.
Represents an untyped document returned from a search or document lookup.
Represents a field in an index definition, which describes the name, data type, and search behavior of a field.
Defines the data type of a field in a search index.
This class is used to help construct valid OData filter expressions by automatically replacing, quoting, and escaping
string parameters.
Represents a search index definition, which describes the fields and search behavior of an index.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting
indexes or synonym map and analyzing text in an Azure AI Search service.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting
indexes or synonym map and analyzing text in an Azure AI Search service.
This class provides a fluent builder API to aid the configuration and instantiation of SearchIndexClients and SearchIndexAsyncClients.
Represents an indexer.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting data
source connections, indexers, or skillsets and running or resetting indexers in an Azure AI Search service.
The SearchIndexerCache model.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting data
source connections, indexers, or skillsets and running or resetting indexers in an Azure AI Search service.
This class provides a fluent builder API to aid the configuration and instantiation of SearchIndexerClients and SearchIndexerAsyncClients.
Represents information about the entity (such as an Azure SQL table or a CosmosDB collection) that will be indexed.
Abstract base type for data identities.
Clears the identity property of a datasource.
Represents a datasource definition, which can be used to configure an indexer.
Utility class that aids in the creation of SearchIndexerDataSourceConnections.
Defines the type of a datasource.
Specifies the identity for a datasource to use.
Represents an item- or document-level indexing error.
Definition of additional projections to secondary search indexes.
Description for what data to store in the designated search index.
A dictionary of index projection-specific configuration properties.
Definition of additional projections of enriched data to Azure blobs, tables, or files.
Abstract class to share properties between concrete selectors.
Projection definition for what data to store in Azure Files.
Projection definition for what data to store in Azure Blob.
A dictionary of knowledge store-specific configuration properties.
Container object for various projection selectors.
Abstract class to share properties between concrete selectors.
Description for what data to store in Azure Tables.
The SearchIndexerLimits model.
Base type for skills.
A list of skills.
Represents the current status and execution history of an indexer.
Represents an item-level warning.
This class provides a buffered sender that contains operations for conveniently indexing documents to an Azure Search
index.
This class provides a buffered sender that contains operations for conveniently indexing documents to an Azure Search
index.
Statistics for a given index.
Specifies whether any or all of the search terms must be matched in order to count the document as a match.
Additional parameters for searchGet operation.
Implementation of ContinuablePagedFlux where the continuation token type is SearchRequest, the element type is SearchResult, and the page type is SearchPagedResponse.
Implementation of ContinuablePagedIterable where the continuation token type is SearchRequest, the element type is SearchResult, and the page type is SearchPagedResponse.
Represents an HTTP response from the search API request that contains a list of items deserialized into a Page.
A customer-managed encryption key in Azure Key Vault.
Contains a document found by a search query, plus associated metadata.
The results of the vector query will be filtered based on the '@search.score' value.
Represents service-level resource counters and quotas.
Represents various service level limits.
Response from a get service statistics request.
The versions of Azure AI Search supported by this client library.
Defines how the Suggest API should apply to a group of fields in the index.
Defines a specific configuration to be used in the context of semantic capabilities.
The SemanticDebugInfo model.
Allows the user to choose whether a semantic call should fail completely, or to return partial results.
Reason that a partial response was returned for a semantic ranking request.
A field that is used as part of the semantic configuration.
The way the field was used for the semantic enrichment process.
Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers.
Type of query rewrite that was used for this request.
Defines parameters for a search index that influence semantic capabilities.
Parameters for performing vector searches.
The document-level results for a semantic search.
The page-level results for a semantic search.
Type of partial response that was returned for a semantic ranking request.
Text analytics positive-negative sentiment analysis, scored as a floating-point value in a range of 0 to 1.
Deprecated.
Represents the version of SentimentSkill.
A skill for reshaping the outputs.
Creates combinations of tokens as a single token.
Base type for similarity algorithms.
An annotation that directs SearchIndexAsyncClient.buildSearchFields(Class, FieldBuilderOptions) to turn the field or method into a non-searchable field.
A single vector field result.
A filter that stems words using a Snowball-generated stemmer.
The language to use for a Snowball token filter.
Defines a data deletion detection policy that implements a soft-deletion strategy.
A skill to split a string into chunks of text.
A value indicating which tokenizer to use.
The language codes supported for input text by SplitSkill.
A value indicating which unit to use.
Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure
SQL Database.
Provides the ability to override other stemming filters with custom dictionary-based stemming.
Language specific stemming filter.
The language to use for a stemmer token filter.
Divides text at non-letters; applies the lowercase and stopword token filters.
Identifies a predefined list of language-specific stopwords.
Removes stop words from a token stream.
Parameter group.
Implementation of PagedFluxBase where the element type is SuggestResult and the page type is SuggestPagedResponse.
Implementation of PagedIterableBase where the element type is SuggestResult and the page type is SuggestPagedResponse.
Represents an HTTP response from the suggest API request that contains a list of items deserialized into a Page.
A result containing a document found by a suggestion query, plus associated metadata.
Represents a synonym map definition.
Matches single or multi-word synonyms in a token stream.
Defines a function that boosts scores of documents with string values matching a given list of tags.
Provides parameter values to a tag scoring function.
The BM25 or Classic score for the text portion of the query.
A value indicating which split mode to perform.
A skill to translate text from one language to another.
The language codes supported for input text by TextTranslationSkill.
Defines weights on index fields for which matches should boost scoring in search queries.
Represents classes of characters on which a token filter can operate.
Base type for token filters.
Defines the names of all token filters supported by the search engine.
Truncates the terms to a specific length.
Tokenizes URLs and emails as one token.
Filters out tokens with same text as the previous token.
A single bucket of a simple or interval facet query result that reports the number of documents with a field falling
within a particular interval or having a specific value.
The encoding format for interpreting vector field contents.
Determines whether filters are applied before or after the vector search is performed.
The query parameters to use for vector search when a base 64 encoded binary of an image that needs to be vectorized
is provided.
The query parameters to use for vector search when a URL that represents an image value that needs to be vectorized is provided.
The query parameters to use for vector search when a text value that needs to be vectorized is provided.
The query parameters to use for vector search when a raw vector value is provided.
The query parameters for vector and hybrid search queries.
The kind of vector query being performed.
The VectorsDebugInfo model.
Contains configuration options related to vector search.
Contains configuration options specific to the algorithm used during indexing or querying.
The algorithm used for indexing and querying.
The similarity metric to use for vector comparisons.
Contains configuration options specific to the compression method used during indexing or querying.
The compression method used for indexing and querying.
The storage method for the original full-precision vectors used for rescoring and internal index operations.
The quantized data type of compressed vector values.
Parameters for performing vector searches.
Defines a combination of configurations to use with vector search.
Specifies the vectorization method to be used during query time.
The vectorization method to be used during query time.
The results of the vector query will be filtered based on the vector similarity metric.
The threshold used for vector queries.
The kind of vector query being performed.
Allows you to generate a vector embedding for a given image or text input using the Azure AI Services Vision
Vectorize API.
The strings indicating what visual feature types to return.
A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.
Specifies a user-defined vectorizer for generating the vector embedding of a query string.
Specifies the properties for connecting to a user-defined vectorizer.
Splits words into subwords and performs optional transformations on subword groups.