azure.ai.vision.imageanalysis package

class azure.ai.vision.imageanalysis.ImageAnalysisClient(endpoint: str, credential: AzureKeyCredential | TokenCredential, **kwargs: Any)[source]

A synchronous client for performing Image Analysis operations against the Azure AI Vision service.

Parameters:
  • endpoint (str) – Azure AI Computer Vision endpoint (protocol and hostname, for example: https://<resource-name>.cognitiveservices.azure.com). Required.

  • credential (AzureKeyCredential or TokenCredential) – Credential used to authenticate requests to the service. This is either an AzureKeyCredential or a TokenCredential instance. Required.

Keyword Arguments:

api_version (str) – The API version to use for this operation. Default value is “2023-10-01”. Note that overriding this default value may result in unsupported behavior.

analyze(image_data: bytes, visual_features: List[VisualFeatures], *, language: str | None = None, gender_neutral_caption: bool | None = None, smart_crops_aspect_ratios: List[float] | None = None, model_version: str | None = None, **kwargs: Any) ImageAnalysisResult[source]

Performs a single Image Analysis operation on an image supplied as an in-memory buffer.

Parameters:
  • image_data (bytes) – A buffer containing the whole image to be analyzed. Required.

  • visual_features (list[VisualFeatures]) – A list of visual features to analyze. Required. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. At least one visual feature must be specified.

Keyword Arguments:
  • language (str) – The desired language for result generation (a two-letter language code). Defaults to ‘en’ (English). See https://aka.ms/cv-languages for a list of supported languages.

  • gender_neutral_caption (bool) – Boolean flag for enabling gender-neutral captioning for Caption and Dense Captions features. Defaults to ‘false’. Captions may contain gender terms (for example: ‘man’, ‘woman’, or ‘boy’, ‘girl’). If you set this to ‘true’, those will be replaced with gender-neutral terms (for example: ‘person’ or ‘child’).

  • smart_crops_aspect_ratios (list[float]) – A list of aspect ratios to use for smart cropping. Defaults to one crop region with an aspect ratio the service sees fit between 0.5 and 2.0 (inclusive). Aspect ratios are calculated by dividing the target crop width in pixels by the height in pixels. When set, supported values are between 0.75 and 1.8 (inclusive).

  • model_version (str) – The version of the cloud AI model used for analysis. Defaults to ‘latest’, the latest AI model with recent improvements. The format is one of: ‘latest’, ‘YYYY-MM-DD’, or ‘YYYY-MM-DD-preview’, where ‘YYYY’, ‘MM’, ‘DD’ are the year, month, and day associated with the model. To make sure analysis results do not change over time, set this value to a specific model version.

Returns:

ImageAnalysisResult. The ImageAnalysisResult is compatible with MutableMapping

Return type:

ImageAnalysisResult

Raises:

HttpResponseError

analyze_from_url(image_url: str, visual_features: List[VisualFeatures], *, language: str | None = None, gender_neutral_caption: bool | None = None, smart_crops_aspect_ratios: List[float] | None = None, model_version: str | None = None, **kwargs: Any) ImageAnalysisResult[source]

Performs a single Image Analysis operation on an image at the given publicly accessible URL.

Parameters:
  • image_url (str) – The publicly accessible URL of the image to analyze. Required.

  • visual_features (list[VisualFeatures]) – A list of visual features to analyze. Required. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. At least one visual feature must be specified.

Keyword Arguments:
  • language (str) – The desired language for result generation (a two-letter language code). Defaults to ‘en’ (English). See https://aka.ms/cv-languages for a list of supported languages.

  • gender_neutral_caption (bool) – Boolean flag for enabling gender-neutral captioning for Caption and Dense Captions features. Defaults to ‘false’. Captions may contain gender terms (for example: ‘man’, ‘woman’, or ‘boy’, ‘girl’). If you set this to ‘true’, those will be replaced with gender-neutral terms (for example: ‘person’ or ‘child’).

  • smart_crops_aspect_ratios (list[float]) – A list of aspect ratios to use for smart cropping. Defaults to one crop region with an aspect ratio the service sees fit between 0.5 and 2.0 (inclusive). Aspect ratios are calculated by dividing the target crop width in pixels by the height in pixels. When set, supported values are between 0.75 and 1.8 (inclusive).

  • model_version (str) – The version of the cloud AI model used for analysis. Defaults to ‘latest’, the latest AI model with recent improvements. The format is one of: ‘latest’, ‘YYYY-MM-DD’, or ‘YYYY-MM-DD-preview’, where ‘YYYY’, ‘MM’, ‘DD’ are the year, month, and day associated with the model. To make sure analysis results do not change over time, set this value to a specific model version.

Returns:

ImageAnalysisResult. The ImageAnalysisResult is compatible with MutableMapping

Return type:

ImageAnalysisResult

Raises:

HttpResponseError

close() None[source]

send_request(request: HttpRequest, *, stream: bool = False, **kwargs: Any) HttpResponse[source]

Runs the network request through the client’s chained policies.

>>> from azure.core.rest import HttpRequest
>>> request = HttpRequest("GET", "https://www.example.org/")
>>> request
<HttpRequest [GET], url: 'https://www.example.org/'>
>>> response = client.send_request(request)
>>> response
<HttpResponse: 200 OK>

For more information on this code flow, see https://aka.ms/azsdk/dpcodegen/python/send_request

Parameters:

request (HttpRequest) – The network request you want to make. Required.

Keyword Arguments:

stream (bool) – Whether the response payload will be streamed. Defaults to False.

Returns:

The response of your network call. Does not do error handling on your response.

Return type:

HttpResponse

Subpackages