azure.ai.vision.imageanalysis package¶
- class azure.ai.vision.imageanalysis.ImageAnalysisClient(endpoint: str, credential: AzureKeyCredential | TokenCredential, **kwargs: Any)[source]¶
Client for the Azure AI Vision Image Analysis service, used to run synchronous Image Analysis operations.
- Parameters:
endpoint (str) – Azure AI Computer Vision endpoint (protocol and hostname, for example: https://<resource-name>.cognitiveservices.azure.com). Required.
credential (AzureKeyCredential or TokenCredential) – Credential used to authenticate requests to the service. Is either an AzureKeyCredential type or a TokenCredential type. Required.
- Keyword Arguments:
api_version (str) – The API version to use for this operation. Default value is “2023-10-01”. Note that overriding this default value may result in unsupported behavior.
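A minimal construction sketch, assuming key-based authentication; the environment variable names VISION_ENDPOINT and VISION_KEY are placeholders, not part of this package:

import os
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Hypothetical environment variables holding your resource settings.
endpoint = os.environ["VISION_ENDPOINT"]
key = os.environ["VISION_KEY"]

client = ImageAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))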
- analyze(image_data: bytes, visual_features: List[VisualFeatures], *, language: str | None = None, gender_neutral_caption: bool | None = None, smart_crops_aspect_ratios: List[float] | None = None, model_version: str | None = None, **kwargs: Any) ImageAnalysisResult [source]¶
Performs a single Image Analysis operation.
- Parameters:
image_data (bytes) – A buffer containing the whole image to be analyzed. Required.
visual_features (list[VisualFeatures]) – A list of visual features to analyze. Required. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. At least one visual feature must be specified.
- Keyword Arguments:
language (str) – The desired language for result generation (a two-letter language code). Defaults to ‘en’ (English). See https://aka.ms/cv-languages for a list of supported languages.
gender_neutral_caption (bool) – Boolean flag for enabling gender-neutral captioning for the Caption and Dense Captions features. Defaults to False. Captions may contain gendered terms (for example: ‘man’, ‘woman’, ‘boy’, ‘girl’). If you set this to True, those terms are replaced with gender-neutral ones (for example: ‘person’ or ‘child’).
smart_crops_aspect_ratios (list[float]) – A list of aspect ratios to use for smart cropping. Defaults to one crop region with an aspect ratio the service deems suitable, between 0.5 and 2.0 (inclusive). An aspect ratio is the target crop width in pixels divided by the crop height in pixels. When set, supported values are between 0.75 and 1.8 (inclusive).
model_version (str) – The version of the cloud AI model used for analysis. Defaults to ‘latest’, the most recent model with the latest improvements. The format is ‘latest’, ‘YYYY-MM-DD’, or ‘YYYY-MM-DD-preview’, where ‘YYYY’, ‘MM’, and ‘DD’ are the year, month, and day associated with the model. To make sure analysis results do not change over time, set this value to a specific model version.
- Returns:
ImageAnalysisResult. The ImageAnalysisResult is compatible with MutableMapping.
- Return type:
ImageAnalysisResult
- Raises:
HttpResponseError
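A hedged sketch of a single analyze() call on local image bytes, reusing the client constructed above; the file name sample.jpg is a placeholder:

from azure.ai.vision.imageanalysis.models import VisualFeatures

# Read the whole image into memory; analyze() takes raw bytes.
with open("sample.jpg", "rb") as f:
    image_data = f.read()

result = client.analyze(
    image_data=image_data,
    visual_features=[VisualFeatures.CAPTION, VisualFeatures.READ],
    gender_neutral_caption=True,
)

if result.caption is not None:
    print(f"Caption: '{result.caption.text}' (confidence: {result.caption.confidence:.4f})")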
- analyze_from_url(image_url: str, visual_features: List[VisualFeatures], *, language: str | None = None, gender_neutral_caption: bool | None = None, smart_crops_aspect_ratios: List[float] | None = None, model_version: str | None = None, **kwargs: Any) ImageAnalysisResult [source]¶
Performs a single Image Analysis operation.
- Parameters:
image_url (str) – The publicly accessible URL of the image to analyze. Required.
visual_features (list[VisualFeatures]) – A list of visual features to analyze. Required. Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People. At least one visual feature must be specified.
- Keyword Arguments:
language (str) – The desired language for result generation (a two-letter language code). Defaults to ‘en’ (English). See https://aka.ms/cv-languages for a list of supported languages.
gender_neutral_caption (bool) – Boolean flag for enabling gender-neutral captioning for the Caption and Dense Captions features. Defaults to False. Captions may contain gendered terms (for example: ‘man’, ‘woman’, ‘boy’, ‘girl’). If you set this to True, those terms are replaced with gender-neutral ones (for example: ‘person’ or ‘child’).
smart_crops_aspect_ratios (list[float]) – A list of aspect ratios to use for smart cropping. Defaults to one crop region with an aspect ratio the service deems suitable, between 0.5 and 2.0 (inclusive). An aspect ratio is the target crop width in pixels divided by the crop height in pixels. When set, supported values are between 0.75 and 1.8 (inclusive).
model_version (str) – The version of the cloud AI model used for analysis. Defaults to ‘latest’, the most recent model with the latest improvements. The format is ‘latest’, ‘YYYY-MM-DD’, or ‘YYYY-MM-DD-preview’, where ‘YYYY’, ‘MM’, and ‘DD’ are the year, month, and day associated with the model. To make sure analysis results do not change over time, set this value to a specific model version.
- Returns:
ImageAnalysisResult. The ImageAnalysisResult is compatible with MutableMapping.
- Return type:
ImageAnalysisResult
- Raises:
HttpResponseError
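The URL variant is otherwise identical; a sketch with a placeholder image URL, again reusing the client from above:

from azure.ai.vision.imageanalysis.models import VisualFeatures

result = client.analyze_from_url(
    image_url="https://example.com/sample.jpg",  # placeholder URL
    visual_features=[VisualFeatures.TAGS, VisualFeatures.OBJECTS],
)

if result.tags is not None:
    for tag in result.tags.list:  # list of DetectedTag
        print(f"{tag.name}: {tag.confidence:.4f}")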
- send_request(request: HttpRequest, *, stream: bool = False, **kwargs: Any) HttpResponse [source]¶
Runs the network request through the client’s chained policies.
>>> from azure.core.rest import HttpRequest
>>> request = HttpRequest("GET", "https://www.example.org/")
<HttpRequest [GET], url: 'https://www.example.org/'>
>>> response = client.send_request(request)
<HttpResponse: 200 OK>
For more information on this code flow, see https://aka.ms/azsdk/dpcodegen/python/send_request
- Parameters:
request (HttpRequest) – The network request you want to make. Required.
- Keyword Arguments:
stream (bool) – Whether the response payload will be streamed. Defaults to False.
- Returns:
The response of your network call. Does not do error handling on your response.
- Return type:
HttpResponse
Subpackages¶
- azure.ai.vision.imageanalysis.aio package
- azure.ai.vision.imageanalysis.models package
CaptionResult
CropRegion
DenseCaption
DenseCaption.as_dict()
DenseCaption.clear()
DenseCaption.copy()
DenseCaption.get()
DenseCaption.items()
DenseCaption.keys()
DenseCaption.pop()
DenseCaption.popitem()
DenseCaption.setdefault()
DenseCaption.update()
DenseCaption.values()
DenseCaption.bounding_box
DenseCaption.confidence
DenseCaption.text
DenseCaptionsResult
DenseCaptionsResult.as_dict()
DenseCaptionsResult.clear()
DenseCaptionsResult.copy()
DenseCaptionsResult.get()
DenseCaptionsResult.items()
DenseCaptionsResult.keys()
DenseCaptionsResult.pop()
DenseCaptionsResult.popitem()
DenseCaptionsResult.setdefault()
DenseCaptionsResult.update()
DenseCaptionsResult.values()
DenseCaptionsResult.list
DetectedObject
DetectedObject.as_dict()
DetectedObject.clear()
DetectedObject.copy()
DetectedObject.get()
DetectedObject.items()
DetectedObject.keys()
DetectedObject.pop()
DetectedObject.popitem()
DetectedObject.setdefault()
DetectedObject.update()
DetectedObject.values()
DetectedObject.bounding_box
DetectedObject.tags
DetectedPerson
DetectedPerson.as_dict()
DetectedPerson.clear()
DetectedPerson.copy()
DetectedPerson.get()
DetectedPerson.items()
DetectedPerson.keys()
DetectedPerson.pop()
DetectedPerson.popitem()
DetectedPerson.setdefault()
DetectedPerson.update()
DetectedPerson.values()
DetectedPerson.bounding_box
DetectedPerson.confidence
DetectedTag
DetectedTextBlock
DetectedTextBlock.as_dict()
DetectedTextBlock.clear()
DetectedTextBlock.copy()
DetectedTextBlock.get()
DetectedTextBlock.items()
DetectedTextBlock.keys()
DetectedTextBlock.pop()
DetectedTextBlock.popitem()
DetectedTextBlock.setdefault()
DetectedTextBlock.update()
DetectedTextBlock.values()
DetectedTextBlock.lines
DetectedTextLine
DetectedTextLine.as_dict()
DetectedTextLine.clear()
DetectedTextLine.copy()
DetectedTextLine.get()
DetectedTextLine.items()
DetectedTextLine.keys()
DetectedTextLine.pop()
DetectedTextLine.popitem()
DetectedTextLine.setdefault()
DetectedTextLine.update()
DetectedTextLine.values()
DetectedTextLine.bounding_polygon
DetectedTextLine.text
DetectedTextLine.words
DetectedTextWord
DetectedTextWord.as_dict()
DetectedTextWord.clear()
DetectedTextWord.copy()
DetectedTextWord.get()
DetectedTextWord.items()
DetectedTextWord.keys()
DetectedTextWord.pop()
DetectedTextWord.popitem()
DetectedTextWord.setdefault()
DetectedTextWord.update()
DetectedTextWord.values()
DetectedTextWord.bounding_polygon
DetectedTextWord.confidence
DetectedTextWord.text
ImageAnalysisResult
ImageAnalysisResult.as_dict()
ImageAnalysisResult.clear()
ImageAnalysisResult.copy()
ImageAnalysisResult.get()
ImageAnalysisResult.items()
ImageAnalysisResult.keys()
ImageAnalysisResult.pop()
ImageAnalysisResult.popitem()
ImageAnalysisResult.setdefault()
ImageAnalysisResult.update()
ImageAnalysisResult.values()
ImageAnalysisResult.caption
ImageAnalysisResult.dense_captions
ImageAnalysisResult.metadata
ImageAnalysisResult.model_version
ImageAnalysisResult.objects
ImageAnalysisResult.people
ImageAnalysisResult.read
ImageAnalysisResult.smart_crops
ImageAnalysisResult.tags
ImageBoundingBox
ImageBoundingBox.as_dict()
ImageBoundingBox.clear()
ImageBoundingBox.copy()
ImageBoundingBox.get()
ImageBoundingBox.items()
ImageBoundingBox.keys()
ImageBoundingBox.pop()
ImageBoundingBox.popitem()
ImageBoundingBox.setdefault()
ImageBoundingBox.update()
ImageBoundingBox.values()
ImageBoundingBox.height
ImageBoundingBox.width
ImageBoundingBox.x
ImageBoundingBox.y
ImageMetadata
ImagePoint
ObjectsResult
PeopleResult
ReadResult
SmartCropsResult
SmartCropsResult.as_dict()
SmartCropsResult.clear()
SmartCropsResult.copy()
SmartCropsResult.get()
SmartCropsResult.items()
SmartCropsResult.keys()
SmartCropsResult.pop()
SmartCropsResult.popitem()
SmartCropsResult.setdefault()
SmartCropsResult.update()
SmartCropsResult.values()
SmartCropsResult.list
TagsResult
VisualFeatures
VisualFeatures.capitalize()
VisualFeatures.casefold()
VisualFeatures.center()
VisualFeatures.count()
VisualFeatures.encode()
VisualFeatures.endswith()
VisualFeatures.expandtabs()
VisualFeatures.find()
VisualFeatures.format()
VisualFeatures.format_map()
VisualFeatures.index()
VisualFeatures.isalnum()
VisualFeatures.isalpha()
VisualFeatures.isascii()
VisualFeatures.isdecimal()
VisualFeatures.isdigit()
VisualFeatures.isidentifier()
VisualFeatures.islower()
VisualFeatures.isnumeric()
VisualFeatures.isprintable()
VisualFeatures.isspace()
VisualFeatures.istitle()
VisualFeatures.isupper()
VisualFeatures.join()
VisualFeatures.ljust()
VisualFeatures.lower()
VisualFeatures.lstrip()
VisualFeatures.maketrans()
VisualFeatures.partition()
VisualFeatures.removeprefix()
VisualFeatures.removesuffix()
VisualFeatures.replace()
VisualFeatures.rfind()
VisualFeatures.rindex()
VisualFeatures.rjust()
VisualFeatures.rpartition()
VisualFeatures.rsplit()
VisualFeatures.rstrip()
VisualFeatures.split()
VisualFeatures.splitlines()
VisualFeatures.startswith()
VisualFeatures.strip()
VisualFeatures.swapcase()
VisualFeatures.title()
VisualFeatures.translate()
VisualFeatures.upper()
VisualFeatures.zfill()
VisualFeatures.CAPTION
VisualFeatures.DENSE_CAPTIONS
VisualFeatures.OBJECTS
VisualFeatures.PEOPLE
VisualFeatures.READ
VisualFeatures.SMART_CROPS
VisualFeatures.TAGS
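For orientation, a hedged sketch of walking the Read (OCR) portion of an ImageAnalysisResult using the models listed above; result is assumed to come from an analyze() call that requested VisualFeatures.READ:

if result.read is not None:
    for block in result.read.blocks:      # DetectedTextBlock
        for line in block.lines:          # DetectedTextLine
            print(line.text, line.bounding_polygon)
            for word in line.words:       # DetectedTextWord
                print(f"  {word.text} ({word.confidence:.2f})")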