azure.ai.ml.entities package¶
Contains entities and SDK objects for the Azure Machine Learning SDK v2.
Main areas include managing compute targets, creating and managing workspaces and jobs, and submitting and accessing models, runs, and run output/logging.
- class azure.ai.ml.entities.APIKeyConnection(*, api_base: str, api_key: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A generic connection for any API key-based service.
- Parameters:
api_base (str) – The URL of the API service that the connection targets.
api_key (Optional[str]) – The API key used to connect to the service. Defaults to None.
metadata (Optional[Dict[Any, Any]]) – Metadata for the connection. Defaults to None.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ManagedIdentityConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target URL for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.AadCredentialConfiguration[source]¶
Azure Active Directory Credential Configuration
- class azure.ai.ml.entities.AccessKeyConfiguration(*, access_key_id: str, secret_access_key: str)[source]¶
Access Key Credentials.
- Parameters:
access_key_id (str) – The access key ID.
secret_access_key (str) – The secret access key.
- class azure.ai.ml.entities.AlertNotification(*, emails: List[str] | None = None)[source]¶
Alert notification configuration for monitoring jobs
- Keyword Arguments:
emails (Optional[List[str]]) – A list of email addresses that will receive notifications for monitoring alerts. Defaults to None.
Example:
Configuring alert notifications for a monitored job.¶
from azure.ai.ml.entities import (
    AlertNotification,
    MonitorDefinition,
    MonitoringTarget,
    SparkResourceConfiguration,
)

monitor_definition = MonitorDefinition(
    compute=SparkResourceConfiguration(instance_type="standard_e4s_v3", runtime_version="3.3"),
    monitoring_target=MonitoringTarget(
        ml_task="Classification",
        endpoint_deployment_id="azureml:fraud_detection_endpoint:fraud_detection_deployment",
    ),
    alert_notification=AlertNotification(emails=["abc@example.com", "def@example.com"]),
)
- class azure.ai.ml.entities.AmlCompute(*, name: str, description: str | None = None, size: str | None = None, tags: dict | None = None, ssh_public_access_enabled: bool | None = None, ssh_settings: AmlComputeSshSettings | None = None, min_instances: int | None = None, max_instances: int | None = None, network_settings: NetworkSettings | None = None, idle_time_before_scale_down: int | None = None, identity: IdentityConfiguration | None = None, tier: str | None = None, enable_node_public_ip: bool = True, **kwargs: Any)[source]¶
AzureML Compute resource.
- Parameters:
name (str) – Name of the compute resource.
description (Optional[str]) – Description of the compute resource.
size (Optional[str]) – Size of the compute. Defaults to None.
tags (Optional[dict[str, str]]) – A set of tags. Contains resource tags defined as key/value pairs.
ssh_settings (Optional[AmlComputeSshSettings]) – SSH settings to access the AzureML compute cluster.
network_settings (Optional[NetworkSettings]) – Virtual network settings for the AzureML compute cluster.
idle_time_before_scale_down (Optional[int]) – Node idle time before scaling down. Defaults to None.
identity (Optional[IdentityConfiguration]) – The identities that are associated with the compute cluster.
tier (Optional[str]) – Virtual Machine tier. Accepted values include: “Dedicated”, “LowPriority”. Defaults to None.
min_instances (Optional[int]) – Minimum number of instances. Defaults to None.
max_instances (Optional[int]) – Maximum number of instances. Defaults to None.
ssh_public_access_enabled (Optional[bool]) – State of the public SSH port. Accepted values are:
* True – The public SSH port is open on all nodes of the cluster.
* False – The public SSH port is closed on all nodes of the cluster.
* None – The public SSH port is closed on all nodes of the cluster if a virtual network is defined, otherwise open on all public nodes. This value can only be None during cluster creation; after creation it will be either True or False.
Defaults to None.
enable_node_public_ip (bool) – Enable or disable node public IP address provisioning. Accepted values are:
* True – The compute nodes will have public IPs provisioned.
* False – The compute nodes will have a private endpoint and no public IPs.
Defaults to True.
Example:
Creating an AmlCompute object.¶
from azure.ai.ml.entities import AmlCompute, IdentityConfiguration, ManagedIdentityConfiguration

aml_compute = AmlCompute(
    name="my-aml-compute",
    min_instances=0,
    max_instances=10,
    idle_time_before_scale_down=100,
    identity=IdentityConfiguration(
        type="UserAssigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(
                resource_id="/subscriptions/1234567-abcd-ef12-1234-12345/resourcegroups/our_rg_eastus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/our-agent-aks"
            )
        ],
    ),
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the compute content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this compute’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property created_on: str | None¶
The compute resource creation timestamp.
- Returns:
The compute resource creation timestamp.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_errors: str | None¶
The compute resource provisioning errors.
- Returns:
The compute resource provisioning errors.
- Return type:
Optional[str]
- class azure.ai.ml.entities.AmlComputeNodeInfo[source]¶
Compute node information related to AmlCompute.
- class azure.ai.ml.entities.AmlComputeSshSettings(*, admin_username: str, admin_password: str | None = None, ssh_key_value: str | None = None)[source]¶
SSH settings to access an AML compute target.
- Parameters:
admin_username (str) – The username of the administrator account.
admin_password (Optional[str]) – The password of the administrator account. Defaults to None.
ssh_key_value (Optional[str]) – The SSH public key of the administrator account. Defaults to None.
Example:
Configuring an AmlComputeSshSettings object.¶
from azure.ai.ml.entities import AmlComputeSshSettings

ssh_settings = AmlComputeSshSettings(
    admin_username="azureuser",
    ssh_key_value="ssh-rsa ABCDEFGHIJKLMNOPQRSTUVWXYZ administrator@MININT-2023",
    admin_password="password123",
)
- class azure.ai.ml.entities.AmlTokenConfiguration[source]¶
AzureML Token identity configuration.
Example:
Configuring an AmlTokenConfiguration for a command().¶
from azure.ai.ml import Input, command
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities._credentials import AmlTokenConfiguration

node = command(
    description="description",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    code="./tests/test_configs/training/",
    command="python read_data.py --input_data ${{inputs.input_data}}",
    inputs={"input_data": Input(type=AssetTypes.MLTABLE, path="./sample_data")},
    display_name="builder_command_job",
    compute="testCompute",
    experiment_name="mfe-test1-dataset",
    identity=AmlTokenConfiguration(),
)
- class azure.ai.ml.entities.ApiKeyConfiguration(*, key: str)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Api Key Credentials.
- Parameters:
key (str) – The API key.
- class azure.ai.ml.entities.Asset(name: str | None = None, version: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Base class for asset.
This class should not be instantiated directly. Instead, use one of its subclasses.
- Parameters:
name (Optional[str]) – The name of the asset. Defaults to a random GUID.
version (Optional[str]) – The version of the asset. Defaults to “1” if no name is provided; otherwise, defaults to autoincrementing from the last registered version of the asset with that name. For a model name that has never been registered, a default version is assigned.
description (Optional[str]) – The description of the resource. Defaults to None.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
properties (Optional[dict[str, str]]) – The asset property dictionary. Defaults to None.
- Keyword Arguments:
kwargs (Optional[dict]) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.AssignedUserConfiguration(*, user_tenant_id: str, user_object_id: str)[source]¶
Settings to create a compute resource on behalf of another user.
- Parameters:
user_tenant_id (str) – The tenant ID of the user to assign the compute target to.
user_object_id (str) – The object ID of the user to assign the compute target to.
Example:
Creating an AssignedUserConfiguration.¶
from azure.ai.ml.entities import AssignedUserConfiguration

on_behalf_of_config = AssignedUserConfiguration(user_tenant_id="12345", user_object_id="abcdef")
- class azure.ai.ml.entities.AutoPauseSettings(*, delay_in_minutes: int | None = None, enabled: bool | None = None)[source]¶
Auto pause settings for Synapse Spark compute.
- Keyword Arguments:
delay_in_minutes (Optional[int]) – The number of idle minutes before the Spark pool is automatically paused. Defaults to None.
enabled (Optional[bool]) – Whether auto-pause is enabled. Defaults to None.
Example:
Configuring AutoPauseSettings on SynapseSparkCompute.¶
from azure.ai.ml.entities import (
    AutoPauseSettings,
    AutoScaleSettings,
    IdentityConfiguration,
    ManagedIdentityConfiguration,
    SynapseSparkCompute,
)

synapse_compute = SynapseSparkCompute(
    name="synapse_name",
    resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.Synapse/workspaces/workspace/bigDataPools/pool",
    identity=IdentityConfiguration(
        type="UserAssigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(
                resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity"
            )
        ],
    ),
    scale_settings=AutoScaleSettings(min_node_count=1, max_node_count=3, enabled=True),
    auto_pause_settings=AutoPauseSettings(delay_in_minutes=10, enabled=True),
)
- class azure.ai.ml.entities.AutoScaleSettings(*, min_node_count: int | None = None, max_node_count: int | None = None, enabled: bool | None = None)[source]¶
Auto-scale settings for Synapse Spark compute.
- Keyword Arguments:
min_node_count (Optional[int]) – The minimum node count. Defaults to None.
max_node_count (Optional[int]) – The maximum node count. Defaults to None.
enabled (Optional[bool]) – Whether auto-scale is enabled. Defaults to None.
Example:
Configuring AutoScaleSettings on SynapseSparkCompute.¶
from azure.ai.ml.entities import (
    AutoPauseSettings,
    AutoScaleSettings,
    IdentityConfiguration,
    ManagedIdentityConfiguration,
    SynapseSparkCompute,
)

synapse_compute = SynapseSparkCompute(
    name="synapse_name",
    resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.Synapse/workspaces/workspace/bigDataPools/pool",
    identity=IdentityConfiguration(
        type="UserAssigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(
                resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity"
            )
        ],
    ),
    scale_settings=AutoScaleSettings(min_node_count=1, max_node_count=3, enabled=True),
    auto_pause_settings=AutoPauseSettings(delay_in_minutes=10, enabled=True),
)
- class azure.ai.ml.entities.AzureAISearchConfig(*, index_name: str | None = None, connection_id: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Config class for creating an Azure AI Search index.
- class azure.ai.ml.entities.AzureAISearchConnection(*, endpoint: str, api_key: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs: Any)[source]¶
A Connection that is specifically designed for handling connections to Azure AI Search.
- Parameters:
endpoint (str) – The URL of the Azure AI Search service.
api_key (Optional[str]) – The API key used to connect to the service. Defaults to None.
metadata (Optional[Dict[Any, Any]]) – Metadata for the connection. Defaults to None.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ManagedIdentityConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target URL for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.AzureAIServicesConnection(*, endpoint: str, api_key: str | None = None, ai_services_resource_id: str, metadata: Dict[Any, Any] | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A Connection geared towards Azure AI services.
- Parameters:
name (str) – Name of the connection.
endpoint (str) – The URL or ARM resource ID of the external resource.
api_key (Optional[str]) – The API key used to connect to the Azure endpoint. If unset, the user’s Entra ID is used as the credential instead.
ai_services_resource_id (str) – The fully qualified ID of the Azure AI service resource to connect to.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property ai_services_resource_id: str | None¶
The resource ID of the AI service being connected to.
- Returns:
The resource ID of the AI service being connected to.
- Return type:
Optional[str]
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ManagedIdentityConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target URL for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.AzureBlobDatastore(*, name: str, account_name: str, container_name: str, description: str | None = None, tags: Dict | None = None, endpoint: str | None = None, protocol: str = 'https', properties: Dict | None = None, credentials: AccountKeyConfiguration | SasTokenConfiguration | None = None, **kwargs: Any)[source]¶
Azure Blob storage that is linked to an Azure ML workspace.
- Parameters:
name (str) – Name of the datastore.
account_name (str) – Name of the Azure storage account.
container_name (str) – Name of the container.
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
endpoint (str) – Endpoint to use to connect with the Azure storage account.
protocol (str) – Protocol to use to connect with the Azure storage account.
properties (dict[str, str]) – The asset property dictionary.
credentials (Union[AccountKeyConfiguration, SasTokenConfiguration]) – Credentials to use for Azure ML workspace to connect to the storage.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the datastore content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this datastore’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.AzureBlobStoreConnection(*, url: str, container_name: str, account_name: str, metadata: Dict[Any, Any] | None = None, **kwargs)[source]¶
A connection to an Azure Blob Store.
- Parameters:
name (str) – Name of the connection.
url (str) – The URL or ARM resource ID of the external resource.
container_name (str) – The name of the container.
account_name (str) – The name of the account.
credentials (Union[AccountKeyConfiguration, SasTokenConfiguration, AadCredentialConfiguration]) – The credentials for authenticating to the blob store. This connection type accepts three kinds of credentials: account key credentials, SAS token credentials, or AAD credentials for credential-less connections.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property account_name: str | None¶
The name of the connection’s account.
- Returns:
The name of the account.
- Return type:
Optional[str]
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property container_name: str | None¶
The name of the connection’s container.
- Returns:
The name of the container.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ManagedIdentityConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target URL for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.AzureContentSafetyConnection(*, endpoint: str, api_key: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs: Any)[source]¶
A Connection geared towards an Azure Content Safety service.
- Parameters:
endpoint (str) – The URL of the Azure Content Safety service.
api_key (Optional[str]) – The API key used to connect to the service. Defaults to None.
metadata (Optional[Dict[Any, Any]]) – Metadata for the connection. Defaults to None.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ManagedIdentityConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target URL for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.AzureDataLakeGen1Datastore(*, name: str, store_name: str, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, credentials: CertificateConfiguration | ServicePrincipalConfiguration | None = None, **kwargs: Any)[source]¶
Azure Data Lake Gen1 datastore that is linked to an Azure ML workspace.
- Parameters:
name (str) – Name of the datastore.
store_name (str) – Name of the Azure storage resource.
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
credentials (Union[CertificateConfiguration, ServicePrincipalConfiguration]) – Credentials to use for the Azure ML workspace to connect to the storage.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the datastore content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this datastore’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.AzureDataLakeGen2Datastore(*, name: str, account_name: str, filesystem: str, description: str | None = None, tags: Dict | None = None, endpoint: str = 'core.windows.net', protocol: str = 'https', properties: Dict | None = None, credentials: CertificateConfiguration | ServicePrincipalConfiguration | None = None, **kwargs: Any)[source]¶
Azure Data Lake Gen2 datastore that is linked to an Azure ML workspace.
- Parameters:
name (str) – Name of the datastore.
account_name (str) – Name of the Azure storage account.
filesystem (str) – The name of the Data Lake Gen2 filesystem.
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
endpoint (str) – Endpoint to use to connect with the Azure storage account
protocol (str) – Protocol to use to connect with the Azure storage account
credentials (Union[CertificateConfiguration, ServicePrincipalConfiguration]) – Credentials to use for Azure ML workspace to connect to the storage.
properties (dict[str, str]) – The asset property dictionary.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the datastore content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this datastore’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.AzureFileDatastore(*, name: str, account_name: str, file_share_name: str, description: str | None = None, tags: Dict | None = None, endpoint: str = 'core.windows.net', protocol: str = 'https', properties: Dict | None = None, credentials: AccountKeyConfiguration | SasTokenConfiguration | None = None, **kwargs: Any)[source]¶
Azure file share that is linked to an Azure ML workspace.
- Parameters:
name (str) – Name of the datastore.
account_name (str) – Name of the Azure storage account.
file_share_name (str) – Name of the file share.
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
endpoint (str) – Endpoint to use to connect with the Azure storage account
protocol (str) – Protocol to use to connect with the Azure storage account
properties (dict[str, str]) – The asset property dictionary.
credentials (Union[AccountKeyConfiguration, SasTokenConfiguration]) – Credentials to use for Azure ML workspace to connect to the storage. Defaults to None.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the datastore content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this datastore’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.AzureMLBatchInferencingServer(*, code_configuration: CodeConfiguration | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Azure ML batch inferencing configurations.
- Parameters:
code_configuration (azure.ai.ml.entities.CodeConfiguration) – The code configuration of the inferencing server.
- Variables:
type – The type of the inferencing server.
- class azure.ai.ml.entities.AzureMLOnlineInferencingServer(*, code_configuration: CodeConfiguration | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Azure ML online inferencing configurations.
- Parameters:
code_configuration (azure.ai.ml.entities.CodeConfiguration) – The code configuration of the inferencing server.
- Variables:
type – The type of the inferencing server.
- class azure.ai.ml.entities.AzureOpenAIConnection(*, azure_endpoint: str, api_key: str | None = None, api_version: str | None = None, api_type: str = 'Azure', open_ai_resource_id: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A Connection that is specifically designed for handling connections to Azure Open AI.
- Parameters:
name (str) – Name of the connection.
azure_endpoint (str) – The URL or ARM resource ID of the Azure Open AI Resource.
api_key (Optional[str]) – The api key to connect to the azure endpoint. If unset, tries to use the user’s Entra ID as credentials instead.
open_ai_resource_id (Optional[str]) – The fully qualified ID of the Azure Open AI resource to connect to.
api_version (Optional[str]) – The api version that this connection was created for.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property api_version: str | None¶
The API version of the connection.
- Returns:
The API version of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property open_ai_resource_id: str | None¶
The fully qualified ID of the Azure Open AI resource this connects to.
- Returns:
The fully qualified ID of the Azure Open AI resource this connects to.
- Return type:
Optional[str]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target url for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.AzureOpenAIDeployment(*args: Any, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Azure OpenAI Deployment Information.
Readonly variables are only populated by the server, and will be ignored when sending a request.
- Variables:
name (str) – The deployment name.
model_name (str) – The name of the model to deploy.
model_version (str) – The model version to deploy.
connection_name (str) – The name of the connection to deploy to.
target_url (str) – The target URL of the AOAI resource for the deployment.
id (str) – The ARM resource id of the deployment.
properties (dict[str, str]) – Properties of the deployment.
tags (dict[str, str]) – Tags of the deployment.
system_data (SystemData) – System data of the deployment.
- as_dict(*, exclude_readonly: bool = False) Dict[str, Any][source]¶
Return a dict that can be serialized to JSON using json.dump.
- clear() None. Remove all items from D.¶
- copy() Model¶
- get(k[, d]) D[k] if k in D, else d. d defaults to None.¶
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If key is not found, d is returned if given, otherwise KeyError is raised.
- popitem() (k, v), remove and return some (key, value) pair¶
as a 2-tuple; but raise KeyError if D is empty.
- setdefault(k[, d]) D.get(k,d), also set D[k]=d if k not in D¶
- update([E, ]**F) None. Update D from mapping/iterable E and F.¶
If E present and has a .keys() method, does: for k in E: D[k] = E[k] If E present and lacks .keys() method, does: for (k, v) in E: D[k] = v In either case, this is followed by: for k, v in F.items(): D[k] = v
- values() an object providing a view on D's values¶
- system_data: SystemData | None¶
System data of the deployment.
- class azure.ai.ml.entities.AzureSpeechServicesConnection(*, endpoint: str, api_key: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs: Any)[source]¶
A Connection geared towards an Azure Speech service.
- Parameters:
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
Optional[str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target url for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.BaseEnvironment(type: str, resource_id: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Base environment type.
All required parameters must be populated in order to send to Azure.
- Parameters:
Example:
Create a Base Environment object.¶
from azure.ai.ml.entities._assets._artifacts._package.base_environment_source import BaseEnvironment

base_environment = BaseEnvironment(type="base-env-type", resource_id="base-env-resource-id")
- class azure.ai.ml.entities.BaselineDataRange(*, window_start: str | None = None, window_end: str | None = None, lookback_window_size: str | None = None, lookback_window_offset: str | None = None)[source]¶
Baseline data range for monitoring.
This class is used when initializing a data_window for a ReferenceData object. For trailing input, set lookback_window_size and lookback_window_offset to a desired value. For static input, set window_start and window_end to a desired value.
- class azure.ai.ml.entities.BatchDeployment(*, name: str, endpoint_name: str | None = None, description: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, str] | None = None, model: str | Model | None = None, code_configuration: CodeConfiguration | None = None, environment: str | Environment | None = None, compute: str | None = None, resources: ResourceConfiguration | None = None, output_file_name: str | None = None, output_action: BatchDeploymentOutputAction | str | None = None, error_threshold: int | None = None, retry_settings: BatchRetrySettings | None = None, logging_level: str | None = None, mini_batch_size: int | None = None, max_concurrency_per_instance: int | None = None, environment_variables: Dict[str, str] | None = None, code_path: str | PathLike | None = None, scoring_script: str | PathLike | None = None, instance_count: int | None = None, **kwargs: Any)[source]¶
Batch endpoint deployment entity.
- Parameters:
name (str) – the name of the batch deployment
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
model (Union[str, Model]) – Model entity for the endpoint deployment, defaults to None
code_configuration (CodeConfiguration) – defaults to None
environment (Union[str, Environment]) – Environment entity for the endpoint deployment, defaults to None
compute (str) – Compute target for batch inference operation.
output_action (str or BatchDeploymentOutputAction) – Indicates how the output will be organized. Possible values include: “summary_only”, “append_row”. Defaults to “append_row”
output_file_name (str) – Customized output file name for append_row output action, defaults to “predictions.csv”
max_concurrency_per_instance (int) – The maximum degree of parallelism per instance, defaults to 1
error_threshold (int) – Error threshold. If the error count for the entire input goes above this value, the batch inference will be aborted. The range is [-1, int.MaxValue], and -1 indicates that all failures during batch inference are ignored. For FileDataset, this is the count of file failures; for TabularDataset, it is the count of record failures. Defaults to -1.
retry_settings (BatchRetrySettings) – Retry settings for a batch inference operation, defaults to None
resources (ResourceConfiguration) – Indicates compute configuration for the job.
logging_level (str) – Logging level for batch inference operation, defaults to “info”
mini_batch_size (int) – Size of the mini-batch passed to each batch invocation, defaults to 10
environment_variables (dict) – Environment variables that will be set in deployment.
code_path (Union[str, PathLike]) – Folder path to local code assets. Equivalent to code_configuration.code.
scoring_script (Union[str, PathLike]) – Scoring script name. Equivalent to code_configuration.code.scoring_script.
instance_count (int) – Number of instances the inference will run on. Equivalent to resources.instance_count.
- Raises:
ValidationException – Raised if BatchDeployment cannot be successfully validated. Details will be provided in the error message.
Endpoint Deployment base class.
Constructor of Endpoint Deployment base class.
- Parameters:
name (Optional[str]) – Name of the deployment resource, defaults to None
- Keyword Arguments:
endpoint_name (Optional[str]) – Name of the Endpoint resource, defaults to None
description (Optional[str]) – Description of the deployment resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated, defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
model (Optional[Union[str, Model]]) – The Model entity, defaults to None
code_configuration (Optional[CodeConfiguration]) – Code Configuration, defaults to None
environment (Optional[Union[str, Environment]]) – The Environment entity, defaults to None
environment_variables (Optional[Dict[str, str]]) – Environment variables that will be set in deployment, defaults to None
code_path (Optional[Union[str, PathLike]]) – Folder path to local code assets. Equivalent to code_configuration.code.path , defaults to None
scoring_script (Optional[Union[str, PathLike]]) – Scoring script name. Equivalent to code_configuration.code.scoring_script , defaults to None
- Raises:
ValidationException – Raised if Deployment cannot be successfully validated. Exception details will be provided in the error message.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the deployment content into a file in yaml format.
- Parameters:
dest (Union[os.PathLike, str, IO[AnyStr]]) – The destination to receive this deployment’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property code_path: str | PathLike | None¶
The code directory containing the scoring script.
- Return type:
Union[str, PathLike]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_state: str | None¶
Batch deployment provisioning state, readonly.
- Returns:
Batch deployment provisioning state.
- Return type:
Optional[str]
- class azure.ai.ml.entities.BatchEndpoint(*, name: str | None = None, tags: Dict | None = None, properties: Dict | None = None, auth_mode: str = 'aad_token', description: str | None = None, location: str | None = None, defaults: Dict[str, str] | None = None, default_deployment_name: str | None = None, scoring_uri: str | None = None, openapi_uri: str | None = None, **kwargs: Any)[source]¶
Batch endpoint entity.
- Parameters:
name (str) – Name of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
auth_mode (str) – Possible values include: “AMLToken”, “Key”, “AADToken”. Defaults to “aad_token”.
description (str) – Description of the inference endpoint, defaults to None
location (str) – The location of the endpoint, defaults to None
defaults (Dict[str, str]) – Traffic rules on how the traffic will be routed across deployments, defaults to {}
default_deployment_name (str) – Equivalent to defaults.default_deployment, will be ignored if defaults is present.
scoring_uri (str) – URI to use to perform a prediction, readonly.
openapi_uri (str) – URI to check the open API definition of the endpoint.
Endpoint base class.
Constructor for Endpoint base class.
- Parameters:
auth_mode (str) – The authentication mode, defaults to None
location (str) – The location of the endpoint, defaults to None
name (str) – Name of the resource.
tags (Optional[Dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated.
properties (Optional[Dict[str, str]]) – The asset property dictionary.
- Keyword Arguments:
- dump(dest: str | PathLike | IO | None = None, **kwargs: Any) Dict[str, Any][source]¶
Dump the object content into a file.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.BatchJob(**kwargs: Any)[source]¶
Batch jobs that are created with batch deployments/endpoints invocation.
This class shouldn’t be instantiated directly. Instead, it is used as the return type of batch deployment/endpoint invocation and job listing.
- class azure.ai.ml.entities.BatchRetrySettings(*, max_retries: int | None = None, timeout: int | None = None)[source]¶
Retry settings for batch deployment.
- class azure.ai.ml.entities.BuildContext(*, dockerfile_path: str | None = None, path: str | PathLike | None = None)[source]¶
Docker build context for Environment.
- Parameters:
path (Union[str, os.PathLike]) – The local or remote path to the docker build context directory.
dockerfile_path (str) – The path to the dockerfile relative to the root of the docker build context directory.
Example:
Create a Build Context object.¶
from azure.ai.ml.entities._assets.environment import BuildContext

build_context = BuildContext(dockerfile_path="docker-file-path", path="docker-build-context-path")
- class azure.ai.ml.entities.CapabilityHost(*, name: str, description: str | None = None, vector_store_connections: List[str] | None = None, ai_services_connections: List[str] | None = None, storage_connections: List[str] | None = None, capability_host_kind: str | CapabilityHostKind = CapabilityHostKind.AGENTS, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Initialize a CapabilityHost instance. Capabilityhost management is controlled by MLClient’s capabilityhosts operations.
- Parameters:
name (str) – The name of the capability host.
description (Optional[str]) – The description of the capability host.
vector_store_connections (Optional[List[str]]) – A list of vector store (AI Search) connections.
ai_services_connections (Optional[List[str]]) – A list of OpenAI service connections.
storage_connections (Optional[List[str]]) – A list of storage connections. Default storage connection value is projectname/workspaceblobstore for project workspace.
capability_host_kind (Union[str, CapabilityHostKind]) – The kind of capability host, either as a string or CapabilityHostKind enum. Default is AGENTS.
kwargs (Any) – Additional keyword arguments.
Example:
Create a CapabilityHost object.¶
from azure.ai.ml.entities._workspace._ai_workspaces.capability_host import (
    CapabilityHost,
)
from azure.ai.ml.constants._workspace import CapabilityHostKind

# CapabilityHost in Hub workspace. For Hub workspace, only name and description are required.
capability_host = CapabilityHost(
    name="test-capability-host",
    description="some description",
    capability_host_kind=CapabilityHostKind.AGENTS,
)

# CapabilityHost in Project workspace
capability_host = CapabilityHost(
    name="test-capability-host",
    description="some description",
    capability_host_kind=CapabilityHostKind.AGENTS,
    ai_services_connections=["connection1"],
    storage_connections=["projectname/workspaceblobstore"],
    vector_store_connections=["connection1"],
)
- dump(dest: str | PathLike | IO | None, **kwargs: Any) None[source]¶
Dump the CapabilityHost content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this CapabilityHost’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.CapabilityHostKind(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Capabilityhost kind.
- capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower case.
- casefold()¶
Return a version of the string suitable for caseless comparisons.
- center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
- count(sub[, start[, end]]) int¶
Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.
- encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
- encoding
The encoding in which to encode the string.
- errors
The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.
- endswith(suffix[, start[, end]]) bool¶
Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
- expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
- find(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- format(*args, **kwargs) str¶
Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).
- format_map(mapping) str¶
Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).
- index(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.
- isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.
- isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.
- isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.
- isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there is at least one character in the string.
- isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.
- islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.
- isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at least one character in the string.
- isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in repr() or if it is empty.
- isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.
- istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.
- isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.
- join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string. The result is returned as a new string.
Example: ‘.’.join([‘ab’, ‘pq’, ‘rs’]) -> ‘ab.pq.rs’
- ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
- lower()¶
Return a copy of the string converted to lowercase.
- lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
- static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings, or None. Character keys will then be converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in the first string will be mapped to the character at the same position in the second string. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
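The three calling forms described above, paired with str.translate(), can be sketched as:

```python
# One argument: a mapping from characters (or ordinals) to replacements;
# a value of None deletes the character.
table = str.maketrans({"a": "1", "b": None})
assert "abc".translate(table) == "1c"

# Two arguments: equal-length strings mapped position by position.
table = str.maketrans("xy", "XY")
assert "xyz".translate(table) == "XYz"

# Three arguments: characters in the third string are mapped to None (deleted).
table = str.maketrans("ab", "AB", "c")
assert "abc".translate(table) == "AB"
```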
- partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
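Both outcomes, and the contrast with rpartition() (which searches from the end), in a short sketch:

```python
# Separator found: 3-tuple of (before, separator, after).
assert "key=value".partition("=") == ("key", "=", "value")

# Separator not found: the original string plus two empty strings.
assert "novalue".partition("=") == ("novalue", "", "")

# rpartition() searches from the end of the string instead.
assert "a.b.c".rpartition(".") == ("a.b", ".", "c")
```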
- removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
- removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
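A quick sketch of both methods (available since Python 3.9):

```python
assert "unhappy".removeprefix("un") == "happy"
assert "report.csv".removesuffix(".csv") == "report"

# No match: a copy of the original string is returned unchanged.
assert "report.csv".removeprefix("x") == "report.csv"
```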
- replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
- count
Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are replaced.
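For example:

```python
assert "aaaa".replace("a", "b") == "bbbb"     # default count=-1: replace all
assert "aaaa".replace("a", "b", 2) == "bbaa"  # only the first two occurrences
```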
- rfind(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- rindex(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
- rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings and the original string.
- rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
- rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the front of the string and works to the end.
Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
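The differences between the default whitespace splitting, an explicit separator, maxsplit, and rsplit() can be sketched as:

```python
text = "  one two  three  "

# sep=None splits on runs of whitespace and drops empty strings.
assert text.split() == ["one", "two", "three"]

# An explicit separator keeps empty strings between adjacent separators.
assert "a,,b".split(",") == ["a", "", "b"]

# maxsplit limits the number of splits; rsplit() counts from the right.
assert "a.b.c".split(".", 1) == ["a", "b.c"]
assert "a.b.c".rsplit(".", 1) == ["a.b", "c"]
```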
- splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and true.
- startswith(prefix[, start[, end]]) bool¶
Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.
- strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
- title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining cased characters have lower case.
- translate(table, /)¶
Replace each character in the string using the given translation table.
- table
Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.
- upper()¶
Return a copy of the string converted to uppercase.
- zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
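For example:

```python
assert "42".zfill(5) == "00042"
# A leading sign is handled before the padding is inserted.
assert "-42".zfill(5) == "-0042"
# Strings already at or beyond the requested width are returned unchanged.
assert "123456".zfill(5) == "123456"
```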
- AGENTS = 'Agents'¶
- class azure.ai.ml.entities.CategoricalDriftMetrics(*, jensen_shannon_distance: float | None = None, population_stability_index: float | None = None, pearsons_chi_squared_test: float | None = None)[source]¶
Categorical Drift Metrics
- Parameters:
jensen_shannon_distance – The Jensen-Shannon distance between the two distributions
population_stability_index – The population stability index between the two distributions
pearsons_chi_squared_test – The Pearson’s Chi-Squared test between the two distributions
- class azure.ai.ml.entities.CertificateConfiguration(certificate: str | None = None, thumbprint: str | None = None, **kwargs: str)[source]¶
- class azure.ai.ml.entities.CodeConfiguration(code: str | PathLike | None = None, scoring_script: str | PathLike | None = None)[source]¶
Code configuration for a scoring job.
- Parameters:
code (Optional[Union[Code, str]]) – The code directory containing the scoring script. The code can be an Code object, an ARM resource ID of an existing code asset, a local path, or “http:”, “https:”, or “azureml:” url pointing to a remote location.
scoring_script (Optional[str]) – The scoring script file path relative to the code directory.
Example:
Creating a CodeConfiguration for a BatchDeployment.¶
from azure.ai.ml.entities import BatchDeployment, CodeConfiguration

deployment = BatchDeployment(
    name="non-mlflow-deployment",
    description="this is a sample non-mlflow deployment",
    endpoint_name="my-batch-endpoint",
    model=model,
    code_configuration=CodeConfiguration(
        code="configs/deployments/model-2/onlinescoring", scoring_script="score1.py"
    ),
    environment="env",
    compute="cpu-cluster",
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_file_name="predictions.csv",
)
- class azure.ai.ml.entities.Command(*, component: str | CommandComponent, compute: str | None = None, inputs: Dict[str, Input | str | bool | int | float | Enum] | None = None, outputs: Dict[str, str | Output] | None = None, limits: CommandJobLimits | None = None, identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, distribution: Dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None = None, environment: Environment | str | None = None, environment_variables: Dict | None = None, resources: JobResourceConfiguration | None = None, services: Dict[str, JobService | JupyterLabJobService | SshJobService | TensorBoardJobService | VsCodeJobService] | None = None, queue_settings: QueueSettings | None = None, **kwargs: Any)[source]¶
Base class for command node, used for command component version consumption.
You should not instantiate this class directly. Instead, you should create it using the builder function: command().
- Keyword Arguments:
component (Union[str, CommandComponent]) – The ID or instance of the command component or job to be run for the step.
compute (Optional[str]) – The compute target the job will run on.
inputs (Optional[dict[str, Union[ Input, str, bool, int, float, Enum]]]) – A mapping of input names to input data sources used in the job.
outputs (Optional[dict[str, Union[str, Output]]]) – A mapping of output names to output data sources used in the job.
limits (CommandJobLimits) – The limits for the command component or job.
identity (Optional[Union[ dict[str, str], ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – The identity that the command job will use while running on compute.
distribution (Optional[Union[dict, PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]]) – The configuration for distributed jobs.
environment (Optional[Union[str, Environment]]) – The environment that the job will run in.
environment_variables (Optional[dict[str, str]]) – A dictionary of environment variable names and values. These environment variables are set on the process where the user script is being executed.
resources (Optional[JobResourceConfiguration]) – The compute resource configuration for the command.
services (Optional[dict[str, Union[JobService, JupyterLabJobService, SshJobService, TensorBoardJobService, VsCodeJobService]]]) – The interactive services for the node. This is an experimental parameter, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
queue_settings (Optional[QueueSettings]) – Queue settings for the job.
- Raises:
ValidationException – Raised if Command cannot be successfully validated. Details will be provided in the error message.
- clear() None. Remove all items from D.¶
- copy() a shallow copy of D¶
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- set_limits(*, timeout: int, **kwargs: Any) None[source]¶
Set limits for Command.
- Keyword Arguments:
timeout (int) – The timeout for the job in seconds.
Example:
Setting a timeout limit of 10 seconds on a Command.¶
from azure.ai.ml import Input, Output, command

command_node = command(
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command='echo "hello world"',
    distribution={"type": "Pytorch", "process_count_per_instance": 2},
    inputs={
        "training_data": Input(type="uri_folder"),
        "max_epochs": 20,
        "learning_rate": 1.8,
        "learning_rate_schedule": "time-based",
    },
    outputs={"model_output": Output(type="uri_folder")},
)
command_node.set_limits(timeout=10)
- set_queue_settings(*, job_tier: str | None = None, priority: str | None = None) None[source]¶
Set QueueSettings for the job.
- Keyword Arguments:
job_tier (Optional[str]) – The job tier. Accepted values are “Spot”, “Basic”, “Standard”, or “Premium”.
priority (Optional[str]) – The compute priority. Accepted values are “low”, “medium”, and “high”.
Example:
Configuring queue settings on a Command.¶
from azure.ai.ml import Input, Output, command

command_node = command(
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command='echo "hello world"',
    distribution={"type": "Pytorch", "process_count_per_instance": 2},
    inputs={
        "training_data": Input(type="uri_folder"),
        "max_epochs": 20,
        "learning_rate": 1.8,
        "learning_rate_schedule": "time-based",
    },
    outputs={"model_output": Output(type="uri_folder")},
)
command_node.set_queue_settings(job_tier="standard", priority="medium")
- set_resources(*, instance_type: str | List[str] | None = None, instance_count: int | None = None, locations: List[str] | None = None, properties: Dict | None = None, docker_args: str | None = None, shm_size: str | None = None, **kwargs: Any) None[source]¶
Set resources for Command.
- Keyword Arguments:
instance_type (Optional[Union[str, List[str]]]) – The type of compute instance to run the job on. If not specified, the job will run on the default compute target.
instance_count (Optional[int]) – The number of instances to run the job on. If not specified, the job will run on a single instance.
locations (Optional[List[str]]) – The list of locations where the job will run. If not specified, the job will run on the default compute target.
properties (Optional[dict]) – The properties of the job.
docker_args (Optional[str]) – The Docker arguments for the job.
shm_size (Optional[str]) – The size of the docker container’s shared memory block. This should be in the format of (number)(unit) where the number has to be greater than 0 and the unit can be one of b(bytes), k(kilobytes), m(megabytes), or g(gigabytes).
Example:
Setting resources on a Command.¶
from azure.ai.ml import Input, Output, command

command_node = command(
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command='echo "hello world"',
    distribution={"type": "Pytorch", "process_count_per_instance": 2},
    inputs={
        "training_data": Input(type="uri_folder"),
        "max_epochs": 20,
        "learning_rate": 1.8,
        "learning_rate_schedule": "time-based",
    },
    outputs={"model_output": Output(type="uri_folder")},
)
command_node.set_resources(
    instance_count=1,
    instance_type="STANDARD_D2_v2",
    properties={"key": "new_val"},
    shm_size="3g",
)
- setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
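Both branches in a short sketch:

```python
d = {"a": 1}

# Key present: the existing value is returned and the dict is unchanged.
assert d.setdefault("a", 99) == 1

# Key absent: the default is inserted and returned.
assert d.setdefault("b", 2) == 2
assert d == {"a": 1, "b": 2}
```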
- sweep(*, primary_metric: str, goal: str, sampling_algorithm: str = 'random', compute: str | None = None, max_concurrent_trials: int | None = None, max_total_trials: int | None = None, timeout: int | None = None, trial_timeout: int | None = None, early_termination_policy: str | EarlyTerminationPolicy | None = None, search_space: Dict[str, Choice | LogNormal | LogUniform | Normal | QLogNormal | QLogUniform | QNormal | QUniform | Randint | Uniform] | None = None, identity: ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, queue_settings: QueueSettings | None = None, job_tier: str | None = None, priority: str | None = None) Sweep[source]¶
Turns the command into a sweep node with extra sweep run setting. The command component in the current command node will be used as its trial component. A command node can sweep multiple times, and the generated sweep node will share the same trial component.
- Keyword Arguments:
primary_metric (str) – The primary metric of the sweep objective - e.g. AUC (Area Under the Curve). The metric must be logged while running the trial component.
goal (str) – The goal of the Sweep objective. Accepted values are “minimize” or “maximize”.
sampling_algorithm (str) – The sampling algorithm to use inside the search space. Acceptable values are “random”, “grid”, or “bayesian”. Defaults to “random”.
compute (Optional[str]) – The target compute to run the node on. If not specified, the current node’s compute will be used.
max_total_trials (Optional[int]) – The maximum number of total trials to run. This value will overwrite the value in CommandJob.limits if specified.
max_concurrent_trials (Optional[int]) – The maximum number of concurrent trials for the Sweep job.
timeout (Optional[int]) – The maximum run duration in seconds, after which the job will be cancelled.
trial_timeout (Optional[int]) – The Sweep Job trial timeout value, in seconds.
early_termination_policy (Optional[Union[BanditPolicy, TruncationSelectionPolicy, MedianStoppingPolicy, str]]) – The early termination policy of the sweep node. Acceptable values are “bandit”, “median_stopping”, or “truncation_selection”. Defaults to None.
identity (Optional[Union[ ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – The identity that the job will use while running on compute.
search_space (Optional[Dict[str, Union[Choice, LogNormal, LogUniform, Normal, QLogNormal, QLogUniform, QNormal, QUniform, Randint, Uniform]]]) – The search space to use for the sweep job.
queue_settings (Optional[QueueSettings]) – The queue settings for the job.
job_tier (Optional[str]) – Experimental. The job tier. Accepted values are “Spot”, “Basic”, “Standard”, or “Premium”.
priority (Optional[str]) – Experimental. The compute priority. Accepted values are “low”, “medium”, and “high”.
- Returns:
A Sweep node with the component from current Command node as its trial component.
- Return type:
Example:
Creating a Sweep node from a Command job.¶
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
from azure.ai.ml.sweep import Uniform

job_for_sweep = job(
    kernel=Uniform(min_value=0.0005, max_value=0.005),
    penalty=Uniform(min_value=0.9, max_value=0.99),
)

from azure.ai.ml.sweep import BanditPolicy

sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="best_val_acc",
    goal="Maximize",
    max_total_trials=8,
    max_concurrent_trials=4,
    early_termination_policy=BanditPolicy(
        slack_factor=0.15, evaluation_interval=1, delay_evaluation=10
    ),
)
- update([E, ]**F) None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
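Both forms of E, plus keyword pairs F, in a short sketch:

```python
d = {"a": 1}
d.update({"b": 2}, c=3)  # E with a .keys() method, plus keyword pairs F
d.update([("a", 10)])    # E as an iterable of (key, value) pairs
assert d == {"a": 10, "b": 2, "c": 3}
```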
- values() an object providing a view on D's values¶
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property code: PathLike | str | None¶
The source code to run the job.
- Return type:
Optional[Union[str, os.PathLike]]
- property component: str | CommandComponent¶
The ID or instance of the command component or job to be run for the step.
- Returns:
The ID or instance of the command component or job to be run for the step.
- Return type:
Union[str, CommandComponent]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property distribution: Dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None¶
The configuration for the distributed command component or job.
- Returns:
The configuration for distributed jobs.
- Return type:
Union[PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None¶
The identity that the job will use while running on compute.
- Returns:
The identity that the job will use while running on compute.
- Return type:
Optional[Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]
- property queue_settings: QueueSettings | None¶
The queue settings for the command component or job.
- Returns:
The queue settings for the command component or job.
- Return type:
- property resources: JobResourceConfiguration¶
The compute resource configuration for the command component or job.
- Return type:
- property services: Dict[str, JobService | JupyterLabJobService | SshJobService | TensorBoardJobService | VsCodeJobService] | None¶
The interactive services for the node.
This is an experimental parameter, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
- Return type:
dict[str, Union[JobService, JupyterLabJobService, SshJobService, TensorBoardJobService, VsCodeJobService]]
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
- Preparing - The run environment is being prepared and is in one of two stages:
Docker image build
conda environment setup
- Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
- Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- class azure.ai.ml.entities.CommandComponent(*, name: str | None = None, version: str | None = None, description: str | None = None, tags: Dict | None = None, display_name: str | None = None, command: str | None = None, code: PathLike | str | None = None, environment: Environment | str | None = None, distribution: Dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None = None, resources: JobResourceConfiguration | None = None, inputs: Dict | None = None, outputs: Dict | None = None, instance_count: int | None = None, is_deterministic: bool = True, additional_includes: List | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Command component version, used to define a Command Component or Job.
- Keyword Arguments:
name (Optional[str]) – The name of the Command job or component.
version (Optional[str]) – The version of the Command job or component.
description (Optional[str]) – The description of the component. Defaults to None.
tags (Optional[dict]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
display_name (Optional[str]) – The display name of the component.
command (Optional[str]) – The command to be executed.
code – The source code to run the job. Can be a local path or “http:”, “https:”, or “azureml:” url pointing to a remote location.
environment (Optional[Union[str, Environment]]) – The environment that the job will run in.
distribution (Optional[Union[PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]]) – The configuration for distributed jobs. Defaults to None.
resources (Optional[JobResourceConfiguration]) – The compute resource configuration for the command.
inputs (Optional[dict[str, Union[ Input, str, bool, int, float, Enum, ]]]) – A mapping of input names to input data sources used in the job. Defaults to None.
outputs (Optional[dict[str, Union[str, Output]]]) – A mapping of output names to output data sources used in the job. Defaults to None.
instance_count (Optional[int]) – The number of instances or nodes to be used by the compute target. Defaults to 1.
is_deterministic (Optional[bool]) – Specifies whether the Command will return the same output given the same input. Defaults to True. When True, if a Command (component) is deterministic and has been run before in the current workspace with the same input and settings, it will reuse results from a previous submitted job when used as a node or step in a pipeline. In that scenario, no compute resources will be used.
additional_includes (Optional[List[str]]) – A list of shared additional files to be included in the component. Defaults to None.
properties (Optional[dict[str, str]]) – The job property dictionary. Defaults to None.
- Raises:
ValidationException – Raised if CommandComponent cannot be successfully validated. Details will be provided in the error message.
Example:
Creating a CommandComponent.¶
from azure.ai.ml.entities import CommandComponent

component = CommandComponent(
    name="sample_command_component_basic",
    display_name="CommandComponentBasic",
    description="This is the basic command component",
    tags={"tag": "tagvalue", "owner": "sdkteam"},
    version="1",
    outputs={"component_out_path": {"type": "uri_folder"}},
    command="echo Hello World",
    code="./src",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the component content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this component’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property display_name: str | None¶
Display name of the component.
- Returns:
Display name of the component.
- Return type:
- property distribution: dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None¶
The configuration for the distributed command component or job.
- Returns:
The distribution configuration.
- Return type:
Union[PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property instance_count: int | None¶
The number of instances or nodes to be used by the compute target.
- Returns:
The number of instances or nodes.
- Return type:
- property is_deterministic: bool | None¶
Whether the component is deterministic.
- Returns:
Whether the component is deterministic
- Return type:
- property resources: JobResourceConfiguration¶
The compute resource configuration for the command component or job.
- Returns:
The compute resource configuration for the command component or job.
- Return type:
- class azure.ai.ml.entities.CommandJob(*, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict[str, Output] | None = None, limits: CommandJobLimits | None = None, identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, services: Dict[str, JobService | JupyterLabJobService | SshJobService | TensorBoardJobService | VsCodeJobService] | None = None, **kwargs: Any)[source]¶
Command job.
Note
For sweep jobs, inputs, outputs, and parameters are accessible as environment variables using the prefix AZUREML_PARAMETER_. For example, if you have a parameter named “input_data”, you can access it as AZUREML_PARAMETER_input_data.
- Keyword Arguments:
services (Optional[dict[str, JobService]]) – Read-only information on services associated with the job.
inputs (Optional[dict[str, Union[Input, str, bool, int, float]]]) – Mapping of input data bindings used in the command.
outputs (Optional[dict[str, Output]]) – Mapping of output data bindings used in the job.
identity (Optional[Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – The identity that the job will use while running on compute.
limits (Optional[CommandJobLimits]) – The limits for the job.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Configuring a CommandJob.¶
command_job = CommandJob(
    code="./src",
    command="python train.py --ss {search_space.ss}",
    inputs={"input1": Input(path="trial.csv")},
    outputs={"default": Output(path="./foo")},
    compute="trial",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    limits=CommandJobLimits(timeout=120),
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property distribution: dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None¶
The configuration for the distributed command component or job.
- Returns:
The distribution configuration.
- Return type:
Union[PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property resources: JobResourceConfiguration¶
The compute resource configuration for the command component or job.
- Returns:
The compute resource configuration for the command component or job.
- Return type:
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - A temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully, including both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully canceled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been sent recently.
- Returns:
Status of the job.
- Return type:
Optional[str]
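A client-side wait loop over these states can be sketched as follows. get_status stands in for re-fetching the job's status (e.g. via ml_client.jobs.get(name).status, an assumption here); the terminal-state set comes from the status list above:

```python
import time
from typing import Callable, Optional

# Terminal states, per the status list above.
TERMINAL_STATES = {"Completed", "Failed", "Canceled", "NotResponding"}

def wait_for_terminal_status(
    get_status: Callable[[], Optional[str]],
    poll_seconds: float = 30.0,
    sleep: Callable[[float], None] = time.sleep,
) -> Optional[str]:
    """Poll get_status() until the job reaches a terminal state."""
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        sleep(poll_seconds)
```

Injecting sleep makes the loop testable; in real use the defaults suffice.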
- class azure.ai.ml.entities.CommandJobLimits(*, timeout: int | str | None = None)[source]¶
Limits for Command Jobs.
- Keyword Arguments:
timeout (Optional[Union[int, str]]) – The maximum run duration, in seconds, after which the job will be cancelled.
Example:
Configuring a CommandJob with CommandJobLimits.

command_job = CommandJob(
    code="./src",
    command="python train.py --ss {search_space.ss}",
    inputs={"input1": Input(path="trial.csv")},
    outputs={"default": Output(path="./foo")},
    compute="trial",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    limits=CommandJobLimits(timeout=120),
)
- class azure.ai.ml.entities.Component(*, name: str | None = None, version: str | None = None, id: str | None = None, type: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, display_name: str | None = None, is_deterministic: bool = True, inputs: Dict | None = None, outputs: Dict | None = None, yaml_str: str | None = None, _schema: str | None = None, creation_context: SystemData | None = None, **kwargs: Any)[source]¶
Base class for component version, used to define a component. Can’t be instantiated directly.
- Parameters:
name (str) – Name of the resource.
version (str) – Version of the resource.
id (str) – Global ID of the resource, Azure Resource Manager ID.
type (str) – Type of the component. Currently, only ‘command’ is supported.
description (str) – Description of the resource.
tags (dict) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict) – Internal use only.
display_name (str) – Display name of the component.
is_deterministic (bool) – Whether the component is deterministic. Defaults to True.
inputs (dict) – Inputs of the component.
outputs (dict) – Outputs of the component.
yaml_str (str) – The YAML string of the component.
_schema (str) – Schema of the component.
creation_context (SystemData) – Creation metadata of the component.
kwargs – Additional parameters for the component.
- Raises:
ValidationException – Raised if Component cannot be successfully validated. Details will be provided in the error message.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the component content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this component’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property display_name: str | None¶
Display name of the component.
- Returns:
Display name of the component.
- Return type:
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_deterministic: bool | None¶
Whether the component is deterministic.
- Returns:
Whether the component is deterministic.
- Return type:
- class azure.ai.ml.entities.Compute(name: str, location: str | None = None, description: str | None = None, resource_id: str | None = None, tags: Dict | None = None, **kwargs: Any)[source]¶
Base class for compute resources.
This class should not be instantiated directly. Instead, use one of its subclasses.
- Parameters:
type (str) – The compute type. Accepted values are “amlcompute”, “computeinstance”, “virtualmachine”, “kubernetes”, and “synapsespark”.
name (str) – Name of the compute resource.
location (Optional[str]) – The resource location. Defaults to workspace location.
description (Optional[str]) – Description of the resource. Defaults to None.
resource_id (Optional[str]) – ARM resource id of the underlying compute. Defaults to None.
tags (Optional[dict[str, str]]) – A set of tags. Contains resource tags defined as key/value pairs.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the compute content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this compute’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property created_on: str | None¶
The compute resource creation timestamp.
- Returns:
The compute resource creation timestamp.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_errors: str | None¶
The compute resource provisioning errors.
- Returns:
The compute resource provisioning errors.
- Return type:
Optional[str]
- class azure.ai.ml.entities.ComputeConfiguration(*, target: str | None = None, instance_count: int | None = None, is_local: bool | None = None, instance_type: str | None = None, location: str | None = None, properties: Dict[str, Any] | None = None, deserialize_properties: bool = False)[source]¶
Compute resource configuration.
- Parameters:
target (Optional[str]) – The compute target.
instance_count (Optional[int]) – The number of instances.
is_local (Optional[bool]) – Specifies if the compute will be on the local machine.
location (Optional[str]) – The location of the compute resource.
properties (Optional[Dict[str, Any]]) – The resource properties.
deserialize_properties (bool) – Specifies if property bag should be deserialized. Defaults to False.
- class azure.ai.ml.entities.ComputeInstance(*, name: str, description: str | None = None, size: str | None = None, tags: dict | None = None, ssh_public_access_enabled: bool | None = None, create_on_behalf_of: AssignedUserConfiguration | None = None, network_settings: NetworkSettings | None = None, ssh_settings: ComputeInstanceSshSettings | None = None, schedules: ComputeSchedules | None = None, identity: IdentityConfiguration | None = None, idle_time_before_shutdown: str | None = None, idle_time_before_shutdown_minutes: int | None = None, setup_scripts: SetupScripts | None = None, enable_node_public_ip: bool = True, custom_applications: List[CustomApplications] | None = None, enable_sso: bool = True, enable_root_access: bool = True, release_quota_on_stop: bool = False, enable_os_patching: bool = False, **kwargs: Any)[source]¶
Compute Instance resource.
- Parameters:
name (str) – Name of the compute.
location (Optional[str]) – The resource location.
description (Optional[str]) – Description of the resource.
size (Optional[str]) – Compute size.
tags (Optional[dict[str, str]]) – A set of tags. Contains resource tags defined as key/value pairs.
create_on_behalf_of (Optional[AssignedUserConfiguration]) – Configuration to create resource on behalf of another user. Defaults to None.
network_settings (Optional[NetworkSettings]) – Network settings for the compute instance.
ssh_settings (Optional[ComputeInstanceSshSettings]) – SSH settings for the compute instance.
ssh_public_access_enabled (Optional[bool]) –
State of the public SSH port. Defaults to None. Possible values are:
False - Indicates that the public SSH port is closed on all nodes of the cluster.
True - Indicates that the public SSH port is open on all nodes of the cluster.
None - Indicates that the public SSH port is closed on all nodes of the cluster if a VNet is defined, and open on all public nodes otherwise. This value can only be used during cluster creation; after creation, it will be either True or False.
schedules (Optional[ComputeSchedules]) – Compute instance schedules. Defaults to None.
identity (IdentityConfiguration) – The identities that are associated with the compute cluster.
idle_time_before_shutdown (Optional[str]) – Deprecated. Use the idle_time_before_shutdown_minutes parameter instead. Stops compute instance after user defined period of inactivity. Time is defined in ISO8601 format. Minimum is 15 minutes, maximum is 3 days.
idle_time_before_shutdown_minutes (Optional[int]) – Stops compute instance after a user defined period of inactivity in minutes. Minimum is 15 minutes, maximum is 3 days.
enable_node_public_ip (Optional[bool]) –
Enable or disable node public IP address provisioning. Defaults to True. Possible values are:
True - Indicates that the compute nodes will have public IPs provisioned.
False - Indicates that the compute nodes will have a private endpoint and no public IPs.
setup_scripts (Optional[SetupScripts]) – Details of customized scripts to execute for setting up the cluster.
custom_applications (Optional[List[CustomApplications]]) – List of custom applications and their endpoints for the compute instance.
enable_sso (bool) – Enable or disable single sign-on. Defaults to True.
enable_root_access (bool) – Enable or disable root access. Defaults to True.
release_quota_on_stop (bool) – Release quota on stop for the compute instance. Defaults to False.
enable_os_patching (bool) – Enable or disable OS patching for the compute instance. Defaults to False.
- Variables:
state – State of the resource.
last_operation – The last operation.
applications – Applications associated with the compute instance.
Example:
Creating a ComputeInstance object.

from azure.ai.ml.entities import ComputeInstance

ci = ComputeInstance(
    name=ci_name,
    size="Standard_DS2_v2",
)
ml_client.compute.begin_create_or_update(ci)
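The deprecated idle_time_before_shutdown parameter takes an ISO 8601 duration, while the preferred idle_time_before_shutdown_minutes takes plain minutes. A small helper (hypothetical, not part of the SDK) can convert between the two forms within the documented 15-minute to 3-day range:

```python
def minutes_to_iso8601(minutes: int) -> str:
    """Convert a minute count to an ISO 8601 duration string.

    The documented range for idle shutdown is 15 minutes to 3 days.
    """
    if not 15 <= minutes <= 3 * 24 * 60:
        raise ValueError("idle time must be between 15 minutes and 3 days")
    hours, mins = divmod(minutes, 60)
    parts = []
    if hours:
        parts.append(f"{hours}H")
    if mins:
        parts.append(f"{mins}M")
    return "PT" + "".join(parts)
```

For example, 90 minutes becomes "PT1H30M", the shape expected by the deprecated string parameter.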
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the compute content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this compute’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property created_on: str | None¶
The compute resource creation timestamp.
- Returns:
The compute resource creation timestamp.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property last_operation: Dict[str, str]¶
The last operation.
- Returns:
The last operation.
- Return type:
- property os_image_metadata: ImageMetadata¶
Metadata about the operating system image for this compute instance.
- Returns:
Operating system image metadata.
- Return type:
- property provisioning_errors: str | None¶
The compute resource provisioning errors.
- Returns:
The compute resource provisioning errors.
- Return type:
Optional[str]
- property provisioning_state: str | None¶
The compute resource’s provisioning state.
- Returns:
The compute resource’s provisioning state.
- Return type:
Optional[str]
- class azure.ai.ml.entities.ComputeInstanceSshSettings(*, ssh_key_value: str | None = None, **kwargs: Any)[source]¶
Credentials for an administrator user account to SSH into the compute node.
Can only be configured if ssh_public_access_enabled is set to True on the compute resource.
- Parameters:
ssh_key_value (Optional[str]) – The SSH public key of the administrator user account.
Example:
Configuring a ComputeInstanceSshSettings object.

from azure.ai.ml.entities import ComputeInstanceSshSettings

ssh_settings = ComputeInstanceSshSettings(
    ssh_key_value="ssh-rsa ABCDEFGHIJKLMNOPQRSTUVWXYZ administrator@MININT-2023"
)
- class azure.ai.ml.entities.ComputePowerAction(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
[Required] The compute power action.
- capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower case.
- casefold()¶
Return a version of the string suitable for caseless comparisons.
- center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
- count(sub[, start[, end]]) int¶
Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.
- encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
- encoding
The encoding in which to encode the string.
- errors
The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.
- endswith(suffix[, start[, end]]) bool¶
Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
- expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
- find(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- format(*args, **kwargs) str¶
Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).
- format_map(mapping) str¶
Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).
- index(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.
- isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.
- isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.
- isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.
- isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there is at least one character in the string.
- isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.
- islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.
- isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at least one character in the string.
- isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in repr() or if it is empty.
- isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.
- istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.
- isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.
- join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string. The result is returned as a new string.
Example: ‘.’.join([‘ab’, ‘pq’, ‘rs’]) -> ‘ab.pq.rs’
- ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
- lower()¶
Return a copy of the string converted to lowercase.
- lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
- static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
- partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
- removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
- removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
- replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
- count
Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are replaced.
- rfind(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- rindex(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
- rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings and the original string.
- rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f, and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
- rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f, and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the front of the string and works to the end.
Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
- splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and true.
- startswith(prefix[, start[, end]]) bool¶
Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.
- strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
- title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining cased characters have lower case.
- translate(table, /)¶
Replace each character in the string using the given translation table.
- table
Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.
- upper()¶
Return a copy of the string converted to uppercase.
- zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
- START = 'Start'¶
- STOP = 'Stop'¶
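ComputePowerAction is a string-valued enum, which is why the plain str methods listed above apply to its members. The behavior can be illustrated with a stdlib stand-in mirroring the documented values:

```python
from enum import Enum

class PowerAction(str, Enum):
    """Stand-in mirroring ComputePowerAction's documented values."""
    START = "Start"
    STOP = "Stop"

# Members compare equal to their string values, can be looked up by
# value, and support ordinary str methods such as upper() and lower().
```

In practice this means the enum member can be passed anywhere a plain "Start"/"Stop" string is expected.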
- class azure.ai.ml.entities.ComputeRuntime(*, spark_runtime_version: str | None = None)[source]¶
Spark compute runtime configuration.
- Keyword Arguments:
spark_runtime_version (Optional[str]) – Spark runtime version.
Example:
Creating a ComputeRuntime object.

from azure.ai.ml.entities import ComputeRuntime

compute_runtime = ComputeRuntime(spark_runtime_version="3.3.0")
- class azure.ai.ml.entities.ComputeSchedules(*, compute_start_stop: List[ComputeStartStopSchedule] | None = None)[source]¶
Compute schedules.
- Parameters:
compute_start_stop (List[ComputeStartStopSchedule]) – Compute start or stop schedules.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Creating a ComputeSchedules object.

from azure.ai.ml.constants import TimeZone
from azure.ai.ml.entities import ComputeSchedules, ComputeStartStopSchedule, CronTrigger

start_stop = ComputeStartStopSchedule(
    trigger=CronTrigger(
        expression="15 10 * * 1",
        start_time="2022-03-10 10:15:00",
        end_time="2022-06-10 10:15:00",
        time_zone=TimeZone.PACIFIC_STANDARD_TIME,
    )
)
compute_schedules = ComputeSchedules(compute_start_stop=[start_stop])
- class azure.ai.ml.entities.ComputeStartStopSchedule(*, trigger: CronTrigger | RecurrenceTrigger | None = None, action: ComputePowerAction | None = None, state: ScheduleStatus = ScheduleStatus.ENABLED, **kwargs: Any)[source]¶
Schedules for compute start or stop scenario.
- Parameters:
trigger (Union[CronTrigger, RecurrenceTrigger]) – The trigger of the schedule.
action (ComputePowerAction) – The compute power action.
state (ScheduleStatus) – The state of the schedule.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Creating a ComputeStartStopSchedule object.

from azure.ai.ml.constants import TimeZone
from azure.ai.ml.entities import ComputeSchedules, ComputeStartStopSchedule, CronTrigger

start_stop = ComputeStartStopSchedule(
    trigger=CronTrigger(
        expression="15 10 * * 1",
        start_time="2022-03-10 10:15:00",
        end_time="2022-06-10 10:15:00",
        time_zone=TimeZone.PACIFIC_STANDARD_TIME,
    )
)
compute_schedules = ComputeSchedules(compute_start_stop=[start_stop])
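The CronTrigger expressions in these examples use the standard five-field cron layout. Splitting one out (a plain-Python sketch, not SDK code) shows what "15 10 * * 1" means:

```python
def cron_fields(expression: str) -> dict:
    """Map a five-field cron expression to named fields."""
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    values = expression.split()
    if len(values) != len(names):
        raise ValueError("expected a five-field cron expression")
    return dict(zip(names, values))
```

Here cron_fields("15 10 * * 1") yields minute 15, hour 10, and day_of_week 1, i.e. every Monday at 10:15 in the trigger's time zone.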
- class azure.ai.ml.entities.ContainerRegistryCredential(*, location: str | None = None, username: str | None = None, passwords: List[str] | None = None)[source]¶
Credentials for the Azure Container Registry (ACR) associated with the given workspace.
- class azure.ai.ml.entities.CreatedByType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
The type of identity that created the resource.
- capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower case.
- casefold()¶
Return a version of the string suitable for caseless comparisons.
- center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
- count(sub[, start[, end]]) int¶
Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.
- encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
- encoding
The encoding in which to encode the string.
- errors
The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.
- endswith(suffix[, start[, end]]) bool¶
Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
- expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
- find(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- format(*args, **kwargs) str¶
Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).
- format_map(mapping) str¶
Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).
- index(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.
- isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.
- isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.
- isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.
- isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there is at least one character in the string.
- isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.
- islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.
- isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at least one character in the string.
- isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in repr() or if it is empty.
- isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.
- istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.
- isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.
- join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string. The result is returned as a new string.
Example: ‘.’.join([‘ab’, ‘pq’, ‘rs’]) -> ‘ab.pq.rs’
- ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
- lower()¶
Return a copy of the string converted to lowercase.
- lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
- static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
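A minimal sketch of the three-argument form described above (two equal-length strings plus a deletion string):

```python
# Build a table that maps 'a' -> 'x', 'b' -> 'y', and deletes 'c'.
table = str.maketrans("ab", "xy", "c")
print("abc cab".translate(table))  # -> "xy xy"
```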
- partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
- removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
- removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
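A quick sketch of the copy-on-miss behavior described above (removeprefix/removesuffix require Python 3.9+):

```python
# Suffix present: it is stripped.
print("example.tar.gz".removesuffix(".gz"))   # -> example.tar
# Prefix absent: an unchanged copy is returned, no exception.
print("example.tar.gz".removeprefix("lib"))   # -> example.tar.gz
```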
- replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
- count
Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are replaced.
- rfind(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- rindex(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
- rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings and the original string.
- rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
- rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the front of the string and works to the end.
Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
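A minimal sketch of the contrast drawn in the note above, splitting natural text with str.split versus the re module:

```python
import re

text = "Hello, world!  How are you?"
# str.split only breaks on whitespace, so punctuation sticks to the words.
print(text.split())            # -> ['Hello,', 'world!', 'How', 'are', 'you?']
# re.split on runs of non-word characters separates the words themselves
# (a trailing empty string appears because the text ends with punctuation).
print(re.split(r"\W+", text))  # -> ['Hello', 'world', 'How', 'are', 'you', '']
```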
- splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and true.
- startswith(prefix[, start[, end]]) bool¶
Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.
- strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
- title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining cased characters have lower case.
- translate(table, /)¶
Replace each character in the string using the given translation table.
- table
Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.
- upper()¶
Return a copy of the string converted to uppercase.
- zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
- APPLICATION = 'Application'¶
- KEY = 'Key'¶
- MANAGED_IDENTITY = 'ManagedIdentity'¶
- USER = 'User'¶
- class azure.ai.ml.entities.CronTrigger(*, expression: str, start_time: str | datetime | None = None, end_time: str | datetime | None = None, time_zone: str | TimeZone = TimeZone.UTC)[source]¶
Cron Trigger for a job schedule.
- Keyword Arguments:
expression (str) – The cron expression of schedule, following NCronTab format.
start_time (Optional[Union[str, datetime]]) – The start time for the trigger. If using a datetime object, leave the tzinfo as None and use the time_zone parameter to specify a time zone if needed. If using a string, use the format YYYY-MM-DDThh:mm:ss. Defaults to running the first workload instantly and continuing future workloads based on the schedule. If the start time is in the past, the first workload is run at the next calculated run time.
end_time (Optional[Union[str, datetime]]) – The end time for the trigger. If using a datetime object, leave the tzinfo as None and use the time_zone parameter to specify a time zone if needed. If using a string, use the format YYYY-MM-DDThh:mm:ss. Note that end_time is not supported for compute schedules.
time_zone (Union[str, TimeZone]) – The time zone where the schedule will run. Defaults to UTC (+00:00). Note that this applies to the start_time and end_time.
- Raises:
Exception – Raised if end_time is in the past.
Example:
Configuring a CronTrigger.¶
from datetime import datetime

from azure.ai.ml.constants import TimeZone
from azure.ai.ml.entities import CronTrigger

trigger = CronTrigger(
    expression="15 10 * * 1",
    start_time=datetime(year=2022, month=3, day=10, hour=10, minute=15),
    end_time=datetime(year=2022, month=6, day=10, hour=10, minute=15),
    time_zone=TimeZone.PACIFIC_STANDARD_TIME,
)
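The YYYY-MM-DDThh:mm:ss string form mentioned for start_time and end_time can be produced from a datetime with strftime; a minimal stdlib-only sketch:

```python
from datetime import datetime

# The same start time as in the CronTrigger example, rendered as the
# string format the schedule parameters accept.
start = datetime(year=2022, month=3, day=10, hour=10, minute=15)
print(start.strftime("%Y-%m-%dT%H:%M:%S"))  # -> 2022-03-10T10:15:00
```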
- class azure.ai.ml.entities.CustomApplications(*, name: str, image: ImageSettings, type: str = 'docker', endpoints: List[EndpointsSettings], environment_variables: Dict | None = None, bind_mounts: List[VolumeSettings] | None = None, **kwargs: Any)[source]¶
Specifies the custom service application configuration.
- Parameters:
name (str) – Name of the Custom Application.
image (ImageSettings) – Describes the Image Specifications.
type (Optional[str]) – Type of the Custom Application.
endpoints (List[EndpointsSettings]) – Configuring the endpoints for the container.
environment_variables (Optional[Dict[str, str]]) – Environment Variables for the container.
bind_mounts (Optional[List[VolumeSettings]]) – Configuration of the bind mounts for the container.
- class azure.ai.ml.entities.CustomInferencingServer(*, inference_configuration: OnlineInferenceConfiguration | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Custom inferencing configurations.
- Parameters:
inference_configuration (OnlineInferenceConfiguration) – The inference configuration of the inferencing server.
- Variables:
type – The type of the inferencing server.
- class azure.ai.ml.entities.CustomModelFineTuningJob(**kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
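The FileExistsError behavior described for dump mirrors Python's exclusive-create open mode ('x'); a minimal stdlib-only illustration of the same condition (not using the SDK):

```python
import tempfile
from pathlib import Path

# A fresh destination path: exclusive-create succeeds the first time.
dest = Path(tempfile.mkdtemp()) / "job.yaml"
with open(dest, "x") as f:
    f.write("display_name: my-job\n")

# A second exclusive-create on the same path raises FileExistsError,
# which is the same condition dump() reports for an existing file path.
try:
    open(dest, "x")
except FileExistsError:
    print("dest already exists")
```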
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property model: Input | None¶
The model to be fine-tuned.
- Returns:
Input object representing the MLflow model to be fine-tuned.
- Return type:
Input
- property queue_settings: QueueSettings | None¶
Queue settings for job execution.
- Returns:
QueueSettings object.
- Return type:
QueueSettings
- property resources: JobResources | None¶
Job resources to use during job execution.
- Returns:
JobResources object.
- Return type:
JobResources
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- property studio_url: str | None¶
Azure ML studio endpoint.
- Returns:
The URL to the job details page.
- Return type:
Optional[str]
- property task: str¶
Get finetuning task.
- Returns:
The type of task to run. Possible values include: “ChatCompletion”, “TextCompletion”, “TextClassification”, “QuestionAnswering”, “TextSummarization”, “TokenClassification”, “TextTranslation”, “ImageClassification”, “ImageInstanceSegmentation”, “ImageObjectDetection”, “VideoMultiObjectTracking”.
- Return type:
- class azure.ai.ml.entities.CustomMonitoringMetricThreshold(*, metric_name: str | None, threshold: float | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Custom monitoring metric threshold.
- class azure.ai.ml.entities.CustomMonitoringSignal(*, inputs: Dict[str, Input] | None = None, metric_thresholds: List[CustomMonitoringMetricThreshold], component_id: str, connection: Connection | None = None, input_data: Dict[str, ReferenceData] | None = None, alert_enabled: bool = False, properties: Dict[str, str] | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Custom monitoring signal.
- Variables:
type (str) – The type of the signal. Set to “custom” for this class.
- Keyword Arguments:
input_data (Optional[dict[str, ReferenceData]]) – A dictionary of input datasets for monitoring. Each key is the component input port name, and its value is the data asset.
metric_thresholds (List[CustomMonitoringMetricThreshold]) – A list of metrics to calculate and their associated thresholds.
component_id (str) – The ARM (Azure Resource Manager) ID of the component resource used to calculate the custom metrics.
connection (Optional[Connection]) – Specify a connection with environment variables and secret configs.
alert_enabled (bool) – Whether or not to enable alerts for the signal. Defaults to False.
properties (Optional[dict[str, str]]) – A dictionary of custom properties for the signal.
- class azure.ai.ml.entities.CustomerManagedKey(key_vault: str | None = None, key_uri: str | None = None, cosmosdb_id: str | None = None, storage_id: str | None = None, search_id: str | None = None)[source]¶
Key vault details for encrypting data with customer-managed keys.
- Parameters:
key_vault (str) – Key vault that is holding the customer-managed key.
key_uri (str) – URI for the customer-managed key.
cosmosdb_id (str) – ARM id of bring-your-own cosmosdb account that customer brings to store customer’s data with encryption.
storage_id (str) – ARM id of bring-your-own storage account that customer brings to store customer’s data with encryption.
search_id (str) – ARM id of bring-your-own search account that customer brings to store customer’s data with encryption.
Example:
Creating a CustomerManagedKey object.¶
from azure.ai.ml.entities import CustomerManagedKey, Workspace

cmk = CustomerManagedKey(
    key_vault="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.KeyVault/vaults/vault-name",
    key_uri="https://vault-name.vault.azure.net/keys/key-name/key-version",
)

# special bring your own scenario
byo_cmk = CustomerManagedKey(
    key_vault="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.KeyVault/vaults/vault-name",
    key_uri="https://vault-name.vault.azure.net/keys/key-name/key-version",
    cosmosdb_id="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.DocumentDB/databaseAccounts/cosmos-name",
    storage_id="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.Storage/storageAccounts/storage-name",
    search_id="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.Search/searchServices/search-name",
)

ws = Workspace(name="ws-name", location="eastus", display_name="My workspace", customer_managed_key=cmk)
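The bring-your-own resource IDs passed to CustomerManagedKey all follow the same ARM ID shape (/subscriptions/<id>/resourceGroups/<rg>/providers/<namespace>/<type>/<name>). A hedged, stdlib-only sanity-check sketch — the regex is an illustration, not the SDK's own validation:

```python
import re

# Rough ARM resource ID shape; intentionally loose, for illustration only.
ARM_ID = re.compile(
    r"^/subscriptions/[^/]+/resourceGroups/[^/]+/providers/[^/]+(/[^/]+)+$"
)

key_vault = (
    "/subscriptions/00000000-1111-2222-3333-444444444444"
    "/resourceGroups/test-rg/providers/Microsoft.KeyVault/vaults/vault-name"
)
print(bool(ARM_ID.match(key_vault)))  # -> True
print(bool(ARM_ID.match("not-an-arm-id")))  # -> False
```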
- class azure.ai.ml.entities.Data(*, name: str | None = None, version: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, path: str | None = None, type: str = 'uri_folder', **kwargs: Any)[source]¶
Data for training and scoring.
- Parameters:
name (str) – Name of the resource.
version (str) – Version of the resource.
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
path (str) – The path to the asset on the datastore. This can be local or remote.
type (Literal[AssetTypes.URI_FILE, AssetTypes.URI_FOLDER, AssetTypes.MLTABLE]) – The type of the asset. Valid values are uri_file, uri_folder, mltable. Defaults to uri_folder.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.DataAsset(*, data_id: str | None = None, name: str | None = None, path: str | None = None, version: int | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Data Asset entity
- class azure.ai.ml.entities.DataAvailabilityStatus(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
DataAvailabilityStatus.
- COMPLETE = 'Complete'¶
- INCOMPLETE = 'Incomplete'¶
- NONE = 'None'¶
- PENDING = 'Pending'¶
- class azure.ai.ml.entities.DataCollector(collections: Dict[str, DeploymentCollection], *, rolling_rate: str | None = None, sampling_rate: float | None = None, request_logging: RequestLogging | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Data Capture deployment entity.
- Parameters:
collections (Mapping[str, DeploymentCollection]) – Mapping dictionary of strings mapped to DeploymentCollection entities.
rolling_rate (str) – The rolling rate of mdc files. Possible values: “minute”, “hour”, “day”.
sampling_rate (float) – The sampling rate of mdc files, in the range [0.0, 1.0].
request_logging (RequestLogging) – Logging of request payload parameters.
- class azure.ai.ml.entities.DataColumn(*, name: str, type: str | DataColumnType | None = None, **kwargs: Any)[source]¶
A dataframe column.
- Parameters:
name (str) – The column name
type (Optional[Union[str, DataColumnType]]) – The column data type. Defaults to None.
kwargs (dict) – A dictionary of additional configuration parameters.
- Raises:
ValidationException – Raised if type is specified and is not a valid DataColumnType or str.
Example:
Using DataColumn when creating an index column for a feature store entity¶
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity

account_column = DataColumn(name="accountID", type=DataColumnType.STRING)
account_entity_config = FeatureStoreEntity(
    name="account",
    version="1",
    index_columns=[account_column],
    stage="Development",
    description="This entity represents user account index key accountID.",
    tags={"data_type": "nonPII"},
)

# wait for featurestore entity creation
fs_entity_poller = featurestore_client.feature_store_entities.begin_create_or_update(account_entity_config)
print(fs_entity_poller.result())
- class azure.ai.ml.entities.DataColumnType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Dataframe Column Type Enum
Example:
Using DataColumnType when instantiating a DataColumn¶
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity

account_column = DataColumn(name="accountID", type=DataColumnType.STRING)
account_entity_config = FeatureStoreEntity(
    name="account",
    version="1",
    index_columns=[account_column],
    stage="Development",
    description="This entity represents user account index key accountID.",
    tags={"data_type": "nonPII"},
)

# wait for featurestore entity creation
fs_entity_poller = featurestore_client.feature_store_entities.begin_create_or_update(account_entity_config)
print(fs_entity_poller.result())
ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.
- isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.
- isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there is at least one character in the string.
- isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.
- islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.
- isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at least one character in the string.
- isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in repr() or if it is empty.
- isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.
- istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.
- isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.
- join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string. The result is returned as a new string.
Example: ‘.’.join([‘ab’, ‘pq’, ‘rs’]) -> ‘ab.pq.rs’
- ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
- lower()¶
Return a copy of the string converted to lowercase.
- lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
- static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
- partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
- removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
- removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
- replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
- count
Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are replaced.
- rfind(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- rindex(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
- rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings and the original string.
- rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f, and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
- rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f, and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the front of the string and works to the end.
Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
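As the note above suggests, str.split suits intentionally delimited data, while a regular expression handles natural text with punctuation more gracefully. A quick generic Python illustration (not part of the SDK):

```python
import re

# str.split works well for intentionally delimited data; empty fields are preserved:
fields = "a,b,,c".split(",")
print(fields)  # ['a', 'b', '', 'c']

# For natural text with punctuation, a regex extracts words more cleanly:
words = re.findall(r"[A-Za-z]+", "Hello, world! Hello again.")
print(words)  # ['Hello', 'world', 'Hello', 'again']
```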
- splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and true.
- startswith(prefix[, start[, end]]) bool¶
Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.
- strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
- title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining cased characters have lower case.
- translate(table, /)¶
Replace each character in the string using the given translation table.
- table
Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.
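A small generic illustration of maketrans and translate used together (plain Python, not SDK-specific):

```python
# Dictionary form: map characters to replacements, or to None for deletion.
table = str.maketrans({"a": "4", "e": "3", " ": None})
print("data lake".translate(table))  # d4t4l4k3

# Two-string form: each character in the first string maps to the character
# at the same position in the second.
swap = str.maketrans("01", "10")
print("1001".translate(swap))  # 0110
```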
- upper()¶
Return a copy of the string converted to uppercase.
- zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
- BINARY = 'binary'¶
- BOOLEAN = 'boolean'¶
- DATETIME = 'datetime'¶
- DOUBLE = 'double'¶
- FLOAT = 'float'¶
- INTEGER = 'integer'¶
- LONG = 'long'¶
- STRING = 'string'¶
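The string methods documented above are inherited because DataColumnType mixes in str. A minimal stand-in sketch of that pattern (a local enum defined purely for illustration, not the actual SDK class):

```python
from enum import Enum

class ColumnType(str, Enum):
    """Illustrative stand-in mirroring DataColumnType's string-valued members."""
    STRING = "string"
    INTEGER = "integer"
    DOUBLE = "double"

# Members compare equal to their plain string values...
print(ColumnType.STRING == "string")  # True
# ...and inherit str methods such as upper():
print(ColumnType.STRING.upper())      # STRING
```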
- class azure.ai.ml.entities.DataDriftMetricThreshold(*, data_type: MonitorFeatureType | None = None, threshold: float | None = None, metric: str | None = None, numerical: NumericalDriftMetrics | None = None, categorical: CategoricalDriftMetrics | None = None)[source]¶
Data drift metric threshold
- Parameters:
numerical – Numerical drift metrics
categorical – Categorical drift metrics
- class azure.ai.ml.entities.DataDriftSignal(*, production_data: ProductionData | None = None, reference_data: ReferenceData | None = None, features: List[str] | MonitorFeatureFilter | Literal['all_features'] | None = None, feature_type_override: Dict[str, str | MonitorFeatureDataType] | None = None, metric_thresholds: DataDriftMetricThreshold | List[MetricThreshold] | None = None, alert_enabled: bool = False, data_segment: DataSegment | None = None, properties: Dict[str, str] | None = None)[source]¶
Data drift signal.
- Variables:
type (str) – The type of the signal, set to “data_drift” for this class.
- Parameters:
production_data – The data for which drift will be calculated
reference_data – The data to calculate drift against
metric_thresholds – Metrics to calculate and their associated thresholds
alert_enabled – The current notification mode for this signal
data_segment – The data segment used for scoping on a subset of the data population.
feature_type_override – Dictionary of features and what they should be overridden to.
properties – Dictionary of additional properties.
- Keyword Arguments:
features (Union[List[str], MonitorFeatureFilter, Literal['all_features']]) – The feature filter identifying which feature(s) to calculate drift over.
- class azure.ai.ml.entities.DataImport(*, name: str, path: str, source: Database | FileSystem, version: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Data asset created by a data import job.
- Parameters:
name (str) – Name of the asset.
path (str) – The path to the asset being created by data import job.
source (Union[Database, FileSystem]) – The source of the asset data being copied from.
version (str) – Version of the resource.
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
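The documented dest handling (a new file for a path, a direct write for an open stream) can be sketched with a simplified stand-in helper. This illustrates the behavior described above; it is not the SDK's implementation:

```python
import os
from typing import IO, Union

def dump_to(content: str, dest: Union[str, os.PathLike, IO[str]]) -> None:
    """Write content to dest, mimicking the documented dump() semantics."""
    if isinstance(dest, (str, os.PathLike)):
        # A file path: create a new file; mode "x" raises FileExistsError
        # if the file already exists, matching the documented behavior.
        with open(dest, "x", encoding="utf-8") as f:
            f.write(content)
    else:
        # An already-open stream is written to directly; a non-writable
        # stream raises an error, matching the documented IOError case.
        dest.write(content)
```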
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.DataQualityMetricThreshold(*, data_type: MonitorFeatureType | None = None, threshold: float | None = None, metric_name: str | None = None, numerical: DataQualityMetricsNumerical | None = None, categorical: DataQualityMetricsCategorical | None = None)[source]¶
Data quality metric threshold
- Parameters:
numerical – Numerical data quality metrics
categorical – Categorical data quality metrics
- class azure.ai.ml.entities.DataQualityMetricsCategorical(*, null_value_rate: float | None = None, data_type_error_rate: float | None = None, out_of_bounds_rate: float | None = None)[source]¶
Data Quality Categorical Metrics
- Parameters:
null_value_rate – The null value rate
data_type_error_rate – The data type error rate
out_of_bounds_rate – The out of bounds rate
- class azure.ai.ml.entities.DataQualityMetricsNumerical(*, null_value_rate: float | None = None, data_type_error_rate: float | None = None, out_of_bounds_rate: float | None = None)[source]¶
Data Quality Numerical Metrics
- Parameters:
null_value_rate – The null value rate
data_type_error_rate – The data type error rate
out_of_bounds_rate – The out of bounds rate
- class azure.ai.ml.entities.DataQualitySignal(*, production_data: ProductionData | None = None, reference_data: ReferenceData | None = None, features: List[str] | MonitorFeatureFilter | Literal['all_features'] | None = None, feature_type_override: Dict[str, str | MonitorFeatureDataType] | None = None, metric_thresholds: MetricThreshold | List[MetricThreshold] | None = None, alert_enabled: bool = False, properties: Dict[str, str] | None = None)[source]¶
Data quality signal
- Variables:
type (str) – The type of the signal. Set to “data_quality” for this class.
- Parameters:
production_data – The data for which drift will be calculated
reference_data – The data to calculate drift against
metric_thresholds – Metrics to calculate and their associated thresholds
alert_enabled – The current notification mode for this signal
feature_type_override – Dictionary of features and what they should be overridden to.
properties – Dictionary of additional properties.
- Keyword Arguments:
features (Union[List[str], MonitorFeatureFilter, Literal['all_features']]) – The feature filter identifying which feature(s) to calculate drift over.
- class azure.ai.ml.entities.DataSegment(*, feature_name: str | None = None, feature_values: List[str] | None = None)[source]¶
Data segment for monitoring.
- class azure.ai.ml.entities.Datastore(credentials: ServicePrincipalConfiguration | CertificateConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | SasTokenConfiguration | None, name: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Abstract base class for a datastore of an Azure ML workspace.
- Parameters:
name (str) – Name of the datastore.
description (str) – Description of the resource.
credentials – Credentials to use for Azure ML workspace to connect to the storage.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the datastore content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this datastore’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.DefaultActionType[source]¶
Specifies the default action when no IP rules are matched.
- ALLOW = 'Allow'¶
- DENY = 'Deny'¶
- class azure.ai.ml.entities.DefaultScaleSettings(**kwargs: Any)[source]¶
Default scale settings.
- Variables:
type (str) – Default scale settings type. Set automatically to “default” for this class.
- class azure.ai.ml.entities.Deployment(name: str | None = None, *, endpoint_name: str | None = None, description: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, Any] | None = None, model: str | Model | None = None, code_configuration: CodeConfiguration | None = None, environment: str | Environment | None = None, environment_variables: Dict[str, str] | None = None, code_path: str | PathLike | None = None, scoring_script: str | PathLike | None = None, **kwargs: Any)[source]¶
Endpoint Deployment base class.
- Parameters:
name (Optional[str]) – Name of the deployment resource, defaults to None
- Keyword Arguments:
endpoint_name (Optional[str]) – Name of the Endpoint resource, defaults to None
description (Optional[str]) – Description of the deployment resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated, defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
model (Optional[Union[str, Model]]) – The Model entity, defaults to None
code_configuration (Optional[CodeConfiguration]) – Code Configuration, defaults to None
environment (Optional[Union[str, Environment]]) – The Environment entity, defaults to None
environment_variables (Optional[Dict[str, str]]) – Environment variables that will be set in deployment, defaults to None
code_path (Optional[Union[str, PathLike]]) – Folder path to local code assets. Equivalent to code_configuration.code.path , defaults to None
scoring_script (Optional[Union[str, PathLike]]) – Scoring script name. Equivalent to code_configuration.code.scoring_script , defaults to None
- Raises:
ValidationException – Raised if Deployment cannot be successfully validated. Exception details will be provided in the error message.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the deployment content into a file in yaml format.
- Parameters:
dest (Union[os.PathLike, str, IO[AnyStr]]) – The destination to receive this deployment’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property code_path: str | PathLike | None¶
The code directory containing the scoring script.
- Return type:
Union[str, PathLike]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.DeploymentCollection(*, enabled: str | None = None, data: str | DataAsset | None = None, client_id: str | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Collection entity
- class azure.ai.ml.entities.DiagnoseRequestProperties(*, udr: Dict[str, Any] | None = None, nsg: Dict[str, Any] | None = None, resource_lock: Dict[str, Any] | None = None, dns_resolution: Dict[str, Any] | None = None, storage_account: Dict[str, Any] | None = None, key_vault: Dict[str, Any] | None = None, container_registry: Dict[str, Any] | None = None, application_insights: Dict[str, Any] | None = None, others: Dict[str, Any] | None = None)[source]¶
DiagnoseRequestProperties.
- class azure.ai.ml.entities.DiagnoseResponseResult(*, value: DiagnoseResponseResultValue | None = None)[source]¶
DiagnoseResponseResult.
- class azure.ai.ml.entities.DiagnoseResponseResultValue(*, user_defined_route_results: List[DiagnoseResult] | None = None, network_security_rule_results: List[DiagnoseResult] | None = None, resource_lock_results: List[DiagnoseResult] | None = None, dns_resolution_results: List[DiagnoseResult] | None = None, storage_account_results: List[DiagnoseResult] | None = None, key_vault_results: List[DiagnoseResult] | None = None, container_registry_results: List[DiagnoseResult] | None = None, application_insights_results: List[DiagnoseResult] | None = None, other_results: List[DiagnoseResult] | None = None)[source]¶
DiagnoseResponseResultValue.
- class azure.ai.ml.entities.DiagnoseResult(*, code: str | None = None, level: str | None = None, message: str | None = None)[source]¶
Result of Diagnose.
- class azure.ai.ml.entities.DiagnoseWorkspaceParameters(*, value: DiagnoseRequestProperties | None = None)[source]¶
Parameters to diagnose a workspace.
- class azure.ai.ml.entities.Endpoint(auth_mode: str | None = None, location: str | None = None, name: str | None = None, tags: Dict[str, str] | None = None, properties: Dict[str, Any] | None = None, description: str | None = None, **kwargs: Any)[source]¶
Endpoint base class.
- Parameters:
auth_mode (str) – The authentication mode, defaults to None
location (str) – The location of the endpoint, defaults to None
name (str) – Name of the resource.
tags (Optional[Dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated.
properties (Optional[Dict[str, str]]) – The asset property dictionary.
- abstract dump(dest: str | PathLike | IO | None = None, **kwargs: Any) Dict[source]¶
Dump the object content into a file.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.EndpointAadToken(obj: AccessToken)[source]¶
Endpoint AAD token.
Constructor for Endpoint AAD token.
- Parameters:
obj (AccessToken) – Access token object
- class azure.ai.ml.entities.EndpointAuthKeys(**kwargs: Any)[source]¶
Keys for endpoint authentication.
Constructor for keys for endpoint authentication.
- class azure.ai.ml.entities.EndpointAuthToken(**kwargs: Any)[source]¶
Endpoint authentication token.
Constructor for Endpoint authentication token.
- class azure.ai.ml.entities.EndpointConnection(subscription_id: str, resource_group: str, vnet_name: str, subnet_name: str, location: str | None = None)[source]¶
Private Endpoint Connection related to a workspace private endpoint.
- class azure.ai.ml.entities.EndpointsSettings(*, target: int, published: int)[source]¶
Specifies an endpoint configuration for a Custom Application.
- class azure.ai.ml.entities.Environment(*, name: str | None = None, version: str | None = None, description: str | None = None, image: str | None = None, build: BuildContext | None = None, conda_file: str | PathLike | Dict | None = None, tags: Dict | None = None, properties: Dict | None = None, datastore: str | None = None, **kwargs: Any)[source]¶
Environment for training.
- Parameters:
name (str) – Name of the resource.
version (str) – Version of the asset.
description (str) – Description of the resource.
image (str) – URI of a custom base image.
build (BuildContext) – Docker build context to create the environment. Mutually exclusive with “image”
conda_file (Union[str, os.PathLike]) – Path to configuration file listing conda packages to install.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
datastore (str) – The datastore to upload the local artifact to.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Create an Environment object.¶

from azure.ai.ml.entities._assets.environment import Environment

environment = Environment(
    name="env-name",
    version="2.0",
    description="env-description",
    image="env-image",
    conda_file="./sdk/ml/azure-ai-ml/tests/test_configs/deployments/model-1/environment/conda.yml",
    tags={"tag1": "value1", "tag2": "value2"},
    properties={"prop1": "value1", "prop2": "value2"},
    datastore="datastore",
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- validate() None[source]¶
Validate the environment by checking its name, image, and build.
Example:
Validate environment example.¶

from azure.ai.ml.entities import BuildContext, Environment

env_docker_context = Environment(
    build=BuildContext(
        path="./sdk/ml/azure-ai-ml/tests/test_configs/environment/environment_files",
        dockerfile_path="DockerfileNonDefault",
    ),
    name="create-environment",
    version="2.0",
    description="Environment created from a Docker context.",
)
env_docker_context.validate()
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property conda_file: str | PathLike | Dict | None¶
Conda environment specification.
- Returns:
Conda dependencies loaded from conda_file param.
- Return type:
Optional[Union[str, os.PathLike]]
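For reference, conda_file typically points at a standard conda environment specification. A hedged, illustrative fragment (the environment name and package versions are examples, not taken from the SDK docs):

```yaml
# Illustrative conda environment file; names and versions are examples.
name: example-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - azure-ai-ml
```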
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.FADProductionData(*, input_data: Input, data_context: MonitorDatasetContext | None = None, data_column_names: Dict[str, str] | None = None, pre_processing_component: str | None = None, data_window: BaselineDataRange | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Feature Attribution Production Data
- Keyword Arguments:
input_data (Input) – Input data used by the monitor.
data_context (MonitorDatasetContext) – The context of the input dataset. Accepted values are “model_inputs”, “model_outputs”, “training”, “test”, “validation”, and “ground_truth”.
data_column_names (Dict[str, str]) – The names of the columns in the input data.
pre_processing_component (string) – The ARM (Azure Resource Manager) resource ID of the component resource used to preprocess the data.
- Parameters:
data_window (BaselineDataRange) – The number of days or a time frame that a signal monitor looks back over the target.
- class azure.ai.ml.entities.Feature(*, name: str, data_type: DataColumnType, description: str | None = None, tags: Dict[str, str] | None = None, **kwargs: Any)[source]¶
- Parameters:
name (str) – The name of the feature.
data_type (DataColumnType) – The data type of the feature.
description (Optional[str]) – The description of the feature. Defaults to None.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
kwargs (dict) – A dictionary of additional configuration parameters.
- class azure.ai.ml.entities.FeatureAttributionDriftMetricThreshold(*, normalized_discounted_cumulative_gain: float | None = None, threshold: float | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Feature attribution drift metric threshold
- Parameters:
normalized_discounted_cumulative_gain – The threshold value for metric.
- class azure.ai.ml.entities.FeatureAttributionDriftSignal(*, production_data: List[FADProductionData] | None = None, reference_data: ReferenceData, metric_thresholds: FeatureAttributionDriftMetricThreshold, alert_enabled: bool = False, properties: Dict[str, str] | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Feature attribution drift signal
- Variables:
type (str) – The type of the signal. Set to “feature_attribution_drift” for this class.
- Keyword Arguments:
production_data (List[FADProductionData]) – The data for which drift will be calculated.
reference_data (ReferenceData) – The data to calculate drift against.
metric_thresholds (FeatureAttributionDriftMetricThreshold) – Metrics to calculate and their associated thresholds.
alert_enabled (bool) – Whether or not to enable alerts for the signal. Defaults to False.
- class azure.ai.ml.entities.FeatureSet(*, name: str, version: str, entities: List[str], specification: FeatureSetSpecification | None, stage: str | None = 'Development', description: str | None = None, materialization_settings: MaterializationSettings | None = None, tags: Dict | None = None, **kwargs: Any)[source]¶
Feature Set
- Parameters:
name (str) – The name of the Feature Set resource.
version (str) – The version of the Feature Set resource.
entities (list[str]) – Specifies the list of entities of the feature set.
specification (FeatureSetSpecification) – Specifies the feature set spec details.
stage (Optional[str]) – Feature set stage. Allowed values: Development, Production, Archived. Defaults to Development.
description (Optional[str]) – The description of the Feature Set resource. Defaults to None.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
materialization_settings (Optional[MaterializationSettings]) – Specifies the materialization settings. Defaults to None.
kwargs (dict) – A dictionary of additional configuration parameters.
- Raises:
ValidationException – Raised if stage is specified and is not valid.
Example:
Instantiating a Feature Set object¶
from azure.ai.ml.entities import FeatureSet, FeatureSetSpecification

transaction_fset_config = FeatureSet(
    name="transactions",
    version="1",
    description="7-day and 3-day rolling aggregation of transactions featureset",
    entities=["azureml:account:1"],
    stage="Development",
    specification=FeatureSetSpecification(path="../azure-ai-ml/tests/test_configs/feature_set/code_sample/"),
    tags={"data_type": "nonPII"},
)
feature_set_poller = featurestore_client.feature_sets.begin_create_or_update(transaction_fset_config)
print(feature_set_poller.result())
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.FeatureSetBackfillMetadata(*, job_ids: List[str] | None = None, type: str | None = None, **kwargs: Any)[source]¶
Feature Set Backfill Metadata
- class azure.ai.ml.entities.FeatureSetBackfillRequest(*, name: str, version: str, feature_window: FeatureWindow | None = None, description: str | None = None, tags: Dict[str, str] | None = None, resource: MaterializationComputeResource | None = None, spark_configuration: Dict[str, str] | None = None, data_status: List[str] | None = None, job_id: str | None = None, **kwargs: Any)[source]¶
Feature Set Backfill Request
- Parameters:
name (str) – The name of the backfill job request.
version (str) – The version of the backfill job request.
feature_window (FeatureWindow) – The time window for the feature set backfill request.
description (Optional[str]) – The description of the backfill job request. Defaults to None.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
spark_configuration (Optional[dict[str, str]]) – Specifies the spark configuration. Defaults to None.
- Keyword Arguments:
resource (Optional[MaterializationComputeResource]) – The compute resource settings. Defaults to None.
- class azure.ai.ml.entities.FeatureSetMaterializationMetadata(*, type: MaterializationType | None, feature_window_start_time: datetime | None, feature_window_end_time: datetime | None, name: str | None, display_name: str | None, creation_context: SystemData | None, duration: timedelta | None, status: str | None, tags: Dict[str, str] | None, **kwargs: Any)[source]¶
Feature Set Materialization Metadata
- Parameters:
type (MaterializationType) – The type of the materialization job.
feature_window_start_time (Optional[datetime]) – The feature window start time for the feature set materialization job.
feature_window_end_time (Optional[datetime]) – The feature window end time for the feature set materialization job.
name (Optional[str]) – The name of the feature set materialization job.
display_name (Optional[str]) – The display name for the feature set materialization job.
creation_context (Optional[SystemData]) – The creation context of the feature set materialization job.
duration (Optional[timedelta]) – The current time elapsed for the feature set materialization job.
status (Optional[str]) – The status of the feature set materialization job.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated.
kwargs (dict) – A dictionary of additional configuration parameters.
- class azure.ai.ml.entities.FeatureSetSpecification(*, path: str | PathLike | None = None, **kwargs: Any)[source]¶
Feature Set Specification
Example:
Using Feature Set Spec to create Feature Set¶
from azure.ai.ml.entities import FeatureSet, FeatureSetSpecification

transaction_fset_config = FeatureSet(
    name="transactions",
    version="1",
    description="7-day and 3-day rolling aggregation of transactions featureset",
    entities=["azureml:account:1"],
    stage="Development",
    specification=FeatureSetSpecification(path="../azure-ai-ml/tests/test_configs/feature_set/code_sample/"),
    tags={"data_type": "nonPII"},
)
feature_set_poller = featurestore_client.feature_sets.begin_create_or_update(transaction_fset_config)
print(feature_set_poller.result())
- Parameters:
path (Optional[Union[str, PathLike]]) – Specifies the spec path.
- class azure.ai.ml.entities.FeatureStore(*, name: str, compute_runtime: ComputeRuntime | None = None, offline_store: MaterializationStore | None = None, online_store: MaterializationStore | None = None, materialization_identity: ManagedIdentityConfiguration | None = None, description: str | None = None, tags: Dict[str, str] | None = None, display_name: str | None = None, location: str | None = None, resource_group: str | None = None, hbi_workspace: bool = False, storage_account: str | None = None, container_registry: str | None = None, key_vault: str | None = None, application_insights: str | None = None, customer_managed_key: CustomerManagedKey | None = None, image_build_compute: str | None = None, public_network_access: str | None = None, identity: IdentityConfiguration | None = None, primary_user_assigned_identity: str | None = None, managed_network: ManagedNetwork | None = None, **kwargs: Any)[source]¶
Feature Store
- Parameters:
name (str) – The name of the feature store.
compute_runtime (Optional[ComputeRuntime]) – The compute runtime of the feature store. Defaults to None.
offline_store (Optional[MaterializationStore]) – The offline store for feature store. materialization_identity is required when offline_store is passed. Defaults to None.
online_store (Optional[MaterializationStore]) – The online store for feature store. materialization_identity is required when online_store is passed. Defaults to None.
materialization_identity (Optional[ManagedIdentityConfiguration]) – The identity used for materialization. Defaults to None.
description (Optional[str]) – The description of the feature store. Defaults to None.
tags (dict) – Tags of the feature store.
display_name (Optional[str]) – The display name for the feature store. This is non-unique within the resource group. Defaults to None.
location (Optional[str]) – The location to create the feature store in. If not specified, the same location as the resource group will be used. Defaults to None.
resource_group (Optional[str]) – The name of the resource group to create the feature store in. Defaults to None.
hbi_workspace (Optional[bool]) – Boolean for whether the customer data is of high business impact (HBI), containing sensitive business information. Defaults to False. For more information, see https://docs.microsoft.com/azure/machine-learning/concept-data-encryption#encryption-at-rest.
storage_account (Optional[str]) – The resource ID of an existing storage account to use instead of creating a new one. Defaults to None.
container_registry (Optional[str]) – The resource ID of an existing container registry to use instead of creating a new one. Defaults to None.
key_vault (Optional[str]) – The resource ID of an existing key vault to use instead of creating a new one. Defaults to None.
application_insights (Optional[str]) – The resource ID of an existing application insights to use instead of creating a new one. Defaults to None.
customer_managed_key (Optional[CustomerManagedKey]) – The key vault details for encrypting data with customer-managed keys. If not specified, Microsoft-managed keys will be used by default. Defaults to None.
image_build_compute (Optional[str]) – The name of the compute target to use for building environment Docker images when the container registry is behind a VNet. Defaults to None.
public_network_access (Optional[str]) – Whether to allow public endpoint connectivity when a workspace is private link enabled. Defaults to None.
identity (Optional[IdentityConfiguration]) – The workspace’s Managed Identity (user assigned, or system assigned). Defaults to None.
primary_user_assigned_identity (Optional[str]) – The workspace’s primary user assigned identity. Defaults to None.
managed_network (Optional[ManagedNetwork]) – The workspace’s Managed Network configuration. Defaults to None.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Instantiating a Feature Store object¶
from azure.ai.ml.entities import FeatureStore

featurestore_name = "my-featurestore"
featurestore_location = "eastus"
featurestore = FeatureStore(name=featurestore_name, location=featurestore_location)

# wait for featurestore creation
fs_poller = ml_client.feature_stores.begin_create(featurestore, update_dependent_resources=True)
print(fs_poller.result())
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the workspace spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this workspace’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property discovery_url: str | None¶
Backend service base URLs for the workspace.
- Returns:
Backend service URLs of the workspace
- Return type:
- class azure.ai.ml.entities.FeatureStoreEntity(*, name: str, version: str, index_columns: List[DataColumn], stage: str | None = 'Development', description: str | None = None, tags: Dict[str, str] | None = None, **kwargs: Any)[source]¶
Feature Store Entity
- Parameters:
name (str) – The name of the feature store entity resource.
version (str) – The version of the feature store entity resource.
index_columns (list[DataColumn]) – Specifies index columns of the feature-store entity resource.
stage (Optional[str]) – The feature store entity stage. Allowed values: Development, Production, Archived. Defaults to “Development”.
description (Optional[str]) – The description of the feature store entity resource. Defaults to None.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
kwargs (dict) – A dictionary of additional configuration parameters.
- Raises:
ValidationException – Raised if stage is specified and is not valid.
Example:
Configuring a Feature Store Entity¶
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity

account_column = DataColumn(name="accountID", type=DataColumnType.STRING)
account_entity_config = FeatureStoreEntity(
    name="account",
    version="1",
    index_columns=[account_column],
    stage="Development",
    description="This entity represents user account index key accountID.",
    tags={"data_type": "nonPII"},
)
# wait for featurestore entity creation
fs_entity_poller = featurestore_client.feature_store_entities.begin_create_or_update(account_entity_config)
print(fs_entity_poller.result())
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.FeatureStoreSettings(*, compute_runtime: ComputeRuntime | None = None, offline_store_connection_name: str | None = None, online_store_connection_name: str | None = None)[source]¶
Feature Store Settings
- Parameters:
compute_runtime (Optional[ComputeRuntime]) – The Spark compute runtime settings. Defaults to None.
offline_store_connection_name (Optional[str]) – The offline store connection name. Defaults to None.
online_store_connection_name (Optional[str]) – The online store connection name. Defaults to None.
Example:
Instantiating FeatureStoreSettings¶
from azure.ai.ml.entities import ComputeRuntime, FeatureStoreSettings

offline_store_target = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/"
    f"Microsoft.Storage/storageAccounts/{storage_account_name}/blobServices/default/containers/{storage_file_system_name}"
)
online_store_target = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/"
    f"Microsoft.Cache/Redis/{redis_cache_name}"
)
FeatureStoreSettings(
    compute_runtime=ComputeRuntime(spark_runtime_version="3.3.0"),
    offline_store_connection_name=offline_store_target,
    online_store_connection_name=online_store_target,
)
- class azure.ai.ml.entities.FeatureWindow(*, feature_window_start: datetime, feature_window_end: datetime, **kwargs: Any)[source]¶
Feature window
- Keyword Arguments:
feature_window_start (datetime) – Specifies the feature window start time.
feature_window_end (datetime) – Specifies the feature window end time.
- class azure.ai.ml.entities.FixedInputData(*, data_context: MonitorDatasetContext | None = None, target_columns: Dict | None = None, job_type: str | None = None, uri: str | None = None)[source]¶
- Variables:
type (MonitorInputDataType) – Specifies the type of monitoring input data. Set automatically to “Fixed” for this class.
- class azure.ai.ml.entities.FqdnDestination(*, name: str, destination: str, **kwargs: Any)[source]¶
Class representing a FQDN outbound rule.
- Parameters:
name (str) – Name of the outbound rule.
destination (str) – The fully qualified domain name (FQDN) destination to allow for outbound traffic.
- Variables:
type (str) – Type of the outbound rule. Set to “FQDN” for this class.
Creating a FqdnDestination outbound rule object.¶
from azure.ai.ml.entities import FqdnDestination

# Example FQDN rule
pypirule = FqdnDestination(name="rulename", destination="pypi.org")
- class azure.ai.ml.entities.GenerationSafetyQualityMonitoringMetricThreshold(*, groundedness: Dict[str, float] | None = None, relevance: Dict[str, float] | None = None, coherence: Dict[str, float] | None = None, fluency: Dict[str, float] | None = None, similarity: Dict[str, float] | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Generation safety quality metric threshold
- Parameters:
groundedness (Optional[Dict[str, float]]) – The groundedness metric threshold.
relevance (Optional[Dict[str, float]]) – The relevance metric threshold.
coherence (Optional[Dict[str, float]]) – The coherence metric threshold.
fluency (Optional[Dict[str, float]]) – The fluency metric threshold.
similarity (Optional[Dict[str, float]]) – The similarity metric threshold.
- class azure.ai.ml.entities.GenerationSafetyQualitySignal(*, production_data: List[LlmData] | None = None, connection_id: str | None = None, metric_thresholds: GenerationSafetyQualityMonitoringMetricThreshold, alert_enabled: bool = False, properties: Dict[str, str] | None = None, sampling_rate: float | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Generation Safety Quality monitoring signal.
- Variables:
type (str) – The type of the signal. Set to “generationsafetyquality” for this class.
- Keyword Arguments:
production_data (List[LlmData]) – A list of input datasets for monitoring.
metric_thresholds (GenerationSafetyQualityMonitoringMetricThreshold) – Metrics to calculate and their associated thresholds.
alert_enabled (bool) – Whether or not to enable alerts for the signal. Defaults to False.
connection_id (str) – Gets or sets the connection ID used to connect to the content generation endpoint.
sampling_rate (float) – The sample rate of the target data. Must be greater than 0 and at most 1.
- class azure.ai.ml.entities.GenerationTokenStatisticsMonitorMetricThreshold(*, totaltoken: Dict[str, float] | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Generation token statistics metric threshold definition.
All required parameters must be populated in order to send to Azure.
- Variables:
metric (str or GenerationTokenStatisticsMetric) – Required. Gets or sets the token statistics metric to calculate. Possible values include: “TotalTokenCount”, “TotalTokenCountPerGroup”.
threshold (MonitoringThreshold) – Gets or sets the threshold value. If null, a default value will be set depending on the selected metric.
- class azure.ai.ml.entities.GenerationTokenStatisticsSignal(*, production_data: LlmData | None = None, metric_thresholds: GenerationTokenStatisticsMonitorMetricThreshold | None = None, alert_enabled: bool = False, properties: Dict[str, str] | None = None, sampling_rate: float | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Generation token statistics signal definition.
- Variables:
type (str) – The type of the signal. Set to “generationtokenstatisticssignal” for this class.
- Keyword Arguments:
production_data (Optional[LlmData]) – The input dataset for monitoring.
metric_thresholds (Optional[GenerationTokenStatisticsMonitorMetricThreshold]) – Metrics to calculate and their associated thresholds. Defaults to App Traces.
alert_enabled (bool) – Whether or not to enable alerts for the signal. Defaults to False.
properties (Optional[Dict[str, str]]) – The properties of the signal.
sampling_rate (float) – The sample rate of the target data. Must be greater than 0 and at most 1.
Example:
Set Token Statistics Monitor.¶
spark_compute = ServerlessSparkCompute(instance_type="standard_e4s_v3", runtime_version="3.3")
monitoring_target = MonitoringTarget(
    ml_task=MonitorTargetTasks.QUESTION_ANSWERING,
    endpoint_deployment_id=f"azureml:{endpoint_name}:{deployment_name}",
)
monitor_settings = MonitorDefinition(compute=spark_compute, monitoring_target=monitoring_target)
model_monitor = MonitorSchedule(
    name="qa_model_monitor",
    trigger=CronTrigger(expression="15 10 * * *"),
    create_monitor=monitor_settings,
)
ml_client.schedules.begin_create_or_update(model_monitor)
- class azure.ai.ml.entities.GitSource(*, url: str, branch_name: str, connection_id: str)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Config class for creating an ML index from files located in a git repository.
- class azure.ai.ml.entities.Hub(*, name: str, description: str | None = None, tags: Dict[str, str] | None = None, display_name: str | None = None, location: str | None = None, resource_group: str | None = None, managed_network: ManagedNetwork | None = None, storage_account: str | None = None, key_vault: str | None = None, container_registry: str | None = None, customer_managed_key: CustomerManagedKey | None = None, public_network_access: str | None = None, network_acls: NetworkAcls | None = None, identity: IdentityConfiguration | None = None, primary_user_assigned_identity: str | None = None, enable_data_isolation: bool = False, default_resource_group: str | None = None, associated_workspaces: List[str] | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A Hub is a special type of workspace that acts as a parent and resource container for lightweight child workspaces called projects. Resources like the hub’s storage account, key vault, and container registry are shared by all child projects.
As a type of workspace, hub management is controlled by an MLClient’s workspace operations.
- Parameters:
name (str) – Name of the hub.
description (str) – Description of the hub.
tags (dict) – Tags of the hub.
display_name (str) – Display name for the hub. This is non-unique within the resource group.
location (str) – The location to create the hub in. If not specified, the same location as the resource group will be used.
resource_group (str) – Name of resource group to create the hub in.
managed_network (ManagedNetwork) – Hub’s Managed Network configuration
storage_account (str) – The resource ID of an existing storage account to use instead of creating a new one.
key_vault (str) – The resource ID of an existing key vault to use instead of creating a new one.
container_registry (str) – The resource ID of an existing container registry to use instead of creating a new one.
customer_managed_key (CustomerManagedKey) – Key vault details for encrypting data with customer-managed keys. If not specified, Microsoft-managed keys will be used by default.
image_build_compute (str) – The name of the compute target to use for building environment Docker images when the container registry is behind a VNet.
public_network_access (str) – Whether to allow public endpoint connectivity when a workspace is private link enabled.
network_acls (NetworkAcls) – The network access control list (ACL) settings of the workspace.
identity (IdentityConfiguration) – The hub’s Managed Identity (user assigned, or system assigned).
primary_user_assigned_identity (str) – The hub’s primary user assigned identity.
enable_data_isolation (bool) – A flag to determine if workspace has data isolation enabled. The flag can only be set at the creation phase, it can’t be updated.
default_resource_group (str) – The resource group that will be used by projects created under this hub if no resource group is specified.
kwargs (dict) – A dictionary of additional configuration parameters.
Creating a Hub object.¶
from azure.ai.ml.entities import Hub

ws = Hub(name="sample-ws", location="eastus", description="a sample workspace hub object")
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the workspace spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this workspace’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property associated_workspaces: List[str] | None¶
The workspaces associated with the hub.
- Returns:
The list of workspaces associated with the hub.
- Return type:
Optional[List[str]]
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property default_resource_group: str | None¶
The default resource group for this hub and its children.
- Returns:
The resource group.
- Return type:
Optional[str]
- property discovery_url: str | None¶
Backend service base URLs for the workspace.
- Returns:
Backend service URLs of the workspace
- Return type:
- class azure.ai.ml.entities.IPRule(value: str | None)[source]¶
Represents an IP rule with a value.
- Parameters:
value (str) – An IPv4 address or range in CIDR notation.
- class azure.ai.ml.entities.IdentityConfiguration(*, type: str, user_assigned_identities: List[ManagedIdentityConfiguration] | None = None, **kwargs: dict)[source]¶
Identity configuration used to represent identity property on compute, endpoint, and registry resources.
- Parameters:
type (str) – The type of managed identity.
user_assigned_identities (Optional[list[ManagedIdentityConfiguration]]) – A list of ManagedIdentityConfiguration objects.
- class azure.ai.ml.entities.ImageMetadata(*, is_latest_os_image_version: bool, current_image_version: str, latest_image_version: str)[source]¶
Metadata about the operating system image for the compute instance.
- Parameters:
is_latest_os_image_version (bool) – Whether the compute instance is running the latest OS image version.
current_image_version (str) – The current OS image version number.
latest_image_version (str) – The latest OS image version number.
Example:
Creating an ImageMetadata object.¶
from azure.ai.ml.entities import ImageMetadata

os_image_metadata = ImageMetadata(
    current_image_version="22.08.19",
    latest_image_version="22.08.20",
    is_latest_os_image_version=False,
)
- property current_image_version: str¶
The current OS image version number.
- Returns:
The current OS image version number.
- Return type:
- class azure.ai.ml.entities.ImageSettings(*, reference: str)[source]¶
Specifies an image configuration for a Custom Application.
- Parameters:
reference (str) – Image reference URL.
- class azure.ai.ml.entities.ImportDataSchedule(*, name: str, trigger: CronTrigger | RecurrenceTrigger | None, import_data: DataImport, display_name: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
ImportDataSchedule object.
- Parameters:
name (str) – Name of the schedule.
trigger (Union[CronTrigger, RecurrenceTrigger]) – Trigger of the schedule.
import_data (DataImport) – The schedule action data import definition.
display_name (str) – Display name of the schedule.
description (str) – Description of the schedule. Defaults to None.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The data import property dictionary.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the schedule content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_enabled: bool¶
Specifies if the schedule is enabled or not.
- Returns:
True if the schedule is enabled, False otherwise.
- Return type:
- class azure.ai.ml.entities.Index(*, name: str, version: str | None = None, stage: str = 'Development', description: str | None = None, tags: Dict[str, str] | None = None, properties: Dict[str, str] | None = None, path: str | PathLike | None = None, datastore: str | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Index asset.
- Variables:
name (str) – Name of the resource.
version (str) – Version of the resource.
id (str) – Fully qualified resource Id: azureml://workspace/{workspaceName}/indexes/{name}/versions/{version} of the index. Required.
stage (str) – Update stage to ‘Archive’ for soft delete. Default is Development, which means the asset is under development. Required.
description (Optional[str]) – Description information of the asset.
path (Optional[Union[str, os.PathLike]]) – The local or remote path to the asset.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.IndexDataSource(*, input_type: str | IndexInputType)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Base class for configs that define data that will be processed into an ML index. This class should not be instantiated directly. Use one of its child classes instead.
- Parameters:
input_type (Union[str, IndexInputType]) – A type enum describing the source of the index. Used to avoid direct type checking.
- azure.ai.ml.entities.IndexModelConfiguration¶
alias of
ModelConfiguration
- class azure.ai.ml.entities.InputPort(*, type_string: str, default: str | None = None, optional: bool | None = False)[source]¶
- class azure.ai.ml.entities.IntellectualProperty(*, publisher: str | None = None, protection_level: IPProtectionLevel = IPProtectionLevel.ALL)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Intellectual property settings definition.
- Keyword Arguments:
publisher (Optional[str]) – The publisher’s name.
protection_level (Optional[Union[str, IPProtectionLevel]]) – Asset Protection Level. Accepted values are IPProtectionLevel.ALL (“all”) and IPProtectionLevel.NONE (“none”). Defaults to IPProtectionLevel.ALL (“all”).
Example:
Configuring intellectual property settings on a CommandComponent.¶

from azure.ai.ml.constants import IPProtectionLevel
from azure.ai.ml.entities import CommandComponent, IntellectualProperty

component = CommandComponent(
    name="random_name",
    version="1",
    environment="azureml:AzureML-Minimal:1",
    command="echo hello",
    intellectual_property=IntellectualProperty(
        publisher="contoso", protection_level=IPProtectionLevel.ALL
    ),
)
- class azure.ai.ml.entities.IsolationMode[source]¶
IsolationMode for the workspace managed network.
- ALLOW_INTERNET_OUTBOUND = 'AllowInternetOutbound'¶
- ALLOW_ONLY_APPROVED_OUTBOUND = 'AllowOnlyApprovedOutbound'¶
- DISABLED = 'Disabled'¶
- class azure.ai.ml.entities.Job(name: str | None = None, display_name: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, experiment_name: str | None = None, compute: str | None = None, services: Dict[str, JobService] | None = None, **kwargs: Any)[source]¶
Base class for jobs.
This class should not be instantiated directly. Instead, use one of its subclasses.
- Parameters:
name (Optional[str]) – The name of the job.
display_name (Optional[str]) – The display name of the job.
description (Optional[str]) – The description of the job.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated.
properties (Optional[dict[str, str]]) – The job property dictionary.
experiment_name (Optional[str]) – The name of the experiment the job will be created under. Defaults to the name of the current directory.
services (Optional[dict[str, JobService]]) – Information on services associated with the job.
compute (Optional[str]) – Information about the compute resources associated with the job.
- Keyword Arguments:
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - A temporary state that client-side Run objects are in before cloud submission.
Starting - The run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run provides details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
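The status values above divide into terminal states (the job will not change status again) and active states. The state names below are copied from the list above; the grouping into terminal vs. active is an illustrative assumption, not an SDK API:

```python
from typing import Optional

# Terminal states: the run has finished and will not change status again.
TERMINAL_STATES = {"Completed", "Failed", "Canceled"}

# Active states, in rough lifecycle order (from the list above).
ACTIVE_STATES = {
    "NotStarted", "Starting", "Provisioning", "Preparing",
    "Queued", "Running", "Finalizing", "CancelRequested", "NotResponding",
}

def is_terminal(status: Optional[str]) -> bool:
    """Return True once a job has reached a final state."""
    return status in TERMINAL_STATES
```

A polling loop would typically sleep and re-fetch the job until `is_terminal(job.status)` returns True.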
- class azure.ai.ml.entities.JobResourceConfiguration(*, locations: List[str] | None = None, instance_count: int | None = None, instance_type: str | List | None = None, properties: Properties | Dict | None = None, docker_args: str | None = None, shm_size: str | None = None, max_instance_count: int | None = None, **kwargs: Any)[source]¶
Job resource configuration class, which inherits and extends functionality from ResourceConfiguration.
- Keyword Arguments:
locations (Optional[List[str]]) – A list of locations where the job can run.
instance_count (Optional[int]) – The number of instances or nodes used by the compute target.
instance_type (Optional[str]) – The type of VM to be used, as supported by the compute target.
properties (Optional[dict[str, Any]]) – A dictionary of properties for the job.
docker_args (Optional[str]) – Extra arguments to pass to the Docker run command. This would override any parameters that have already been set by the system, or in this section. This parameter is only supported for Azure ML compute types.
shm_size (Optional[str]) – The size of the docker container’s shared memory block. This should be in the format of (number)(unit) where the number has to be greater than 0 and the unit can be one of b(bytes), k(kilobytes), m(megabytes), or g(gigabytes).
max_instance_count (Optional[int]) – The maximum number of instances or nodes used by the compute target.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Configuring a CommandJob with a JobResourceConfiguration.¶

from azure.ai.ml import MpiDistribution
from azure.ai.ml.entities import JobResourceConfiguration

trial = CommandJob(
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command="echo hello world",
    distribution=MpiDistribution(),
    environment_variables={"ENV1": "VAR1"},
    resources=JobResourceConfiguration(instance_count=2, instance_type="STANDARD_BLA"),
    code="./",
)
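The shm_size format described above, (number)(unit) with the unit one of b, k, m, or g, can be checked with a short validator. This is an illustrative sketch, not part of the SDK:

```python
import re

# Matches the documented (number)(unit) format with a lowercase unit letter.
_SHM_SIZE_PATTERN = re.compile(r"^(\d+)([bkmg])$")

def is_valid_shm_size(value: str) -> bool:
    """Validate the documented shm_size format; the number must be > 0."""
    match = _SHM_SIZE_PATTERN.match(value)
    return bool(match) and int(match.group(1)) > 0
```

For example, "2g" is valid, while "0m" (zero) and "2gb" (two-letter unit) are not.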
- class azure.ai.ml.entities.JobResources(*, instance_types: List[str])[source]¶
Resource configuration for a job.
This class should not be instantiated directly. Instead, use its subclasses.
- class azure.ai.ml.entities.JobSchedule(*, name: str, trigger: CronTrigger | RecurrenceTrigger | None, create_job: Job | str, display_name: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Class for managing job schedules.
- Keyword Arguments:
name (str) – The name of the schedule.
trigger (Union[CronTrigger, RecurrenceTrigger]) – The trigger configuration for the schedule.
create_job (Union[Job, str]) – The job definition or an existing job name.
display_name (Optional[str]) – The display name of the schedule.
description (Optional[str]) – The description of the schedule.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated.
properties (Optional[dict[str, str]]) – A dictionary of properties to associate with the schedule.
Example:
Configuring a JobSchedule.¶

from azure.ai.ml import load_job
from azure.ai.ml.entities import JobSchedule, RecurrencePattern, RecurrenceTrigger

pipeline_job = load_job(
    "./sdk/ml/azure-ai-ml/tests/test_configs/command_job/command_job_test_local_env.yml"
)
trigger = RecurrenceTrigger(
    frequency="week",
    interval=4,
    schedule=RecurrencePattern(hours=10, minutes=15, week_days=["Monday", "Tuesday"]),
    start_time="2023-03-10",
)
job_schedule = JobSchedule(
    name="simple_sdk_create_schedule", trigger=trigger, create_job=pipeline_job
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the schedule content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_enabled: bool¶
Specifies if the schedule is enabled or not.
- Returns:
True if the schedule is enabled, False otherwise.
- Return type:
- class azure.ai.ml.entities.JobService(*, endpoint: str | None = None, type: Literal['jupyter_lab', 'ssh', 'tensor_board', 'vs_code'] | None = None, nodes: Literal['all'] | None = None, status: str | None = None, port: int | None = None, properties: Dict[str, str] | None = None, **kwargs: Dict)[source]¶
Basic job service configuration for backward compatibility.
This class is not intended to be used directly. Instead, use one of its subclasses specific to your job type.
- Keyword Arguments:
endpoint (Optional[str]) – The endpoint URL.
type (Optional[Literal["jupyter_lab", "ssh", "tensor_board", "vs_code"]]) – The endpoint type. Accepted values are “jupyter_lab”, “ssh”, “tensor_board”, and “vs_code”.
port (Optional[int]) – The port for the endpoint.
nodes (Optional[Literal["all"]]) – Indicates whether the service should run on all nodes.
properties (Optional[dict[str, str]]) – Additional properties to set on the endpoint.
status (Optional[str]) – The status of the endpoint.
kwargs (dict) – A dictionary of additional configuration parameters.
- class azure.ai.ml.entities.JupyterLabJobService(*, endpoint: str | None = None, nodes: Literal['all'] | None = None, status: str | None = None, port: int | None = None, properties: Dict[str, str] | None = None, **kwargs: Any)[source]¶
JupyterLab job service configuration.
- Variables:
type (str) – Specifies the type of job service. Set automatically to “jupyter_lab” for this class.
- Keyword Arguments:
endpoint (Optional[str]) – The endpoint URL.
port (Optional[int]) – The port for the endpoint.
nodes (Optional[Literal["all"]]) – Indicates whether the service should run on all nodes.
properties (Optional[dict[str, str]]) – Additional properties to set on the endpoint.
status (Optional[str]) – The status of the endpoint.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Configuring JupyterLabJobService configuration on a command job.¶

from azure.ai.ml import command
from azure.ai.ml.entities import (
    JupyterLabJobService,
    SshJobService,
    TensorBoardJobService,
    VsCodeJobService,
)

node = command(
    name="interactive-command-job",
    description="description",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command="ls",
    compute="testCompute",
    services={
        "my_ssh": SshJobService(),
        "my_tensorboard": TensorBoardJobService(log_dir="~/blog"),
        "my_jupyter_lab": JupyterLabJobService(),
        "my_vscode": VsCodeJobService(),
    },
)
- class azure.ai.ml.entities.KubernetesCompute(*, namespace: str = 'default', properties: Dict[str, Any] | None = None, identity: IdentityConfiguration | None = None, **kwargs: Any)[source]¶
Kubernetes Compute resource.
- Parameters:
namespace (Optional[str]) – The namespace of the KubernetesCompute. Defaults to “default”.
properties (Optional[Dict]) – The properties of the Kubernetes compute resource.
identity (IdentityConfiguration) – The identities that are associated with the compute cluster.
Example:
Creating a KubernetesCompute object.¶

from azure.ai.ml.entities import KubernetesCompute

k8s_compute = KubernetesCompute(
    identity=IdentityConfiguration(
        type="UserAssigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(
                resource_id="/subscriptions/1234567-abcd-ef12-1234-12345/resourcegroups/our_rg_eastus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/our-agent-aks"
            )
        ],
    ),
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the compute content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this compute’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property created_on: str | None¶
The compute resource creation timestamp.
- Returns:
The compute resource creation timestamp.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_errors: str | None¶
The compute resource provisioning errors.
- Returns:
The compute resource provisioning errors.
- Return type:
Optional[str]
- class azure.ai.ml.entities.KubernetesOnlineDeployment(*, name: str, endpoint_name: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, Any] | None = None, description: str | None = None, model: str | Model | None = None, code_configuration: CodeConfiguration | None = None, environment: str | Environment | None = None, app_insights_enabled: bool = False, scale_settings: DefaultScaleSettings | TargetUtilizationScaleSettings | None = None, request_settings: OnlineRequestSettings | None = None, liveness_probe: ProbeSettings | None = None, readiness_probe: ProbeSettings | None = None, environment_variables: Dict[str, str] | None = None, resources: ResourceRequirementsSettings | None = None, instance_count: int | None = None, instance_type: str | None = None, code_path: str | PathLike | None = None, scoring_script: str | PathLike | None = None, **kwargs: Any)[source]¶
Kubernetes Online endpoint deployment entity.
- Keyword Arguments:
name (str) – Name of the deployment resource.
endpoint_name (Optional[str]) – Name of the endpoint resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated., defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
description (Optional[str]) – Description of the resource, defaults to None
model (Optional[Union[str, Model]]) – Model entity for the endpoint deployment, defaults to None
code_configuration (Optional[CodeConfiguration]) – Code Configuration, defaults to None
environment (Optional[Union[str, Environment]]) – Environment entity for the endpoint deployment, defaults to None
app_insights_enabled (bool) – Whether Application Insights is enabled. Defaults to False
scale_settings (Optional[Union[DefaultScaleSettings , TargetUtilizationScaleSettings]]) – How the online deployment will scale, defaults to None
request_settings (Optional[OnlineRequestSettings]) – Online Request Settings, defaults to None
liveness_probe (Optional[ProbeSettings]) – Liveness probe settings, defaults to None
readiness_probe (Optional[ProbeSettings]) – Readiness probe settings, defaults to None
environment_variables (Optional[Dict[str, str]]) – Environment variables that will be set in deployment, defaults to None
resources (Optional[ResourceRequirementsSettings]) – Resource requirements settings, defaults to None
instance_count (Optional[int]) – The instance count used for this deployment, defaults to None
instance_type (Optional[str]) – The instance type defined by K8S cluster admin, defaults to None
code_path (Optional[Union[str, os.PathLike]]) – Equivalent to code_configuration.code; ignored if code_configuration is present. Defaults to None
scoring_script (Optional[Union[str, os.PathLike]]) – Equivalent to code_configuration.scoring_script; ignored if code_configuration is present. Defaults to None
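The documented precedence, where code_path and scoring_script are ignored when code_configuration is present, can be sketched as follows. The helper and type names here are hypothetical, for illustration only:

```python
from typing import NamedTuple, Optional

class CodeConfig(NamedTuple):
    """Minimal stand-in for a code configuration (hypothetical)."""
    code: Optional[str]
    scoring_script: Optional[str]

def resolve_code(code_configuration: Optional[CodeConfig],
                 code_path: Optional[str],
                 scoring_script: Optional[str]) -> CodeConfig:
    """code_configuration wins; otherwise fall back to the shorthand kwargs."""
    if code_configuration is not None:
        return code_configuration
    return CodeConfig(code=code_path, scoring_script=scoring_script)
```

With only the shorthand kwargs set, they populate the configuration; once an explicit code_configuration is supplied, the shorthand values are discarded.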
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the deployment content into a file in yaml format.
- Parameters:
dest (Union[os.PathLike, str, IO[AnyStr]]) – The destination to receive this deployment’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property code_path: str | PathLike | None¶
The code directory containing the scoring script.
- Return type:
Union[str, PathLike]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.KubernetesOnlineEndpoint(*, name: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, Any] | None = None, auth_mode: str = 'key', description: str | None = None, location: str | None = None, traffic: Dict[str, int] | None = None, mirror_traffic: Dict[str, int] | None = None, compute: str | None = None, identity: IdentityConfiguration | None = None, kind: str | None = None, **kwargs: Any)[source]¶
K8s Online endpoint entity.
- Keyword Arguments:
name (Optional[str]) – Name of the resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated, defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
auth_mode – Possible values include: “aml_token”, “key”. Defaults to “key”
description (Optional[str]) – Description of the inference endpoint, defaults to None
location (Optional[str]) – Location of the resource, defaults to None
traffic (Optional[Dict[str, int]]) – Traffic rules on how the traffic will be routed across deployments, defaults to None
mirror_traffic (Optional[Dict[str, int]]) – Duplicated live traffic used to inference a single deployment, defaults to None
compute (Optional[str]) – Compute cluster id, defaults to None
identity (Optional[IdentityConfiguration]) – Identity Configuration, defaults to SystemAssigned
kind (Optional[str]) – Kind of the resource; there are two kinds: K8s and Managed online endpoints. Defaults to None
- dump(dest: str | PathLike | IO | None = None, **kwargs: Any) Dict[str, Any][source]¶
Dump the object content into a file.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.LlmData(*, input_data: Input, data_column_names: Dict[str, str] | None = None, data_window: BaselineDataRange | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
LLM Request Response Data
- Parameters:
input_data – Input data used by the monitor.
data_column_names – The names of columns in the input data.
data_window – The number of days or a time frame that a signal monitor looks back over the target.
- class azure.ai.ml.entities.LocalSource(*, input_data: str)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Config class for creating an ML index from a collection of local files.
- Parameters:
input_data (str) – The local location of the index source files.
- class azure.ai.ml.entities.ManagedIdentityConfiguration(*, client_id: str | None = None, resource_id: str | None = None, object_id: str | None = None, principal_id: str | None = None)[source]¶
Managed Identity credential configuration.
- Keyword Arguments:
- class azure.ai.ml.entities.ManagedNetwork(*, isolation_mode: str = 'Disabled', outbound_rules: List[OutboundRule] | None = None, firewall_sku: str | None = None, network_id: str | None = None, **kwargs: Any)[source]¶
Managed Network settings for a workspace.
- Parameters:
isolation_mode (str) – Isolation of the managed network, defaults to Disabled.
firewall_sku (str) – Firewall SKU for FQDN rules when using AllowOnlyApprovedOutbound isolation mode.
outbound_rules (List[OutboundRule]) – List of outbound rules for the managed network.
network_id (str) – Network id for the managed network, not meant to be set by user.
Example:
Creating a ManagedNetwork object with one of each rule type.¶

from azure.ai.ml.constants._workspace import FirewallSku
from azure.ai.ml.entities import (
    FqdnDestination,
    IsolationMode,
    ManagedNetwork,
    PrivateEndpointDestination,
    ServiceTagDestination,
    Workspace,
)

# Example private endpoint outbound to a blob
blobrule = PrivateEndpointDestination(
    name="blobrule",
    service_resource_id="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.Storage/storageAccounts/storage-name",
    subresource_target="blob",
    spark_enabled=False,
)

# Example service tag rule
datafactoryrule = ServiceTagDestination(
    name="datafactory", service_tag="DataFactory", protocol="TCP", port_ranges="80, 8080-8089"
)

# Example FQDN rule
pypirule = FqdnDestination(name="pypirule", destination="pypi.org")

# FirewallSku is an optional parameter; when unspecified it defaults to FirewallSku.Standard
firewallSku = FirewallSku.BASIC

network = ManagedNetwork(
    isolation_mode=IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND,
    outbound_rules=[blobrule, datafactoryrule, pypirule],
    firewall_sku=firewallSku,
)

# Workspace configuration
ws = Workspace(name="ws-name", location="eastus", managed_network=network)
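The port_ranges value used in the ServiceTagDestination rule above (“80, 8080-8089”) is a comma-separated list of single ports and ranges. A hypothetical parser, for illustration only (not an SDK helper):

```python
from typing import List, Tuple

def parse_port_ranges(port_ranges: str) -> List[Tuple[int, int]]:
    """Parse a string like '80, 8080-8089' into inclusive (low, high) pairs."""
    result = []
    for part in port_ranges.split(","):
        part = part.strip()
        if "-" in part:
            # a range such as "8080-8089"
            low, high = part.split("-", 1)
            result.append((int(low), int(high)))
        else:
            # a single port such as "80" becomes a one-element range
            result.append((int(part), int(part)))
    return result
```

Each pair is inclusive, so a single port becomes a range of length one.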
- class azure.ai.ml.entities.ManagedNetworkProvisionStatus(*, status: str | None = None, spark_ready: bool | None = None)[source]¶
ManagedNetworkProvisionStatus.
- class azure.ai.ml.entities.ManagedOnlineDeployment(*, name: str, endpoint_name: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, Any] | None = None, description: str | None = None, model: str | Model | None = None, code_configuration: CodeConfiguration | None = None, environment: str | Environment | None = None, app_insights_enabled: bool = False, scale_settings: DefaultScaleSettings | TargetUtilizationScaleSettings | None = None, request_settings: OnlineRequestSettings | None = None, liveness_probe: ProbeSettings | None = None, readiness_probe: ProbeSettings | None = None, environment_variables: Dict[str, str] | None = None, instance_type: str | None = None, instance_count: int | None = None, egress_public_network_access: str | None = None, code_path: str | PathLike | None = None, scoring_script: str | PathLike | None = None, data_collector: DataCollector | None = None, **kwargs: Any)[source]¶
Managed Online endpoint deployment entity.
- Keyword Arguments:
name (str) – Name of the deployment resource
endpoint_name (Optional[str]) – Name of the endpoint resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated., defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
description (Optional[str]) – Description of the resource, defaults to None
model (Optional[Union[str, Model]]) – Model entity for the endpoint deployment, defaults to None
code_configuration (Optional[CodeConfiguration]) – Code Configuration, defaults to None
environment (Optional[Union[str, Environment]]) – Environment entity for the endpoint deployment, defaults to None
app_insights_enabled (bool) – Whether Application Insights is enabled. Defaults to False
scale_settings (Optional[Union[DefaultScaleSettings , TargetUtilizationScaleSettings]]) – How the online deployment will scale, defaults to None
request_settings (Optional[OnlineRequestSettings]) – Online Request Settings, defaults to None
liveness_probe (Optional[ProbeSettings]) – Liveness probe settings, defaults to None
readiness_probe (Optional[ProbeSettings]) – Readiness probe settings, defaults to None
environment_variables (Optional[Dict[str, str]]) – Environment variables that will be set in deployment, defaults to None
instance_type (Optional[str]) – Azure compute sku, defaults to None
instance_count (Optional[int]) – The instance count used for this deployment, defaults to None
egress_public_network_access (Optional[str]) – Whether to restrict communication between a deployment and the Azure resources used by the deployment. Allowed values are: “enabled”, “disabled”. Defaults to None
code_path (Optional[Union[str, os.PathLike]]) – Equivalent to code_configuration.code; ignored if code_configuration is present. Defaults to None
scoring_script (Optional[Union[str, os.PathLike]]) – Equivalent to code_configuration.scoring_script; ignored if code_configuration is present. Defaults to None
data_collector (Optional[DataCollector]) – Data collector configuration. Defaults to None
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the deployment content into a file in yaml format.
- Parameters:
dest (Union[os.PathLike, str, IO[AnyStr]]) – The destination to receive this deployment’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property code_path: str | PathLike | None¶
The code directory containing the scoring script.
- Return type:
Union[str, PathLike]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.ManagedOnlineEndpoint(*, name: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, Any] | None = None, auth_mode: str = 'key', description: str | None = None, location: str | None = None, traffic: Dict[str, int] | None = None, mirror_traffic: Dict[str, int] | None = None, identity: IdentityConfiguration | None = None, kind: str | None = None, public_network_access: str | None = None, **kwargs: Any)[source]¶
Managed Online endpoint entity.
- Keyword Arguments:
name (Optional[str]) – Name of the resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated, defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
auth_mode – Possible values include: “aml_token”, “key”, defaults to KEY
description (Optional[str]) – Description of the inference endpoint, defaults to None
location (Optional[str]) – Location of the resource, defaults to None
traffic (Optional[Dict[str, int]]) – Traffic rules on how the traffic will be routed across deployments, defaults to None
mirror_traffic (Optional[Dict[str, int]]) – Duplicated live traffic used to run inference against a single deployment, defaults to None
identity (Optional[IdentityConfiguration]) – Identity Configuration, defaults to SystemAssigned
kind (Optional[str]) – Kind of the resource, we have two kinds: K8s and Managed online endpoints, defaults to None.
public_network_access – Whether to allow public endpoint connectivity, defaults to None Allowed values are: “enabled”, “disabled”
- dump(dest: str | PathLike | IO | None = None, **kwargs: Any) Dict[str, Any][source]¶
Dump the object content into a file.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.MarketplacePlan(*args: Any, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
- as_dict(*, exclude_readonly: bool = False) Dict[str, Any]¶
Return a dict that can be serialized to JSON using json.dump.
- clear() None. Remove all items from D.¶
- copy() Model¶
- get(k[, d]) D[k] if k in D, else d. d defaults to None.¶
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If key is not found, d is returned if given, otherwise KeyError is raised.
- popitem() (k, v), remove and return some (key, value) pair¶
as a 2-tuple; but raise KeyError if D is empty.
- setdefault(k[, d]) D.get(k,d), also set D[k]=d if k not in D¶
- update([E, ]**F) None. Update D from mapping/iterable E and F.¶
If E present and has a .keys() method, does: for k in E: D[k] = E[k] If E present and lacks .keys() method, does: for (k, v) in E: D[k] = v In either case, this is followed by: for k, v in F.items(): D[k] = v
- values() an object providing a view on D's values¶
- class azure.ai.ml.entities.MarketplaceSubscription(*args: Any, **kwargs: Any)[source]¶
Marketplace Subscription Definition.
Readonly variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to the server.
- Variables:
name (str) – The marketplace subscription name. Required.
model_id (str) – Model id for which to create marketplace subscription. Required.
marketplace_plan (MarketplacePlan) – The plan associated with the marketplace subscription.
status (str) – Status of the marketplace subscription. Possible values are: “pending_fulfillment_start”, “subscribed”, “unsubscribed”, “suspended”.
provisioning_state (str) – Provisioning state of the marketplace subscription. Possible values are: “creating”, “deleting”, “succeeded”, “failed”, “updating”, and “canceled”.
id (str) – ARM resource id of the marketplace subscription.
system_data (SystemData) – System data of the marketplace subscription.
- as_dict(*, exclude_readonly: bool = False) Dict[str, Any][source]¶
Return a dict that can be serialized to JSON using json.dump.
- clear() None. Remove all items from D.¶
- copy() Model¶
- get(k[, d]) D[k] if k in D, else d. d defaults to None.¶
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If key is not found, d is returned if given, otherwise KeyError is raised.
- popitem() (k, v), remove and return some (key, value) pair¶
as a 2-tuple; but raise KeyError if D is empty.
- setdefault(k[, d]) D.get(k,d), also set D[k]=d if k not in D¶
- update([E, ]**F) None. Update D from mapping/iterable E and F.¶
If E present and has a .keys() method, does: for k in E: D[k] = E[k] If E present and lacks .keys() method, does: for (k, v) in E: D[k] = v In either case, this is followed by: for k, v in F.items(): D[k] = v
- values() an object providing a view on D's values¶
- marketplace_plan: '_models.MarketplacePlan' | None¶
The plan associated with the marketplace subscription.
- provisioning_state: str | None¶
Provisioning state of the marketplace subscription. Possible values are: “creating”, “deleting”, “succeeded”, “failed”, “updating”, and “canceled”.
- status: str | None¶
Status of the marketplace subscription. Possible values are: “pending_fulfillment_start”, “subscribed”, “unsubscribed”, “suspended”.
- system_data: SystemData | None¶
System data of the marketplace subscription.
- class azure.ai.ml.entities.MaterializationComputeResource(*, instance_type: str, **kwargs: Any)[source]¶
Materialization Compute resource.
- Keyword Arguments:
instance_type (str) – The compute instance type.
- Parameters:
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Creating a MaterializationComputeResource object.¶
from azure.ai.ml.entities import MaterializationComputeResource

materialization_compute = MaterializationComputeResource(instance_type="standard_e4s_v3")
- class azure.ai.ml.entities.MaterializationSettings(*, schedule: RecurrenceTrigger | None = None, offline_enabled: bool | None = None, online_enabled: bool | None = None, notification: Notification | None = None, resource: MaterializationComputeResource | None = None, spark_configuration: Dict[str, str] | None = None, **kwargs: Any)[source]¶
Defines materialization settings.
- Keyword Arguments:
schedule (Optional[RecurrenceTrigger]) – The schedule details. Defaults to None.
offline_enabled (Optional[bool]) – Boolean that specifies if offline store is enabled. Defaults to None.
online_enabled (Optional[bool]) – Boolean that specifies if online store is enabled. Defaults to None.
notification (Optional[Notification]) – The notification details. Defaults to None.
resource (Optional[MaterializationComputeResource]) – The compute resource settings. Defaults to None.
spark_configuration (Optional[dict[str, str]]) – The spark compute settings. Defaults to None.
- Parameters:
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Configuring MaterializationSettings.¶
from azure.ai.ml.entities import MaterializationComputeResource, MaterializationSettings

materialization_settings = MaterializationSettings(
    offline_enabled=True,
    spark_configuration={
        "spark.driver.cores": 2,
        "spark.driver.memory": "18g",
        "spark.executor.cores": 4,
        "spark.executor.memory": "18g",
        "spark.executor.instances": 5,
    },
    resource=MaterializationComputeResource(instance_type="standard_e4s_v3"),
)
- class azure.ai.ml.entities.MaterializationStore(type: str, target: str)[source]¶
Materialization Store
- Parameters:
Example:
Configuring a Materialization Store¶
from azure.ai.ml.entities import ManagedIdentityConfiguration, MaterializationStore

gen2_container_arm_id = "/subscriptions/{sub_id}/resourceGroups/{rg}/providers/Microsoft.Storage/storageAccounts/{account}/blobServices/default/containers/{container}".format(
    sub_id=subscription_id,
    rg=resource_group,
    account=storage_account_name,
    container=storage_file_system_name,
)

offline_store = MaterializationStore(
    type="azure_data_lake_gen2",
    target=gen2_container_arm_id,
)

# Must define materialization identity when defining offline/online store.
fs = FeatureStore(
    name=featurestore_name,
    offline_store=offline_store,
    materialization_identity=ManagedIdentityConfiguration(
        client_id="<YOUR-UAI-CLIENT-ID>",
        resource_id="<YOUR-UAI-RESOURCE-ID>",
        principal_id="<YOUR-UAI-PRINCIPAL-ID>",
    ),
)
- class azure.ai.ml.entities.MaterializationType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Materialization Type Enum
- capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower case.
- casefold()¶
Return a version of the string suitable for caseless comparisons.
- center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
- count(sub[, start[, end]]) int¶
Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.
- encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
- encoding
The encoding in which to encode the string.
- errors
The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.
- endswith(suffix[, start[, end]]) bool¶
Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
- expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
- find(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- format(*args, **kwargs) str¶
Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).
- format_map(mapping) str¶
Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).
- index(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.
- isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.
- isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.
- isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.
- isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there is at least one character in the string.
- isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.
- islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.
- isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at least one character in the string.
- isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in repr() or if it is empty.
- isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.
- istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.
- isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.
- join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string. The result is returned as a new string.
Example: ‘.’.join([‘ab’, ‘pq’, ‘rs’]) -> ‘ab.pq.rs’
- ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
- lower()¶
Return a copy of the string converted to lowercase.
- lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
- static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
- partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
- removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
- removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
- replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
- count
Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are replaced.
- rfind(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- rindex(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
- rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings and the original string.
- rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
- rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the front of the string and works to the end.
Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
- splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and true.
- startswith(prefix[, start[, end]]) bool¶
Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.
- strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
- title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining cased characters have lower case.
- translate(table, /)¶
Replace each character in the string using the given translation table.
- table
Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.
- upper()¶
Return a copy of the string converted to uppercase.
- zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
- BACKFILL_MATERIALIZATION = '2'¶
- RECURRENT_MATERIALIZATION = '1'¶
- class azure.ai.ml.entities.MicrosoftOneLakeConnection(*, endpoint: str, artifact: OneLakeConnectionArtifact | None = None, one_lake_workspace_name: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs)[source]¶
A connection to a Microsoft One Lake. Connections of this type are further specified by their artifact class type, although the number of artifact classes is currently limited.
- Parameters:
name (str) – Name of the connection.
endpoint (str) – The endpoint of the connection.
artifact (Optional[OneLakeArtifact]) – The artifact class used to further specify the connection.
one_lake_workspace_name (Optional[str]) – The name, not ID, of the workspace where the One Lake resource lives.
credentials (Union[AccessKeyConfiguration, SasTokenConfiguration, AadCredentialConfiguration]) – The credentials for authenticating to the blob store. This type of connection accepts three kinds of credentials: account key or SAS token credentials, or AadCredentialConfiguration for credential-less connections.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[ PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration, ]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target url for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.Model(*, name: str | None = None, version: str | None = None, type: str | None = None, path: str | PathLike | None = None, utc_time_created: str | None = None, flavors: Dict[str, Dict[str, Any]] | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, stage: str | None = None, **kwargs: Any)[source]¶
Model for training and scoring.
- Parameters:
name (Optional[str]) – The name of the model. Defaults to a random GUID.
version (Optional[str]) – The version of the model. Defaults to “1” if either no name or an unregistered name is provided. Otherwise, defaults to autoincrement from the last registered version of the model with that name.
type (Optional[str]) – The storage format for this entity, used for NCD (No Code Deployment). Accepted values are “custom_model”, “mlflow_model”, or “triton_model”. Defaults to “custom_model”.
utc_time_created (Optional[str]) – The date and time when the model was created, in UTC ISO 8601 format. (e.g. ‘2020-10-19 17:44:02.096572’).
flavors (Optional[dict[str, Any]]) – The flavors in which the model can be interpreted. Defaults to None.
path (Optional[str]) – A remote uri or a local path pointing to a model. Defaults to None.
description (Optional[str]) – The description of the resource. Defaults to None
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
properties (Optional[dict[str, str]]) – The asset property dictionary. Defaults to None.
stage (Optional[str]) – The stage of the resource. Defaults to None.
kwargs (Optional[dict]) – A dictionary of additional configuration parameters.
Example:
Creating a Model object.¶
from azure.ai.ml.entities import Model

model = Model(
    name="model1",
    version="5",
    description="my first model in prod",
    path="models/very_important_model.pkl",
    properties={"prop1": "value1", "prop2": "value2"},
    type="mlflow_model",
    flavors={
        "sklearn": {"sklearn_version": "0.23.2"},
        "python_function": {"loader_module": "office.plrmodel", "python_version": 3.6},
    },
    stage="Production",
)
ml_client.models.create_or_update(model)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.ModelBatchDeployment(*, name: str | None, endpoint_name: str | None = None, environment: str | Environment | None = None, properties: Dict[str, str] | None = None, model: str | Model | None = None, description: str | None = None, tags: Dict[str, Any] | None = None, settings: ModelBatchDeploymentSettings | None = None, resources: ResourceConfiguration | None = None, compute: str | None = None, code_configuration: CodeConfiguration | None = None, code_path: str | PathLike | None = None, scoring_script: str | PathLike | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Job Definition entity.
- Parameters:
name (Optional[str]) – Name of the deployment resource, defaults to None
- Keyword Arguments:
endpoint_name (Optional[str]) – Name of the Endpoint resource, defaults to None
description (Optional[str]) – Description of the deployment resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated, defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
model (Optional[Union[str, Model]]) – The Model entity, defaults to None
code_configuration (Optional[CodeConfiguration]) – Code Configuration, defaults to None
environment (Optional[Union[str, Environment]]) – The Environment entity, defaults to None
environment_variables (Optional[Dict[str, str]]) – Environment variables that will be set in deployment, defaults to None
code_path (Optional[Union[str, PathLike]]) – Folder path to local code assets. Equivalent to code_configuration.code.path , defaults to None
scoring_script (Optional[Union[str, PathLike]]) – Scoring script name. Equivalent to code_configuration.code.scoring_script , defaults to None
- Raises:
ValidationException – Raised if Deployment cannot be successfully validated. Exception details will be provided in the error message.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the deployment content into a file in yaml format.
- Parameters:
dest (Union[os.PathLike, str, IO[AnyStr]]) – The destination to receive this deployment’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property code_path: str | PathLike | None¶
The code directory containing the scoring script.
- Return type:
Union[str, PathLike]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_state: str | None¶
Batch deployment provisioning state, readonly.
- Returns:
Batch deployment provisioning state.
- Return type:
Optional[str]
- class azure.ai.ml.entities.ModelBatchDeploymentSettings(*, mini_batch_size: int | None, instance_count: int | None = None, max_concurrency_per_instance: int | None = None, output_action: BatchDeploymentOutputAction | None = None, output_file_name: str | None = None, retry_settings: BatchRetrySettings | None = None, environment_variables: Dict[str, str] | None = None, error_threshold: int | None = None, logging_level: str | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Model Batch Deployment Settings entity.
- Parameters:
mini_batch_size (int) – Size of the mini-batch passed to each batch invocation, defaults to 10
instance_count (int) – Number of instances on which the inference will run. Equivalent to resources.instance_count.
output_action (str or BatchDeploymentOutputAction) – Indicates how the output will be organized. Possible values include: “summary_only”, “append_row”. Defaults to “append_row”
output_file_name (str) – Customized output file name for append_row output action, defaults to “predictions.csv”
max_concurrency_per_instance (int) – The maximum degree of parallelism per instance, defaults to 1
retry_settings (BatchRetrySettings) – Retry settings for a batch inference operation, defaults to None
environment_variables (dict) – Environment variables that will be set in deployment.
error_threshold (int) – Error threshold. If the error count for the entire input goes above this value, the batch inference will be aborted. The range is [-1, int.MaxValue]; a value of -1 indicates that all failures during batch inference should be ignored. For FileDataset this is the count of file failures; for TabularDataset it is the count of record failures. Defaults to -1
logging_level (str) – Logging level for batch inference operation, defaults to “info”
Example:
Creating a Model Batch Deployment Settings object.

    from azure.ai.ml.entities._deployment.model_batch_deployment_settings import ModelBatchDeploymentSettings

    modelBatchDeploymentSetting = ModelBatchDeploymentSettings(
        mini_batch_size=256,
        instance_count=5,
        max_concurrency_per_instance=2,
        output_file_name="output-file-name",
        environment_variables={"env1": "value1", "env2": "value2"},
        error_threshold=2,
        logging_level=1,
    )
- class azure.ai.ml.entities.ModelConfiguration(*, mode: str | None = None, mount_path: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
ModelConfiguration.
- Parameters:
Example:
Creating a Model Configuration object.

    from azure.ai.ml.entities._assets._artifacts._package.model_configuration import ModelConfiguration

    modelConfiguration = ModelConfiguration(mode="model-mode", mount_path="model-mount-path")
- class azure.ai.ml.entities.ModelPackage(*, target_environment: str | Dict[str, str], inferencing_server: AzureMLOnlineInferencingServer | AzureMLBatchInferencingServer, base_environment_source: BaseEnvironment | None = None, environment_variables: Dict[str, str] | None = None, inputs: List[ModelPackageInput] | None = None, model_configuration: ModelConfiguration | None = None, tags: Dict[str, str] | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Model package.
- Parameters:
target_environment_name (str) – The target environment name for the model package.
inferencing_server (Union[AzureMLOnlineInferencingServer, AzureMLBatchInferencingServer]) – The inferencing server of the model package.
base_environment_source (Optional[BaseEnvironment]) – The base environment source of the model package.
target_environment_version (Optional[str]) – The version of the model package.
environment_variables (Optional[dict[str, str]]) – The environment variables of the model package.
inputs (Optional[list[ModelPackageInput]]) – The inputs of the model package.
model_configuration (Optional[ModelConfiguration]) – The model configuration.
tags (Optional[dict[str, str]]) – The tags of the model package.
Example:
Create a Model Package object.

    from azure.ai.ml.entities import AzureMLOnlineInferencingServer, CodeConfiguration, ModelPackage

    modelPackage = ModelPackage(
        inferencing_server=AzureMLOnlineInferencingServer(
            code_configuration=CodeConfiguration(code="../model-1/foo/", scoring_script="score.py")
        ),
        target_environment_name="env-name",
        target_environment_version="1.0",
        environment_variables={"env1": "value1", "env2": "value2"},
        tags={"tag1": "value1", "tag2": "value2"},
    )
- as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)¶
Return a dict that can be serialized to JSON using json.dump.
Advanced usage might optionally use a callback as parameter:
Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.
The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.
See the three examples in this file:
attribute_transformer
full_restapi_key_transformer
last_restapi_key_transformer
If you want XML serialization, you can pass the kwargs is_xml=True.
- Parameters:
key_transformer (function) – A key transformer function.
- Returns:
A JSON-compatible dict
- Return type:
- classmethod deserialize(data, content_type=None)¶
Parse a str using the RestAPI syntax and return a model.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- classmethod enable_additional_properties_sending()¶
- classmethod from_dict(data, key_extractors=None, content_type=None)¶
Parse a dict using the given key extractors and return a model.
By default, considers the key extractors (rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor)
- classmethod is_xml_model()¶
- serialize(keep_readonly=False, **kwargs)¶
Return the JSON that would be sent to azure from this model.
This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).
If you want XML serialization, you can pass the kwargs is_xml=True.
- validate()¶
Validate this model recursively and return a list of ValidationError.
- Returns:
A list of validation errors
- Return type:
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.ModelPackageInput(*, type: str | None = None, path: PackageInputPathId | PackageInputPathUrl | PackageInputPathVersion | None = None, mode: str | None = None, mount_path: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Model package input.
- Parameters:
type (Optional[str]) – The type of the input.
path (Optional[Union[PackageInputPathId, PackageInputPathUrl, PackageInputPathVersion]]) – The path of the input.
mode (Optional[str]) – The input mode.
mount_path (Optional[str]) – The mount path for the input.
Example:
Create a Model Package Input object.

    from azure.ai.ml.entities._assets._artifacts._package.model_package import ModelPackageInput

    modelPackageInput = ModelPackageInput(type="input-type", mode="input-mode", mount_path="input-mount-path")
- class azure.ai.ml.entities.ModelPerformanceClassificationThresholds(*, accuracy: float | None = None, precision: float | None = None, recall: float | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
- class azure.ai.ml.entities.ModelPerformanceMetricThreshold(*, classification: ModelPerformanceClassificationThresholds | None = None, regression: ModelPerformanceRegressionThresholds | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
- class azure.ai.ml.entities.ModelPerformanceRegressionThresholds(*, mean_absolute_error: float | None = None, mean_squared_error: float | None = None, root_mean_squared_error: float | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
- class azure.ai.ml.entities.ModelPerformanceSignal(*, production_data: ProductionData, reference_data: ReferenceData, metric_thresholds: ModelPerformanceMetricThreshold, data_segment: DataSegment | None = None, alert_enabled: bool = False, properties: Dict[str, str] | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Model performance signal.
- Keyword Arguments:
baseline_dataset (MonitorInputData) – The data to calculate performance against.
metric_thresholds (ModelPerformanceMetricThreshold) – A list of metrics to calculate and their associated thresholds.
model_type (MonitorModelType) – The model type.
data_segment (DataSegment) – The data segment to calculate performance against.
alert_enabled (bool) – Whether or not to enable alerts for the signal. Defaults to False.
- class azure.ai.ml.entities.MonitorDefinition(*, compute: ServerlessSparkCompute, monitoring_target: MonitoringTarget | None = None, monitoring_signals: Dict[str, DataDriftSignal | DataQualitySignal | PredictionDriftSignal | FeatureAttributionDriftSignal | CustomMonitoringSignal | GenerationSafetyQualitySignal | GenerationTokenStatisticsSignal] = None, alert_notification: Literal['azmonitoring'] | AlertNotification | None = None)[source]¶
Monitor definition
- Keyword Arguments:
compute (SparkResourceConfiguration) – The Spark resource configuration to be associated with the monitor
monitoring_target (Optional[MonitoringTarget]) – The ARM ID object associated with the model or deployment that is being monitored.
monitoring_signals (Optional[Dict[str, Union[DataDriftSignal , DataQualitySignal, PredictionDriftSignal , FeatureAttributionDriftSignal , CustomMonitoringSignal , GenerationSafetyQualitySignal , GenerationTokenStatisticsSignal , ModelPerformanceSignal]]]) – The dictionary of signals to monitor. The key is the name of the signal and the value is the DataSignal object. Accepted values for the DataSignal objects are DataDriftSignal, DataQualitySignal, PredictionDriftSignal, FeatureAttributionDriftSignal, and CustomMonitoringSignal.
alert_notification (Optional[Union[Literal['azmonitoring'], ~azure.ai.ml.entities.AlertNotification]]) – The alert configuration for the monitor.
Example:
Creating Monitor definition.

    from azure.ai.ml.entities import (
        AlertNotification,
        MonitorDefinition,
        MonitoringTarget,
        SparkResourceConfiguration,
    )

    monitor_definition = MonitorDefinition(
        compute=SparkResourceConfiguration(instance_type="standard_e4s_v3", runtime_version="3.3"),
        monitoring_target=MonitoringTarget(
            ml_task="Classification",
            endpoint_deployment_id="azureml:fraud_detection_endpoint:fraud_detection_deployment",
        ),
        alert_notification=AlertNotification(emails=["abc@example.com", "def@example.com"]),
    )
- class azure.ai.ml.entities.MonitorFeatureFilter(*, top_n_feature_importance: int = 10)[source]¶
Monitor feature filter
- Keyword Arguments:
top_n_feature_importance (int) – The number of top features to include. Defaults to 10.
- class azure.ai.ml.entities.MonitorInputData(*, type: MonitorInputDataType | None = None, data_context: MonitorDatasetContext | None = None, target_columns: Dict | None = None, job_type: str | None = None, uri: str | None = None)[source]¶
Monitor input data.
- Keyword Arguments:
type (MonitorInputDataType) – Specifies the type of monitoring input data.
input_dataset (Optional[Input]) – Input data used by the monitor
dataset_context (Optional[Union[str, MonitorDatasetContext]]) – The context of the input dataset. Accepted values are “model_inputs”, “model_outputs”, “training”, “test”, “validation”, and “ground_truth”.
target_column_name (Optional[str]) – The target column in the given input dataset.
pre_processing_component (Optional[str]) – The ARM (Azure Resource Manager) resource ID of the component resource used to preprocess the data.
- class azure.ai.ml.entities.MonitorSchedule(*, name: str, trigger: CronTrigger | RecurrenceTrigger | None, create_monitor: MonitorDefinition, display_name: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Monitor schedule.
- Keyword Arguments:
name (str) – The schedule name.
trigger (Union[CronTrigger, RecurrenceTrigger]) – The schedule trigger.
create_monitor (MonitorDefinition) – The schedule action monitor definition.
display_name (Optional[str]) – The display name of the schedule.
description (Optional[str]) – A description of the schedule.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated.
properties (Optional[dict[str, str]]) – The job property dictionary.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the asset content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_enabled: bool¶
Specifies if the schedule is enabled or not.
- Returns:
True if the schedule is enabled, False otherwise.
- Return type:
- class azure.ai.ml.entities.MonitoringTarget(*, ml_task: str | MonitorTargetTasks | None = None, endpoint_deployment_id: str | None = None, model_id: str | None = None)[source]¶
Monitoring target.
- Keyword Arguments:
ml_task (Optional[Union[str, MonitorTargetTasks]]) – Type of task. Allowed values: Classification, Regression, and QuestionAnswering
endpoint_deployment_id (Optional[str]) – The ARM ID of the target deployment. Mutually exclusive with model_id.
model_id (Optional[str]) – ARM ID of the target model ID. Mutually exclusive with endpoint_deployment_id.
Example:
Setting a monitoring target using endpoint_deployment_id.

    from azure.ai.ml.entities import (
        AlertNotification,
        MonitorDefinition,
        MonitoringTarget,
        SparkResourceConfiguration,
    )

    monitor_definition = MonitorDefinition(
        compute=SparkResourceConfiguration(instance_type="standard_e4s_v3", runtime_version="3.3"),
        monitoring_target=MonitoringTarget(
            ml_task="Classification",
            endpoint_deployment_id="azureml:fraud_detection_endpoint:fraud_detection_deployment",
        ),
        alert_notification=AlertNotification(emails=["abc@example.com", "def@example.com"]),
    )
- class azure.ai.ml.entities.NetworkAcls(*, default_action: str = 'Allow', ip_rules: List[IPRule] | None = None)[source]¶
Network access settings for a workspace.
- Parameters:
Example:
Configuring one of the three public network access settings.

    from azure.ai.ml.entities import DefaultActionType, IPRule, NetworkAcls

    # Get existing workspace
    ws = ml_client.workspaces.get("test-ws1")

    # 1. Enabled from all networks
    # Note: default_action should be set to 'Allow', allowing all access.
    ws.public_network_access = "Enabled"
    ws.network_acls = NetworkAcls(default_action=DefaultActionType.ALLOW, ip_rules=[])
    updated_ws = ml_client.workspaces.begin_update(workspace=ws).result()

    # 2. Enabled from selected IP addresses
    # Note: default_action should be set to 'Deny', allowing only specified IPs/ranges
    ws.public_network_access = "Enabled"
    ws.network_acls = NetworkAcls(
        default_action=DefaultActionType.DENY,
        ip_rules=[IPRule(value="103.248.19.87/32"), IPRule(value="103.248.19.86/32")],
    )
    updated_ws = ml_client.workspaces.begin_update(workspace=ws).result()

    # 3. Disabled
    # NetworkAcls IP Rules will reset
    ws.public_network_access = "Disabled"
    updated_ws = ml_client.workspaces.begin_update(workspace=ws).result()
- class azure.ai.ml.entities.NetworkSettings(*, vnet_name: str | None = None, subnet: str | None = None, **kwargs: Any)[source]¶
Network settings for a compute resource. If the workspace and VNet are in different resource groups, please provide the full URI for subnet and leave vnet_name as None.
- Parameters:
Example:
Configuring NetworkSettings for an AmlCompute object.

    from azure.ai.ml.entities import (
        AmlCompute,
        IdentityConfiguration,
        ManagedIdentityConfiguration,
        NetworkSettings,
    )

    aml_compute = AmlCompute(
        name="my-compute",
        min_instances=0,
        max_instances=10,
        idle_time_before_scale_down=100,
        network_settings=NetworkSettings(vnet_name="my-vnet", subnet="default"),
    )
- class azure.ai.ml.entities.NoneCredentialConfiguration[source]¶
None Credential Configuration. In many use cases, the presence of this credential configuration indicates that the user's Entra ID will be implicitly used instead of any other form of authentication.
- class azure.ai.ml.entities.NotebookAccessKeys(*, primary_access_key: str | None = None, secondary_access_key: str | None = None)[source]¶
Key for notebook resource associated with given workspace.
- class azure.ai.ml.entities.Notification(*, email_on: List[str] | None = None, emails: List[str] | None = None)[source]¶
Configuration for notification.
- Parameters:
email_on (Optional[list[str]]) – Send email notification to user on specified notification type. Accepted values are “JobCompleted”, “JobFailed”, and “JobCancelled”.
emails (Optional[list[str]]) – The email recipient list. Note that this parameter has a character limit of 499, which includes all of the recipient strings and each comma separator.
- class azure.ai.ml.entities.NumericalDriftMetrics(*, jensen_shannon_distance: float | None = None, normalized_wasserstein_distance: float | None = None, population_stability_index: float | None = None, two_sample_kolmogorov_smirnov_test: float | None = None, metric: str | None = None, metric_threshold: float | None = None)[source]¶
Numerical Drift Metrics
- Parameters:
jensen_shannon_distance – The Jensen-Shannon distance between the two distributions
normalized_wasserstein_distance – The normalized Wasserstein distance between the two distributions
population_stability_index – The population stability index between the two distributions
two_sample_kolmogorov_smirnov_test – The two sample Kolmogorov-Smirnov test between the two distributions
- class azure.ai.ml.entities.OneLakeArtifact(*, name: str, type: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
OneLake artifact (data source) backing the OneLake workspace.
- Parameters:
- class azure.ai.ml.entities.OneLakeConnectionArtifact(*, name: str, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Artifact class used by the Connection subclass known as a MicrosoftOneLakeConnection. Supplying this class further specifies the connection as a Lake House connection.
- class azure.ai.ml.entities.OneLakeDatastore(*, name: str, artifact: OneLakeArtifact, one_lake_workspace_name: str, endpoint: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, credentials: NoneCredentialConfiguration | ServicePrincipalConfiguration | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
OneLake datastore that is linked to an Azure ML workspace.
- Parameters:
name (str) – Name of the datastore.
artifact (OneLakeArtifact) – OneLake Artifact. Only LakeHouse artifacts are currently supported.
one_lake_workspace_name (str) – The OneLake workspace name/GUID, e.g. 01234567-abcd-1234-5678-012345678901.
endpoint (str) – The OneLake endpoint to use for the datastore, e.g. https://onelake.dfs.fabric.microsoft.com.
description (str) – Description of the resource.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
credentials (Union[ ServicePrincipalConfiguration, NoneCredentialConfiguration]) – Credentials to use to authenticate against OneLake.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the datastore content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this datastore’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.OnlineDeployment(name: str, *, endpoint_name: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, Any] | None = None, description: str | None = None, model: str | Model | None = None, data_collector: DataCollector | None = None, code_configuration: CodeConfiguration | None = None, environment: str | Environment | None = None, app_insights_enabled: bool | None = False, scale_settings: OnlineScaleSettings | None = None, request_settings: OnlineRequestSettings | None = None, liveness_probe: ProbeSettings | None = None, readiness_probe: ProbeSettings | None = None, environment_variables: Dict[str, str] | None = None, instance_count: int | None = None, instance_type: str | None = None, model_mount_path: str | None = None, code_path: str | PathLike | None = None, scoring_script: str | PathLike | None = None, **kwargs: Any)[source]¶
Online endpoint deployment entity.
- Parameters:
name (str) – Name of the deployment resource.
- Keyword Arguments:
endpoint_name (Optional[str]) – Name of the endpoint resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated, defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
description (Optional[str]) – Description of the resource, defaults to None
model (Optional[Union[str, Model]]) – Model entity for the endpoint deployment, defaults to None
data_collector (Optional[Union[str, DataCollector]]) – Data Collector entity for the endpoint deployment, defaults to None
code_configuration (Optional[CodeConfiguration]) – Code Configuration, defaults to None
environment (Optional[Union[str, Environment]]) – Environment entity for the endpoint deployment, defaults to None
app_insights_enabled (Optional[bool]) – Whether Application Insights is enabled, defaults to False
scale_settings (Optional[OnlineScaleSettings]) – How the online deployment will scale, defaults to None
request_settings (Optional[OnlineRequestSettings]) – Online Request Settings, defaults to None
liveness_probe (Optional[ProbeSettings]) – Liveness probe settings, defaults to None
readiness_probe (Optional[ProbeSettings]) – Readiness probe settings, defaults to None
environment_variables (Optional[Dict[str, str]]) – Environment variables that will be set in deployment, defaults to None
instance_count (Optional[int]) – The instance count used for this deployment, defaults to None
instance_type (Optional[str]) – Azure compute sku, defaults to None
model_mount_path (Optional[str]) – The path to mount the model in custom container, defaults to None
code_path (Optional[Union[str, os.PathLike]]) – Equivalent to code_configuration.code; will be ignored if code_configuration is present. Defaults to None
scoring_script (Optional[Union[str, os.PathLike]]) – Equivalent to code_configuration.code.scoring_script. Will be ignored if code_configuration is present, defaults to None
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the deployment content into a file in yaml format.
- Parameters:
dest (Union[os.PathLike, str, IO[AnyStr]]) – The destination to receive this deployment’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property code_path: str | PathLike | None¶
The code directory containing the scoring script.
- Return type:
Union[str, PathLike]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.OnlineEndpoint(*, name: str | None = None, tags: Dict[str, Any] | None = None, properties: Dict[str, Any] | None = None, auth_mode: str = 'key', description: str | None = None, location: str | None = None, traffic: Dict[str, int] | None = None, mirror_traffic: Dict[str, int] | None = None, identity: IdentityConfiguration | None = None, scoring_uri: str | None = None, openapi_uri: str | None = None, provisioning_state: str | None = None, kind: str | None = None, **kwargs: Any)[source]¶
Online endpoint entity.
- Keyword Arguments:
name (Optional[str]) – Name of the resource, defaults to None
tags (Optional[Dict[str, Any]]) – Tag dictionary. Tags can be added, removed, and updated. defaults to None
properties (Optional[Dict[str, Any]]) – The asset property dictionary, defaults to None
auth_mode – Possible values include: “aml_token”, “key”, defaults to KEY
description (Optional[str]) – Description of the inference endpoint, defaults to None
location (Optional[str]) – Location of the resource, defaults to None
traffic (Optional[Dict[str, int]]) – Traffic rules on how the traffic will be routed across deployments, defaults to None
mirror_traffic (Optional[Dict[str, int]]) – Duplicated live traffic used to run inference against a single deployment, defaults to None
identity (Optional[IdentityConfiguration]) – Identity Configuration, defaults to SystemAssigned
provisioning_state (Optional[str]) – Provisioning state of an endpoint, defaults to None
kind (Optional[str]) – Kind of the resource, we have two kinds: K8s and Managed online endpoints, defaults to None
- abstract dump(dest: str | PathLike | IO | None = None, **kwargs: Any) Dict¶
Dump the object content into a file.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- class azure.ai.ml.entities.OnlineRequestSettings(max_concurrent_requests_per_instance: int | None = None, request_timeout_ms: int | None = None, max_queue_wait_ms: int | None = None)[source]¶
Request Settings entity.
- class azure.ai.ml.entities.OnlineScaleSettings(type: str, **kwargs: Any)[source]¶
Scale settings for online deployment.
- Parameters:
type (str) – Type of the scale settings, allowed values are “default” and “target_utilization”.
- class azure.ai.ml.entities.OpenAIConnection(*, api_key: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A connection geared towards direct connections to OpenAI. Not to be confused with the AzureOpenAIWorkspaceConnection, which is for Azure's OpenAI services.
- Parameters:
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
Get the Boolean indicating whether this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target url for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.OutboundRule(*, name: str | None = None, **kwargs: Any)[source]¶
Base class for Outbound Rules, cannot be instantiated directly. Please see FqdnDestination, PrivateEndpointDestination, and ServiceTagDestination objects to create outbound rules.
- class azure.ai.ml.entities.PackageInputPathId(*, input_path_type: str | None = None, resource_id: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Package input path specified with a resource ID.
- Parameters:
input_path_type (Optional[str]) – The type of the input path. Accepted values are “Url”, “PathId”, and “PathVersion”.
resource_id (Optional[str]) – The resource ID of the input path. e.g. “azureml://subscriptions/<>/resourceGroups/ <>/providers/Microsoft.MachineLearningServices/workspaces/<>/data/<>/versions/<>”.
- class azure.ai.ml.entities.PackageInputPathUrl(*, input_path_type: str | None = None, url: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Package input path specified with a url.
- Parameters:
- class azure.ai.ml.entities.PackageInputPathVersion(*, input_path_type: str | None = None, resource_name: str | None = None, resource_version: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Package input path specified with a resource name and version.
- class azure.ai.ml.entities.Parallel(*, component: ParallelComponent | str, compute: str | None = None, inputs: Dict[str, NodeOutput | Input | str | bool | int | float | Enum] | None = None, outputs: Dict[str, str | Output] | None = None, retry_settings: RetrySettings | Dict[str, str] | None = None, logging_level: str | None = None, max_concurrency_per_instance: int | None = None, error_threshold: int | None = None, mini_batch_error_threshold: int | None = None, input_data: str | None = None, task: ParallelTask | RunFunction | Dict | None = None, partition_keys: List | None = None, mini_batch_size: int | str | None = None, resources: JobResourceConfiguration | None = None, environment_variables: Dict | None = None, identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, **kwargs: Any)[source]¶
Base class for parallel node, used for parallel component version consumption.
You should not instantiate this class directly. Instead, create it from the builder function: parallel.
- Parameters:
component (ParallelComponent) – ID or instance of the parallel component/job to be run for the step
name (str) – Name of the parallel node
description (str) – Description of the command
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated
display_name (str) – Display name of the job
retry_settings (BatchRetrySettings) – Parallel job run failed retry
logging_level (str) – A string of the logging level name
max_concurrency_per_instance (int) – The maximum parallelism that each compute instance has
error_threshold (int) – The number of item processing failures that should be ignored
mini_batch_error_threshold (int) – The number of mini-batch processing failures that should be ignored
task (ParallelTask) – The parallel task
mini_batch_size (str) – For FileDataset input, this field is the number of files a user script can process in one run() call. For TabularDataset input, this field is the approximate size of data the user script can process in one run() call. Example values are 1024, 1024KB, 10MB, and 1GB. (Optional; the default is 10 files for FileDataset and 1MB for TabularDataset.) This value can be set through PipelineParameter
partition_keys (List) – The keys used to partition dataset into mini-batches. If specified, the data with the same key will be partitioned into the same mini-batch. If both partition_keys and mini_batch_size are specified, the partition keys will take effect. The input(s) must be partitioned dataset(s), and the partition_keys must be a subset of the keys of every input dataset for this to work.
input_data (str) – The input data
inputs (dict) – Inputs of the component/job
outputs (dict) – Outputs of the component/job
- Keyword Arguments:
identity (Optional[Union[ dict[str, str], ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – The identity that the command job will use while running on compute.
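The partition_keys behavior described above can be illustrated with a plain-Python sketch (this is not SDK code; the record list, key names, and the partition_into_mini_batches helper are made up for illustration): rows that share the same values for the partition keys always land in the same mini-batch.

```python
from collections import defaultdict

def partition_into_mini_batches(records, partition_keys):
    """Group records so that rows sharing the same partition-key values
    end up in the same mini-batch, mirroring the behavior described
    for the partition_keys parameter."""
    buckets = defaultdict(list)
    for record in records:
        key = tuple(record[k] for k in partition_keys)
        buckets[key].append(record)
    return list(buckets.values())

records = [
    {"store": "A", "day": 1, "sales": 10},
    {"store": "A", "day": 2, "sales": 12},
    {"store": "B", "day": 1, "sales": 7},
]
mini_batches = partition_into_mini_batches(records, ["store"])
# Two mini-batches: both "store A" rows together, the "store B" row alone.
```

Note that when both partition_keys and mini_batch_size are given, the SDK documents that the partition keys take effect.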
- clear() None. Remove all items from D.¶
- copy() a shallow copy of D¶
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
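The dest semantics described above (a path creates a new file, an open stream is written to directly, raising FileExistsError and IOError respectively) can be sketched in plain Python; dump_yaml_to is a hypothetical helper for illustration, not part of the SDK:

```python
import io
import os

def dump_yaml_to(dest, content: str) -> None:
    """Hypothetical helper mirroring the documented dump() contract:
    a path creates a new file (FileExistsError if it already exists);
    an open stream is written to directly (IOError if not writable)."""
    if isinstance(dest, (str, os.PathLike)):
        # Mode "x" creates the file and raises FileExistsError if present.
        with open(dest, "x", encoding="utf-8") as f:
            f.write(content)
    else:
        if not dest.writable():
            raise IOError("destination stream is not writable")
        dest.write(content)

buf = io.StringIO()
dump_yaml_to(buf, "name: my_job\n")
# buf now holds the YAML text that would otherwise go to a new file.
```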
- fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- set_resources(*, instance_type: str | List[str] | None = None, instance_count: int | None = None, properties: Dict | None = None, docker_args: str | None = None, shm_size: str | None = None, **kwargs: Any) None[source]¶
Set the resources for the parallel job.
- Keyword Arguments:
instance_type (Union[str, List[str]]) – The instance type or a list of instance types used as supported by the compute target.
instance_count (int) – The number of instances or nodes used by the compute target.
properties (dict) – The property dictionary for the resources.
docker_args (str) – Extra arguments to pass to the Docker run command.
shm_size (str) – Size of the Docker container’s shared memory block.
- setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
- values() an object providing a view on D's values¶
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property component: str | ParallelComponent¶
Get the component of the parallel job.
- Returns:
The component of the parallel job.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None¶
The identity that the job will use while running on compute.
- Returns:
The identity that the job will use while running on compute.
- Return type:
Optional[Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]
- property resources: JobResourceConfiguration | None¶
Get the resource configuration for the parallel job.
- Returns:
The resource configuration for the parallel job.
- Return type:
- property retry_settings: RetrySettings¶
Get the retry settings for the parallel job.
- Returns:
The retry settings for the parallel job.
- Return type:
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build, or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- property studio_url: str | None¶
Azure ML studio endpoint.
- Returns:
The URL to the job details page.
- Return type:
Optional[str]
- property task: ParallelTask | None¶
Get the parallel task.
- Returns:
The parallel task.
- Return type:
- class azure.ai.ml.entities.ParallelComponent(*, name: str | None = None, version: str | None = None, description: str | None = None, tags: Dict[str, Any] | None = None, display_name: str | None = None, retry_settings: RetrySettings | None = None, logging_level: str | None = None, max_concurrency_per_instance: int | None = None, error_threshold: int | None = None, mini_batch_error_threshold: int | None = None, task: ParallelTask | None = None, mini_batch_size: str | None = None, partition_keys: List | None = None, input_data: str | None = None, resources: JobResourceConfiguration | None = None, inputs: Dict | None = None, outputs: Dict | None = None, code: str | None = None, instance_count: int | None = None, is_deterministic: bool = True, **kwargs: Any)[source]¶
Parallel component version, used to define a parallel component.
- Parameters:
name (str) – Name of the component. Defaults to None
version (str) – Version of the component. Defaults to None
description (str) – Description of the component. Defaults to None
tags (dict) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None
display_name (str) – Display name of the component. Defaults to None
retry_settings (BatchRetrySettings) – parallel component run failed retry. Defaults to None
logging_level (str) – A string of the logging level name. Defaults to None
max_concurrency_per_instance (int) – The maximum parallelism that each compute instance has. Defaults to None
error_threshold (int) – The number of item processing failures that should be ignored. Defaults to None
mini_batch_error_threshold (int) – The number of mini-batch processing failures that should be ignored. Defaults to None
task (ParallelTask) – The parallel task. Defaults to None
mini_batch_size (str) – For FileDataset input, this field is the number of files a user script can process in one run() call. For TabularDataset input, this field is the approximate size of data the user script can process in one run() call. Example values are 1024, 1024KB, 10MB, and 1GB. (Optional; the default is 10 files for FileDataset and 1MB for TabularDataset.) This value can be set through PipelineParameter.
partition_keys (list) – The keys used to partition the dataset into mini-batches. Defaults to None. If specified, the data with the same key will be partitioned into the same mini-batch. If both partition_keys and mini_batch_size are specified, partition_keys will take effect. The input(s) must be partitioned dataset(s), and the partition_keys must be a subset of the keys of every input dataset for this to work.
input_data (str) – The input data. Defaults to None
resources (Union[dict, JobResourceConfiguration]) – Compute Resource configuration for the component. Defaults to None
inputs (dict) – Inputs of the component. Defaults to None
outputs (dict) – Outputs of the component. Defaults to None
code (str) – promoted property from task.code
instance_count (int) – promoted property from resources.instance_count. Defaults to None
is_deterministic (bool) – Whether the parallel component is deterministic. Defaults to True
- Raises:
ValidationException – Raised if ParallelComponent cannot be successfully validated. Details will be provided in the error message.
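The mini_batch_size values quoted above (1024, 1024KB, 10MB, 1GB) follow a simple size-string convention. A plain-Python sketch of how such a string could be interpreted (parse_mini_batch_size is a hypothetical helper for illustration, not an SDK function):

```python
import re

_UNITS = {"": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_mini_batch_size(value: str) -> int:
    """Interpret a size string like '1024', '1024KB', '10MB', or '1GB'
    as a number of bytes; a bare number is taken as bytes."""
    match = re.fullmatch(r"(\d+)\s*(KB|MB|GB)?", value.strip(), re.IGNORECASE)
    if not match:
        raise ValueError(f"not a valid mini-batch size: {value!r}")
    number, unit = match.groups()
    return int(number) * _UNITS[(unit or "").upper()]

assert parse_mini_batch_size("10MB") == 10 * 1024 ** 2
```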
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the component content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this component’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property code: str | None¶
Return value of promoted property task.code, which is a local or remote path pointing at source code.
- Returns:
Value of task.code.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property display_name: str | None¶
Display name of the component.
- Returns:
Display name of the component.
- Return type:
- property environment: str | None¶
Return value of the promoted property task.environment, indicating the environment that the training job will run in.
- Returns:
Value of task.environment.
- Return type:
Optional[Environment, str]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property instance_count: int | None¶
Return value of promoted property resources.instance_count.
- Returns:
Value of resources.instance_count.
- Return type:
Optional[int]
- property is_deterministic: bool | None¶
Whether the component is deterministic.
- Returns:
Whether the component is deterministic
- Return type:
- property resources: dict | JobResourceConfiguration | None¶
- property retry_settings: RetrySettings | None¶
- property task: ParallelTask | None¶
- class azure.ai.ml.entities.ParallelTask(*, type: str | None = None, code: str | None = None, entry_script: str | None = None, program_arguments: str | None = None, model: str | None = None, append_row_to: str | None = None, environment: Environment | str | None = None, **kwargs: Any)[source]¶
Parallel task.
- Parameters:
type (str) – The type of the parallel task. Possible values are ‘run_function’ and ‘model’.
code (str) – A local or remote path pointing at source code.
entry_script (str) – User script which will be run in parallel on multiple nodes. This is specified as a local file path. The entry_script should contain two functions: init(), which should be used for any costly or common preparation for subsequent inferences, e.g., deserializing and loading the model into a global object; and run(mini_batch), the method to be parallelized. Each invocation of run() receives one mini-batch: batch inference passes either a list or a Pandas DataFrame as the argument. Each entry in mini_batch is a file path if the input is a FileDataset, or a Pandas DataFrame if the input is a TabularDataset. run() should return a Pandas DataFrame or an array. For the append_row output action, these returned elements are appended into the common output file. For summary_only, the contents of the elements are ignored. For all output actions, each returned output element indicates one successful inference of an input element in the input mini-batch. Each parallel worker process calls init() once and then loops over run() until all mini-batches are processed.
program_arguments (str) – The arguments of the parallel task.
model (str) – The model of the parallel task.
append_row_to (str) – All values output by run() method invocations will be aggregated into one unique file which is created in the output location. If it is not set, ‘summary_only’ is used, which means the user script is expected to store the output itself.
environment (Union[Environment, str]) – Environment that training job will run in.
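The init()/run() contract described above can be sketched as a minimal entry script plus a driver loop that mimics how one worker process calls it (the model stand-in and the driver are illustrative, not SDK code):

```python
# Minimal sketch of a parallel-task entry script: init() runs once per
# worker for costly setup; run(mini_batch) is called once per mini-batch
# and returns one output element per successfully processed input.
model = None

def init():
    global model
    # Stand-in for expensive setup such as deserializing a model.
    model = lambda x: x * 2

def run(mini_batch):
    # For a FileDataset, each entry would be a file path; plain numbers
    # are used here to keep the sketch self-contained.
    return [model(item) for item in mini_batch]

# Driver loop mimicking one worker process: init once, then run per batch.
init()
results = []
for batch in [[1, 2], [3]]:
    results.extend(run(batch))
# results == [2, 4, 6]
```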
- class azure.ai.ml.entities.ParameterizedCommand(command: str | None = '', resources: dict | JobResourceConfiguration | None = None, code: PathLike | str | None = None, environment_variables: Dict | None = None, distribution: Dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None = None, environment: Environment | str | None = None, queue_settings: QueueSettings | None = None, **kwargs: Dict)[source]¶
Command component version that contains the command and supporting parameters for a Command component or job.
This class should not be instantiated directly. Instead, use the child class azure.ai.ml.entities.CommandComponent.
- Parameters:
command (str) – The command to be executed. Defaults to “”.
resources (Optional[Union[dict, JobResourceConfiguration]]) – The compute resource configuration for the command.
code (Optional[str]) – The source code to run the job. Can be a local path or “http:”, “https:”, or “azureml:” url pointing to a remote location.
environment_variables (Optional[dict[str, str]]) – A dictionary of environment variable names and values. These environment variables are set on the process where user script is being executed.
distribution (Optional[Union[dict, PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]]) – The distribution configuration for distributed jobs.
environment (Optional[Union[str, Environment]]) – The environment that the job will run in.
queue_settings (Optional[QueueSettings]) – The queue settings for the job.
- Keyword Arguments:
kwargs (dict) – A dictionary of additional configuration parameters.
- property distribution: dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None¶
The configuration for the distributed command component or job.
- Returns:
The distribution configuration.
- Return type:
Union[PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]
- property resources: JobResourceConfiguration¶
The compute resource configuration for the command component or job.
- Returns:
The compute resource configuration for the command component or job.
- Return type:
- class azure.ai.ml.entities.PatTokenConfiguration(*, pat: str)[source]¶
Personal access token credentials.
- Parameters:
pat (str) – Personal access token.
Example:
Configuring a personal access token configuration for a WorkspaceConnection.

from azure.ai.ml.entities import PatTokenConfiguration, WorkspaceConnection

ws_connection = WorkspaceConnection(
    target="my_target",
    type="python_feed",
    credentials=PatTokenConfiguration(pat="abcdefghijklmnopqrstuvwxyz"),
    name="my_connection",
    metadata=None,
)
- class azure.ai.ml.entities.Pipeline(*, component: Component | str, inputs: Dict[str, Input | str | bool | int | float | Enum] | None = None, outputs: Dict[str, str | Output] | None = None, settings: PipelineJobSettings | None = None, **kwargs: Any)[source]¶
Base class for pipeline node, used for pipeline component version consumption. You should not instantiate this class directly. Instead, you should use @pipeline decorator to create a pipeline node.
- Parameters:
component (Union[Component, str]) – Id or instance of the pipeline component/job to be run for the step.
inputs (Optional[Dict[str, Union[ Input, str, bool, int, float, Enum, "Input"]]].) – Inputs of the pipeline node.
outputs (Optional[Dict[str, Union[str, Output, "Output"]]]) – Outputs of the pipeline node.
settings (Optional[PipelineJobSettings]) – Setting of pipeline node, only taking effect for root pipeline job.
- clear() None. Remove all items from D.¶
- copy() a shallow copy of D¶
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
- values() an object providing a view on D's values¶
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property component: str | Component | None¶
Id or instance of the pipeline component/job to be run for the step.
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property settings: PipelineJobSettings | None¶
Settings of the pipeline.
- Note: settings is available only when the node is created as a job, i.e. via ml_client.jobs.create_or_update(node).
- Returns:
Settings of the pipeline.
- Return type:
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build, or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- class azure.ai.ml.entities.PipelineComponent(*, name: str | None = None, version: str | None = None, description: str | None = None, tags: Dict | None = None, display_name: str | None = None, inputs: Dict | None = None, outputs: Dict | None = None, jobs: Dict[str, BaseNode] | None = None, is_deterministic: bool | None = None, **kwargs: Any)[source]¶
Pipeline component, currently used to store components in an azure.ai.ml.dsl.pipeline.
- Parameters:
name (str) – Name of the component.
version (str) – Version of the component.
description (str) – Description of the component.
tags (dict) – Tag dictionary. Tags can be added, removed, and updated.
display_name (str) – Display name of the component.
inputs (dict) – Component inputs.
outputs (dict) – Component outputs.
jobs (Dict[str, BaseNode]) – Id to components dict inside the pipeline definition.
is_deterministic (bool) – Whether the pipeline component is deterministic.
- Raises:
ValidationException – Raised if PipelineComponent cannot be successfully validated. Details will be provided in the error message.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the component content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this component’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property display_name: str | None¶
Display name of the component.
- Returns:
Display name of the component.
- Return type:
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_deterministic: bool | None¶
Whether the component is deterministic.
- Returns:
Whether the component is deterministic
- Return type:
- property jobs: Dict[str, BaseNode]¶
Return a dictionary from component variable name to component object.
- Returns:
Dictionary mapping component variable names to component objects.
- Return type:
Dict[str, BaseNode]
- class azure.ai.ml.entities.PipelineComponentBatchDeployment(*, name: str | None, endpoint_name: str | None = None, component: Component | str | None = None, settings: Dict[str, str] | None = None, job_definition: Dict[str, BaseNode] | None = None, tags: Dict | None = None, description: str | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Pipeline Component Batch Deployment entity.
- Parameters:
type (Optional[str]) – Job definition type. Allowed value: “pipeline”
name (Optional[str]) – Name of the deployment resource.
description (Optional[str]) – Description of the deployment resource.
component (Optional[Union[Component, str]]) – Component definition.
settings (Optional[Dict[str, Any]]) – Run-time settings for the pipeline job.
tags (Optional[Dict[str, Any]]) – A set of tags. The tags which will be applied to the job.
job_definition (Optional[Dict[str, BaseNode]]) – Arm ID or PipelineJob entity of an existing pipeline job.
endpoint_name (Optional[str]) – Name of the Endpoint resource, defaults to None.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the deployment content into a file in yaml format.
- Parameters:
dest (Union[os.PathLike, str, IO[AnyStr]]) – The destination to receive this deployment’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.PipelineJob(*, component: str | PipelineComponent | Component | None = None, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict[str, Output] | None = None, name: str | None = None, description: str | None = None, display_name: str | None = None, experiment_name: str | None = None, jobs: Dict[str, BaseNode] | None = None, settings: PipelineJobSettings | None = None, identity: ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, compute: str | None = None, tags: Dict[str, str] | None = None, **kwargs: Any)[source]¶
Pipeline job.
You should not instantiate this class directly. Instead, you should use the @pipeline decorator to create a PipelineJob.
- Parameters:
component (Union[str, PipelineComponent]) – Pipeline component version. The field is mutually exclusive with ‘jobs’.
inputs (dict[str, Union[Input, str, bool, int, float]]) – Inputs to the pipeline job.
name (str) – Name of the PipelineJob. Defaults to None.
description (str) – Description of the pipeline job. Defaults to None.
display_name (str) – Display name of the pipeline job. Defaults to None.
experiment_name (str) – Name of the experiment the job will be created under. If None is provided, the experiment name will be set to the name of the current directory. Defaults to None.
jobs (dict[str, BaseNode]) – Pipeline component node name to component object. Defaults to None.
settings (PipelineJobSettings) – Settings of the pipeline job. Defaults to None.
identity (Optional[Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – Identity that the training job will use while running on compute. Defaults to None.
compute (str) – Compute target name of the built pipeline. Defaults to None.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
kwargs (dict) – A dictionary of additional configuration parameters. Defaults to None.
Example:
Shows how to create a pipeline using this class.¶

from azure.ai.ml.entities import PipelineJob, PipelineJobSettings

pipeline_job = PipelineJob(
    description="test pipeline job",
    tags={},
    display_name="test display name",
    experiment_name="pipeline_job_samples",
    properties={},
    settings=PipelineJobSettings(force_rerun=True, default_compute="cpu-cluster"),
    jobs={"component1": component_func(component_in_number=1.0, component_in_path=uri_file_input)},
)
ml_client.jobs.create_or_update(pipeline_job)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
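The dest handling described above (a new file for a path, a direct write for a stream, with FileExistsError and IOError on the failure cases) can be sketched in plain Python. This is an illustrative stand-in for the contract, not the SDK's internal implementation:

```python
import io
import os
from typing import IO, Union

def dump_spec(text: str, dest: Union[str, os.PathLike, IO]) -> None:
    """Write text to dest, mirroring the documented dump() contract."""
    if isinstance(dest, (str, os.PathLike)):
        # Mode "x" creates a new file and raises FileExistsError if one exists.
        with open(dest, "x", encoding="utf-8") as f:
            f.write(text)
    else:
        # An already-open stream is written to directly; writing to a
        # non-writable stream raises an OSError/IOError-family exception.
        dest.write(text)

buf = io.StringIO()
dump_spec("name: my-job\n", buf)
print(buf.getvalue())  # → name: my-job
```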
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property settings: PipelineJobSettings | None¶
Settings of the pipeline job.
- Returns:
Settings of the pipeline job.
- Return type:
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
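A small helper over the status values listed above can decide when to stop polling a submitted job. The grouping is taken directly from the list above; the helper itself is an illustrative sketch, not part of the SDK:

```python
# Grouping of the status values listed above; NotResponding is ambiguous
# (the run might recover), so it is treated as terminal here for simplicity.
TERMINAL_STATUSES = {"Completed", "Failed", "Canceled", "NotResponding"}

def is_terminal(status: str) -> bool:
    """Return True once a job has reached a final state and polling can stop."""
    return status in TERMINAL_STATUSES

print(is_terminal("Running"))    # → False
print(is_terminal("Completed"))  # → True
```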
- class azure.ai.ml.entities.PipelineJobSettings(default_datastore: str | None = None, default_compute: str | None = None, continue_on_step_failure: bool | None = None, force_rerun: bool | None = None, **kwargs: Any)[source]¶
Settings of PipelineJob.
- Parameters:
default_datastore (str) – The default datastore of the pipeline.
default_compute (str) – The default compute target of the pipeline.
continue_on_step_failure (bool) – Flag indicating whether to continue pipeline execution if a step fails.
force_rerun (bool) – Flag indicating whether to force rerun pipeline execution.
Example:
Shows how to set pipeline properties using this class.¶

from azure.ai.ml.entities import PipelineJob, PipelineJobSettings

pipeline_job = PipelineJob(
    description="test pipeline job",
    tags={},
    display_name="test display name",
    experiment_name="pipeline_job_samples",
    properties={},
    settings=PipelineJobSettings(force_rerun=True, default_compute="cpu-cluster"),
    jobs={"component1": component_func(component_in_number=1.0, component_in_path=uri_file_input)},
)
ml_client.jobs.create_or_update(pipeline_job)
Initialize an attribute dictionary.
- Parameters:
allowed_keys (dict) – A dictionary of keys that are allowed to be set as arbitrary attributes. None means all keys can be set as arbitrary attributes.
kwargs (dict) – A dictionary of additional configuration parameters.
- clear() None. Remove all items from D.¶
- copy() a shallow copy of D¶
- fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
- values() an object providing a view on D's values¶
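PipelineJobSettings is such an attribute dictionary: it exposes the full dict interface above while also letting settings be read and written as attributes. A minimal sketch of that pattern (not the SDK's actual implementation):

```python
class AttrDict(dict):
    """A dict whose keys are also readable and writable as attributes."""

    def __getattr__(self, key):
        # Called only when normal attribute lookup fails; fall back to the dict.
        try:
            return self[key]
        except KeyError as exc:
            raise AttributeError(key) from exc

    def __setattr__(self, key, value):
        # Attribute writes land in the dict, so both views stay in sync.
        self[key] = value

settings = AttrDict(default_compute="cpu-cluster", force_rerun=True)
settings.continue_on_step_failure = False  # attribute write lands in the dict
print(settings["default_compute"])         # → cpu-cluster
print(sorted(settings.keys()))
```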
- class azure.ai.ml.entities.PredictionDriftMetricThreshold(*, data_type: MonitorFeatureType | None = None, threshold: float | None = None, numerical: NumericalDriftMetrics | None = None, categorical: CategoricalDriftMetrics | None = None)[source]¶
Prediction drift metric threshold
- Parameters:
numerical (Optional[NumericalDriftMetrics]) – Numerical drift metrics.
categorical (Optional[CategoricalDriftMetrics]) – Categorical drift metrics.
- class azure.ai.ml.entities.PredictionDriftSignal(*, production_data: ProductionData | None = None, reference_data: ReferenceData | None = None, metric_thresholds: PredictionDriftMetricThreshold, alert_enabled: bool = False, properties: Dict[str, str] | None = None)[source]¶
Prediction drift signal.
- Variables:
type (str) – The type of the signal, set to “prediction_drift” for this class.
- Parameters:
production_data – The data for which drift will be calculated
reference_data – The data to calculate drift against
metric_thresholds – Metrics to calculate and their associated thresholds
alert_enabled – The current notification mode for this signal
properties – Dictionary of additional properties.
- class azure.ai.ml.entities.PrivateEndpoint(approval_type: str | None = None, connections: Dict[str, EndpointConnection] | None = None)[source]¶
Private Endpoint of a workspace.
- Parameters:
approval_type (str) – Approval type of the private endpoint.
connections (Dict[str, EndpointConnection]) – Dictionary mapping connection names to private endpoint connections.
- class azure.ai.ml.entities.PrivateEndpointDestination(*, name: str, service_resource_id: str, subresource_target: str, spark_enabled: bool = False, fqdns: List[str] | None = None, **kwargs: Any)[source]¶
Class representing a Private Endpoint outbound rule.
- Parameters:
name (str) – Name of the outbound rule.
service_resource_id (str) – The resource URI of the root service that supports creation of the private link.
subresource_target (str) – The target endpoint of the subresource of the service.
spark_enabled (bool) – Indicates if the private endpoint can be used for Spark jobs, defaults to False.
fqdns (List[str]) – String list of FQDNs particular to the Private Endpoint resource creation. For application gateway Private Endpoints, this is the FQDN which will resolve to the private IP of the application gateway PE inside the workspace’s managed network.
- Variables:
type (str) – Type of the outbound rule. Set to “PrivateEndpoint” for this class.
Creating a PrivateEndpointDestination outbound rule object.¶

from azure.ai.ml.entities import PrivateEndpointDestination

# Example private endpoint outbound to a blob
blobrule = PrivateEndpointDestination(
    name="blobrule",
    service_resource_id="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.Storage/storageAccounts/storage-name",
    subresource_target="blob",
    spark_enabled=False,
)

# Example private endpoint outbound to an application gateway
appGwRule = PrivateEndpointDestination(
    name="appGwRule",
    service_resource_id="/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/test-rg/providers/Microsoft.Network/applicationGateways/appgw-name",  # cspell:disable-line
    subresource_target="appGwPrivateFrontendIpIPv4",
    spark_enabled=False,
    fqdns=["contoso.com", "contoso2.com"],
)
- class azure.ai.ml.entities.ProbeSettings(*, failure_threshold: int | None = None, success_threshold: int | None = None, timeout: int | None = None, period: int | None = None, initial_delay: int | None = None)[source]¶
Settings on how to probe an endpoint.
- Parameters:
failure_threshold (int) – Threshold for probe failures, defaults to 30
success_threshold (int) – Threshold for probe success, defaults to 1
timeout (int) – timeout in seconds, defaults to 2
period (int) – How often (in seconds) to perform the probe, defaults to 10
initial_delay (int) – How long (in seconds) to wait for the first probe, defaults to 10
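With the defaults above, the worst-case time before an endpoint is declared unhealthy follows from simple arithmetic: the initial delay plus one probe period per allowed failure. A quick sketch (the formula is the standard probe-semantics assumption, not something stated explicitly by the SDK docs):

```python
def worst_case_unhealthy_seconds(initial_delay=10, period=10, failure_threshold=30):
    # Wait initial_delay seconds before the first probe, then allow up to
    # failure_threshold consecutive failed probes spaced period seconds apart.
    return initial_delay + period * failure_threshold

print(worst_case_unhealthy_seconds())  # → 310
```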
- class azure.ai.ml.entities.ProductionData(*, input_data: Input, data_context: MonitorDatasetContext | None = None, pre_processing_component: str | None = None, data_window: BaselineDataRange | None = None, data_column_names: Dict[str, str] | None = None)[source]¶
Production Data
- Parameters:
input_data – The data for which drift will be calculated
data_context – The context of the input dataset. Possible values include: model_inputs, model_outputs, training, test, validation, ground_truth
pre_processing_component (string) – ARM resource ID of the component resource used to preprocess the data.
data_window – The number of days or a time frame that a single monitor looks back over the target.
- class azure.ai.ml.entities.Project(*, name: str, hub_id: str, description: str | None = None, tags: Dict[str, str] | None = None, display_name: str | None = None, location: str | None = None, resource_group: str | None = None, **kwargs)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A Project is a lightweight object for orchestrating AI applications, and is parented by a hub. Unlike a standard workspace, a project does not have a variety of sub-resources directly associated with it. Instead, its parent hub manages these resources, which are then used by the project and its siblings.
As a type of workspace, project management is controlled by an MLClient’s workspace operations.
- Parameters:
name (str) – The name of the project.
hub_id (str) – The hub parent of the project, as a resource ID.
description (Optional[str]) – The description of the project.
tags (Optional[Dict[str, str]]) – Tags associated with the project.
display_name (Optional[str]) – The display name of the project.
location (Optional[str]) – The location of the project. Must match that of the parent hub and is automatically assigned to match the parent hub’s location during creation.
resource_group (Optional[str]) – The project’s resource group name.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the workspace spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this workspace’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property discovery_url: str | None¶
Backend service base URLs for the workspace.
- Returns:
Backend service URLs of the workspace
- Return type:
- property hub_id: str¶
The UID of the hub parent of the project.
- Returns:
Resource ID of the parent hub.
- Return type:
- class azure.ai.ml.entities.QueueSettings(*, job_tier: str | None = None, priority: str | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Queue settings for a pipeline job.
- Variables:
- Keyword Arguments:
job_tier (Optional[Literal]) – The job tier. Accepted values are “Spot”, “Basic”, “Standard”, and “Premium”.
priority (Optional[Literal]) – The priority of the job on a compute. Accepted values are “low”, “medium”, and “high”. Defaults to “medium”.
kwargs (Optional[dict]) – Additional properties for QueueSettings.
- class azure.ai.ml.entities.RecurrencePattern(*, hours: int | List[int], minutes: int | List[int], week_days: str | List[str] | None = None, month_days: int | List[int] | None = None)[source]¶
Recurrence pattern for a job schedule.
- Keyword Arguments:
hours (Union[int, List[int]]) – The number of hours for the recurrence schedule pattern.
minutes (Union[int, List[int]]) – The number of minutes for the recurrence schedule pattern.
week_days (Optional[Union[str, List[str]]]) – A list of days of the week for the recurrence schedule pattern. Acceptable values include: “monday”, “tuesday”, “wednesday”, “thursday”, “friday”, “saturday”, “sunday”
month_days (Optional[Union[int, List[int]]]) – A list of days of the month for the recurrence schedule pattern.
Example:
Configuring a JobSchedule to use a RecurrencePattern.¶

from azure.ai.ml import load_job
from azure.ai.ml.entities import JobSchedule, RecurrencePattern, RecurrenceTrigger

pipeline_job = load_job("./sdk/ml/azure-ai-ml/tests/test_configs/command_job/command_job_test_local_env.yml")
trigger = RecurrenceTrigger(
    frequency="week",
    interval=4,
    schedule=RecurrencePattern(hours=10, minutes=15, week_days=["Monday", "Tuesday"]),
    start_time="2023-03-10",
)
job_schedule = JobSchedule(name="simple_sdk_create_schedule", trigger=trigger, create_job=pipeline_job)
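The hours and minutes fields form a cross product: hours=10, minutes=15 fires once at 10:15 on each selected day, while list values fire at every (hour, minute) pair. A sketch of that expansion (an illustration of the scheduling semantics assumed here, not SDK code):

```python
from itertools import product

def trigger_times(hours, minutes):
    """Expand hours x minutes into the (hour, minute) pairs a pattern fires at."""
    hours = hours if isinstance(hours, list) else [hours]
    minutes = minutes if isinstance(minutes, list) else [minutes]
    return sorted(product(hours, minutes))

print(trigger_times(10, 15))            # → [(10, 15)]
print(trigger_times([9, 17], [0, 30]))  # → [(9, 0), (9, 30), (17, 0), (17, 30)]
```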
- class azure.ai.ml.entities.RecurrenceTrigger(*, frequency: str, interval: int, schedule: RecurrencePattern | None = None, start_time: str | datetime | None = None, end_time: str | datetime | None = None, time_zone: str | TimeZone = TimeZone.UTC)[source]¶
Recurrence trigger for a job schedule.
- Keyword Arguments:
start_time (Optional[Union[str, datetime]]) – Specifies the start time of the schedule in ISO 8601 format.
end_time (Optional[Union[str, datetime]]) – Specifies the end time of the schedule in ISO 8601 format. Note that end_time is not supported for compute schedules.
time_zone (Union[str, TimeZone]) – The time zone where the schedule will run. Defaults to UTC(+00:00). Note that this applies to the start_time and end_time.
frequency (str) – Specifies the frequency that the schedule should be triggered with. Possible values include: “minute”, “hour”, “day”, “week”, “month”.
interval (int) – Specifies the interval in conjunction with the frequency that the schedule should be triggered with.
schedule (Optional[RecurrencePattern]) – Specifies the recurrence pattern.
Example:
Configuring a JobSchedule to trigger recurrence every 4 weeks.¶

from azure.ai.ml import load_job
from azure.ai.ml.entities import JobSchedule, RecurrencePattern, RecurrenceTrigger

pipeline_job = load_job("./sdk/ml/azure-ai-ml/tests/test_configs/command_job/command_job_test_local_env.yml")
trigger = RecurrenceTrigger(
    frequency="week",
    interval=4,
    schedule=RecurrencePattern(hours=10, minutes=15, week_days=["Monday", "Tuesday"]),
    start_time="2023-03-10",
)
job_schedule = JobSchedule(name="simple_sdk_create_schedule", trigger=trigger, create_job=pipeline_job)
- class azure.ai.ml.entities.ReferenceData(*, input_data: Input, data_context: MonitorDatasetContext | None = None, pre_processing_component: str | None = None, data_window: BaselineDataRange | None = None, data_column_names: Dict[str, str] | None = None)[source]¶
Reference Data
- Parameters:
input_data – The data for which drift will be calculated
data_context – The context of the input dataset. Possible values include: model_inputs, model_outputs, training, test, validation, ground_truth
pre_processing_component (string) – ARM resource ID of the component resource used to preprocess the data.
target_column_name (string) – The name of the target column in the dataset.
data_window – The number of days or a time frame that a single monitor looks back over the target.
- class azure.ai.ml.entities.Registry(*, name: str, location: str, identity: IdentityConfiguration | None = None, tags: Dict[str, str] | None = None, public_network_access: str | None = None, discovery_url: str | None = None, intellectual_property: IntellectualProperty | None = None, managed_resource_group: str | None = None, mlflow_registry_uri: str | None = None, replication_locations: List[RegistryRegionDetails] | None, **kwargs: Any)[source]¶
Azure ML registry.
- Parameters:
name (str) – Name of the registry. Must be globally unique and is immutable.
location (str) – The location this registry resource is located in.
identity (ManagedServiceIdentity) – The registry's system-assigned managed identity.
tags (dict) – Tags of the registry.
public_network_access (str) – Whether to allow public endpoint connectivity.
discovery_url (str) – Backend service base url for the registry.
intellectual_property (IntellectualProperty) – Experimental Intellectual property publisher.
managed_resource_group (str) – Managed resource group created for the registry.
mlflow_registry_uri (str) – MLflow tracking URI for the registry.
region_details (List[RegistryRegionDetails]) – Details of each region the registry is in.
kwargs (dict) – A dictionary of additional configuration parameters.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the registry spec into a file in yaml format.
- Parameters:
dest (str) – Path to a local file as the target; a new file will be created, and an exception is raised if the file exists.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.RegistryRegionDetails(*, acr_config: List[str | SystemCreatedAcrAccount] | None = None, location: str | None = None, storage_config: List[str] | SystemCreatedStorageAccount | None = None)[source]¶
Details for each region a registry is in.
- Parameters:
acr_details (List[Union[str, SystemCreatedAcrAccount]]) – List of ACR account details. Each value can either be a single string representing the arm_resource_id of a user-created acr_details object, or an entire SystemCreatedAcrAccount object.
location (str) – The location where the registry exists.
storage_account_details (Union[List[str], SystemCreatedStorageAccount]) – List of storage accounts. Each value can either be a single string representing the arm_resource_id of a user-created storage account, or an entire SystemCreatedStorageAccount object.
- class azure.ai.ml.entities.RequestLogging(*, capture_headers: List[str] | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Request Logging deployment entity.
- class azure.ai.ml.entities.Resource(name: str | None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Base class for entity classes.
Resource is an abstract object that serves as a base for creating resources. It contains common properties and methods for all resources.
This class should not be instantiated directly. Instead, use one of its subclasses.
- Parameters:
- Keyword Arguments:
print_as_yaml (bool) – Specifies if the resource should print out as a YAML-formatted object. If False, the resource will print out in a more-compact style. By default, the YAML output is only used in Jupyter notebooks. Be aware that some bookkeeping values are shown only in the non-YAML output.
- abstract dump(dest: str | PathLike | IO, **kwargs: Any) Any[source]¶
Dump the object content into a file.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- class azure.ai.ml.entities.ResourceConfiguration(*, instance_count: int | None = None, instance_type: str | None = None, properties: Dict[str, Any] | None = None, **kwargs: Any)[source]¶
Resource configuration for a job.
This class should not be instantiated directly. Instead, use its subclasses.
- Keyword Arguments:
- class azure.ai.ml.entities.ResourceRequirementsSettings(requests: ResourceSettings | None = None, limits: ResourceSettings | None = None)[source]¶
Resource requirements settings for a container.
- Parameters:
requests (Optional[ResourceSettings]) – The minimum resource requests for a container.
limits (Optional[ResourceSettings]) – The resource limits for a container.
Example:
Configuring ResourceRequirementsSettings for a Kubernetes deployment.¶

from azure.ai.ml.entities import (
    CodeConfiguration,
    KubernetesOnlineDeployment,
    ResourceRequirementsSettings,
    ResourceSettings,
)

blue_deployment = KubernetesOnlineDeployment(
    name="kubernetes_deployment",
    endpoint_name="online_endpoint_name",
    model=load_model("./sdk/ml/azure-ai-ml/tests/test_configs/model/model_with_stage.yml"),
    environment="azureml:AzureML-Minimal:1",
    code_configuration=CodeConfiguration(
        code="endpoints/online/model-1/onlinescoring", scoring_script="score.py"
    ),
    instance_count=1,
    resources=ResourceRequirementsSettings(
        requests=ResourceSettings(cpu="500m", memory="0.5Gi"),
        limits=ResourceSettings(cpu="1", memory="1Gi"),
    ),
)
- class azure.ai.ml.entities.ResourceSettings(cpu: str | None = None, memory: str | None = None, gpu: str | None = None)[source]¶
Resource settings for a container.
This class uses Kubernetes Resource unit formats. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/.
- Parameters:
Example:
Configuring ResourceSettings for a Kubernetes deployment.¶

from azure.ai.ml.entities import (
    CodeConfiguration,
    KubernetesOnlineDeployment,
    ResourceRequirementsSettings,
    ResourceSettings,
)

blue_deployment = KubernetesOnlineDeployment(
    name="kubernetes_deployment",
    endpoint_name="online_endpoint_name",
    model=load_model("./sdk/ml/azure-ai-ml/tests/test_configs/model/model_with_stage.yml"),
    environment="azureml:AzureML-Minimal:1",
    code_configuration=CodeConfiguration(
        code="endpoints/online/model-1/onlinescoring", scoring_script="score.py"
    ),
    instance_count=1,
    resources=ResourceRequirementsSettings(
        requests=ResourceSettings(cpu="500m", memory="0.5Gi"),
        limits=ResourceSettings(cpu="1", memory="1Gi"),
    ),
)
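ResourceSettings values use Kubernetes quantity strings: cpu="500m" means 0.5 of a CPU (millicores), and memory="0.5Gi" means half a gibibyte. A minimal parser for the suffixes used in the example above (illustrative and not exhaustive; Kubernetes supports more suffixes than shown here):

```python
def parse_cpu(value: str) -> float:
    """'500m' -> 0.5 cores; '1' -> 1.0 cores."""
    if value.endswith("m"):
        return int(value[:-1]) / 1000
    return float(value)

def parse_memory(value: str) -> int:
    """'0.5Gi' -> bytes, for the binary suffixes Ki/Mi/Gi."""
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * factor)
    return int(value)  # a bare number is already in bytes

print(parse_cpu("500m"))      # → 0.5
print(parse_memory("0.5Gi"))  # → 536870912
```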
- class azure.ai.ml.entities.RetrySettings(*, timeout: int | str | None = None, max_retries: int | str | None = None, **kwargs: Any)[source]¶
Parallel RetrySettings.
- Parameters:
timeout (int) – Timeout in seconds for each invocation of the run() method (optional). This value can be set through a PipelineParameter.
max_retries (int) – The maximum number of retries for a failed or timed-out mini batch. The range is [1, int.max]. This value can be set through a PipelineParameter. A mini batch with a dequeue count greater than this will not be processed again and will be deleted directly.
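The max_retries rule above can be restated as: a failed or timed-out mini batch is reprocessed only while its dequeue count has not exceeded the limit. A sketch of that bookkeeping (an illustration of the documented rule, not the parallel runtime's code):

```python
def should_retry(dequeue_count: int, max_retries: int) -> bool:
    """A failed/timed-out mini batch is retried while dequeue_count <= max_retries;
    once the count exceeds the limit it is dropped rather than reprocessed."""
    return dequeue_count <= max_retries

print(should_retry(dequeue_count=2, max_retries=3))  # → True
print(should_retry(dequeue_count=4, max_retries=3))  # → False
```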
- class azure.ai.ml.entities.Route(*, port: str | None = None, path: str | None = None)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Route.
- class azure.ai.ml.entities.Schedule(*, name: str, trigger: CronTrigger | RecurrenceTrigger | None, display_name: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, **kwargs: Any)[source]¶
Schedule object used to create and manage schedules.
This class should not be instantiated directly. Instead, please use the subclasses.
- Keyword Arguments:
name (str) – The name of the schedule.
trigger (Union[CronTrigger, RecurrenceTrigger]) – The schedule trigger configuration.
display_name (Optional[str]) – The display name of the schedule.
description (Optional[str]) – The description of the schedule.
tags (Optional[dict]) – Tag dictionary. Tags can be added, removed, and updated.
properties (Optional[dict[str, str]]) – A dictionary of properties to associate with the schedule.
kwargs (dict) – Additional keyword arguments passed to the Resource constructor.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the schedule content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_enabled: bool¶
Specifies if the schedule is enabled or not.
- Returns:
True if the schedule is enabled, False otherwise.
- Return type:
- azure.ai.ml.entities.ScheduleState¶
alias of ScheduleStatus
- class azure.ai.ml.entities.ScheduleTriggerResult(**kwargs)[source]¶
Schedule trigger result returned by triggering an enabled schedule once.
This class shouldn’t be instantiated directly. Instead, it is used as the return type of a schedule trigger operation.
- class azure.ai.ml.entities.ScriptReference(*, path: str | None = None, command: str | None = None, timeout_minutes: int | None = None)[source]¶
Script reference.
- class azure.ai.ml.entities.SerpConnection(*, api_key: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A connection geared towards a Serp service (an open-source search API service).
- Parameters:
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ManagedIdentityConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
The Boolean describing whether this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
Target url for the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.ServerlessComputeSettings(*, custom_subnet: str | ArmId | None = None, no_public_ip: bool = False)[source]¶
Settings regarding serverless compute(s) in an Azure ML workspace.
- Keyword Arguments:
- Raises:
ValidationError – If the custom_subnet is not formatted as an ARM ID.
- class azure.ai.ml.entities.ServerlessConnection(*, endpoint: str, api_key: str | None = None, metadata: Dict[Any, Any] | None = None, **kwargs)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A connection geared towards a MaaS endpoint (Serverless).
- Parameters:
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
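The dest semantics described above (a new file is created for paths, raising if it exists; open streams are written to directly) can be sketched in plain Python. This is an illustrative model of the documented behavior, not the SDK's implementation:

```python
from io import IOBase
from os import PathLike

def dump_spec(spec_text: str, dest) -> None:
    """Write spec_text to dest, mirroring the documented dump() contract."""
    if isinstance(dest, (str, PathLike)):
        # "x" mode creates a new file and raises FileExistsError if it exists.
        with open(dest, "x", encoding="utf-8") as f:
            f.write(spec_text)
    elif isinstance(dest, IOBase):
        dest.write(spec_text)  # raises if the stream is not writable
    else:
        raise TypeError("dest must be a path or an already-open file stream")
```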
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property api_key: str | None¶
The API key of the connection.
- Returns:
The API key of the connection.
- Return type:
Optional[str]
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ManagedIdentityConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
Get the Boolean describing if this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
bool
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
The target URL of the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.ServerlessEndpoint(*args: Any, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Serverless Endpoint Definition.
Readonly variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to the server.
- ivar name:
The deployment name. Required.
- vartype name:
str
- ivar auth_mode:
Authentication mode of the endpoint.
- vartype auth_mode:
str
- ivar model_id:
The id of the model to deploy. Required.
- vartype model_id:
str
- ivar location:
Location in which to create endpoint.
- vartype location:
str
- ivar provisioning_state:
Provisioning state of the endpoint. Possible values are: “creating”, “deleting”, “succeeded”, “failed”, “updating”, and “canceled”.
- vartype provisioning_state:
str
- ivar tags:
Tags for the endpoint.
- vartype tags:
dict[str, str]
- ivar properties:
Properties of the endpoint.
- vartype properties:
dict[str, str]
- ivar description:
Description of the endpoint.
- vartype description:
str
- ivar scoring_uri:
Scoring uri of the endpoint.
- vartype scoring_uri:
str
- ivar id:
ARM resource id of the endpoint.
- vartype id:
str
- ivar headers:
Headers required to hit the endpoint.
- vartype headers:
dict[str, str]
- ivar system_data:
System data of the endpoint.
- vartype system_data:
~azure.ai.ml.entities.SystemData
- as_dict(*, exclude_readonly: bool = False) Dict[str, Any][source]¶
Return a dict that can be serialized to JSON using json.dump.
- clear() None. Remove all items from D.¶
- copy() Model¶
- get(k[, d]) D[k] if k in D, else d. d defaults to None.¶
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If key is not found, d is returned if given, otherwise KeyError is raised.
- popitem() (k, v), remove and return some (key, value) pair¶
as a 2-tuple; but raise KeyError if D is empty.
- setdefault(k[, d]) D.get(k,d), also set D[k]=d if k not in D¶
- update([E, ]**F) None. Update D from mapping/iterable E and F.¶
If E is present and has a .keys() method, does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.
- values() an object providing a view on D's values¶
- auth_mode: str | None¶
Authentication mode of the endpoint. Possible values are “key” and “aad”. Defaults to “key” if not given.
- Type:
Optional[str]
- provisioning_state: str | None¶
Provisioning state of the endpoint. Possible values are “creating”, “deleting”, “succeeded”, “failed”, “updating”, and “canceled”.
- Type:
Optional[str]
- system_data: SystemData | None¶
System data of the endpoint.
- class azure.ai.ml.entities.ServerlessSparkCompute(*, runtime_version: str, instance_type: str)[source]¶
Serverless Spark compute.
- class azure.ai.ml.entities.ServiceInstance(*, type: str | None = None, port: int | None = None, status: str | None = None, error: str | None = None, endpoint: str | None = None, properties: Dict[str, str] | None = None, **kwargs: Any)[source]¶
Service Instance Result.
- Keyword Arguments:
type (Optional[str]) – The type of service.
port (Optional[int]) – The port used by the service.
status (Optional[str]) – The status of the service.
error (Optional[str]) – The error message.
endpoint (Optional[str]) – The service endpoint.
properties (Optional[dict[str, str]]) – The service instance’s properties.
- class azure.ai.ml.entities.ServicePrincipalConfiguration(*, client_secret: str, **kwargs: str)[source]¶
Service Principal credentials configuration.
- class azure.ai.ml.entities.ServiceTagDestination(*, name: str, protocol: str, port_ranges: str, service_tag: str | None = None, address_prefixes: List[str] | None = None, **kwargs: Any)[source]¶
Class representing a Service Tag outbound rule.
- Parameters:
name (str) – Name of the outbound rule.
service_tag (str) – Service Tag of an Azure service, maps to predefined IP addresses for its service endpoints.
protocol (str) – Allowed transport protocol, can be “TCP”, “UDP”, “ICMP” or “*” for all supported protocols.
port_ranges (str) – A comma-separated list of single ports and/or port ranges, such as “80,1024-65535”. Traffic is allowed on these port ranges.
address_prefixes (List[str]) – Optional list of CIDR prefixes or IP ranges. When provided, the service_tag argument is ignored and address_prefixes is used instead.
- Variables:
type (str) – Type of the outbound rule. Set to “ServiceTag” for this class.
Creating a ServiceTagDestination outbound rule object.¶
from azure.ai.ml.entities import ServiceTagDestination

# Example service tag rule
datafactoryrule = ServiceTagDestination(
    name="datafactory",
    service_tag="DataFactory",
    protocol="TCP",
    port_ranges="80, 8080-8089",
)

# Example service tag rule using custom address prefixes
customAddressPrefixesRule = ServiceTagDestination(
    name="customAddressPrefixesRule",
    address_prefixes=["168.63.129.16", "10.0.0.0/24"],
    protocol="TCP",
    port_ranges="80, 443, 8080-8089",
)
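The port_ranges strings used above, such as "80, 8080-8089", can be expanded into explicit (start, end) pairs. A minimal illustrative parser, not part of the SDK:

```python
def parse_port_ranges(port_ranges: str) -> list[tuple[int, int]]:
    """Expand a comma-separated list of ports/ranges into (start, end) pairs."""
    pairs = []
    for part in port_ranges.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-", 1)
            pairs.append((int(start), int(end)))
        else:
            # A single port is a range of length one.
            pairs.append((int(part), int(part)))
    return pairs
```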
- class azure.ai.ml.entities.SetupScripts(*, startup_script: ScriptReference | None = None, creation_script: ScriptReference | None = None)[source]¶
Customized setup scripts.
- Keyword Arguments:
startup_script (Optional[ScriptReference]) – The script to be run every time the compute is started.
creation_script (Optional[ScriptReference]) – The script to be run only when the compute is created.
- class azure.ai.ml.entities.Spark(*, component: str | SparkComponent, identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, driver_cores: int | str | None = None, driver_memory: str | None = None, executor_cores: int | str | None = None, executor_memory: str | None = None, executor_instances: int | str | None = None, dynamic_allocation_enabled: bool | str | None = None, dynamic_allocation_min_executors: int | str | None = None, dynamic_allocation_max_executors: int | str | None = None, conf: Dict[str, str] | None = None, inputs: Dict[str, NodeOutput | Input | str | bool | int | float | Enum] | None = None, outputs: Dict[str, str | Output] | None = None, compute: str | None = None, resources: Dict | SparkResourceConfiguration | None = None, entry: Dict[str, str] | SparkJobEntry | None = None, py_files: List[str] | None = None, jars: List[str] | None = None, files: List[str] | None = None, archives: List[str] | None = None, args: str | None = None, **kwargs: Any)[source]¶
Base class for Spark nodes, used for Spark component version consumption.
You should not instantiate this class directly. Instead, you should create it from the builder function: spark.
- Parameters:
component (Union[str, SparkComponent]) – The ID or instance of the Spark component or job to be run during the step.
identity (Union[Dict[str, str], ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]) – The identity that the Spark job will use while running on compute.
- Parameters:
driver_cores (int) – The number of cores to use for the driver process, only in cluster mode.
driver_memory (str) – The amount of memory to use for the driver process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_cores (int) – The number of cores to use on each executor.
executor_memory (str) – The amount of memory to use per executor process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_instances (int) – The initial number of executors.
dynamic_allocation_enabled (bool) – Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload.
dynamic_allocation_min_executors (int) – The lower bound for the number of executors if dynamic allocation is enabled.
dynamic_allocation_max_executors (int) – The upper bound for the number of executors if dynamic allocation is enabled.
conf (Dict[str, str]) – A dictionary of pre-defined Spark configuration keys and values.
inputs (Dict[str, Union[str, bool, int, float, Enum, NodeOutput, Input]]) – A mapping of input names to input data sources used in the job.
- Parameters:
outputs (Dict[str, Union[str, Output]]) – A mapping of output names to output data sources used in the job.
args (str) – The arguments for the job.
compute (str) – The compute resource the job runs on.
resources (Union[Dict, SparkResourceConfiguration]) – The compute resource configuration for the job.
py_files (List[str]) – The list of .zip, .egg or .py files to place on the PYTHONPATH for Python apps.
jars (List[str]) – The list of .JAR files to include on the driver and executor classpaths.
files (List[str]) – The list of files to be placed in the working directory of each executor.
archives (List[str]) – The list of archives to be extracted into the working directory of each executor.
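The driver_memory and executor_memory format described above ("512m", "2g") maps to byte counts as follows. An illustrative helper, not SDK code:

```python
# Size unit suffixes from the documented format: "k", "m", "g", "t".
_UNIT = {"k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}

def memory_to_bytes(value: str) -> int:
    """Convert a Spark memory string such as "512m" or "2g" to bytes."""
    suffix = value[-1].lower()
    if suffix not in _UNIT:
        raise ValueError(f"expected a size suffix k/m/g/t in {value!r}")
    return int(value[:-1]) * _UNIT[suffix]
```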
- clear() None. Remove all items from D.¶
- copy() a shallow copy of D¶
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- fromkeys(iterable, value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
- values() an object providing a view on D's values¶
- CODE_ID_RE_PATTERN = re.compile('\\/subscriptions\\/(?P<subscription>[\\w,-]+)\\/resourceGroups\\/(?P<resource_group>[\\w,-]+)\\/providers\\/Microsoft\\.MachineLearningServices\\/workspaces\\/(?P<workspace>[\\w,-]+)\\/codes\\/(?P<co)¶
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property code: PathLike | str | None¶
The local or remote path pointing at source code.
- Return type:
Union[str, PathLike]
- property component: str | SparkComponent¶
The ID or instance of the Spark component or job to be run during the step.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None¶
The identity that the Spark job will use while running on compute.
- Return type:
Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]
- property resources: Dict | SparkResourceConfiguration | None¶
The compute resource configuration for the job.
- Return type:
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- class azure.ai.ml.entities.SparkComponent(*, code: PathLike | str | None = '.', entry: Dict[str, str] | SparkJobEntry | None = None, py_files: List[str] | None = None, jars: List[str] | None = None, files: List[str] | None = None, archives: List[str] | None = None, driver_cores: int | str | None = None, driver_memory: str | None = None, executor_cores: int | str | None = None, executor_memory: str | None = None, executor_instances: int | str | None = None, dynamic_allocation_enabled: bool | str | None = None, dynamic_allocation_min_executors: int | str | None = None, dynamic_allocation_max_executors: int | str | None = None, conf: Dict[str, str] | None = None, environment: Environment | str | None = None, inputs: Dict | None = None, outputs: Dict | None = None, args: str | None = None, additional_includes: List | None = None, **kwargs: Any)[source]¶
Spark component version, used to define a Spark Component or Job.
- Keyword Arguments:
code – The source code to run the job. Can be a local path or “http:”, “https:”, or “azureml:” url pointing to a remote location. Defaults to “.”, indicating the current directory.
entry (Optional[Union[dict[str, str], SparkJobEntry]]) – The file or class entry point.
py_files (Optional[List[str]]) – The list of .zip, .egg or .py files to place on the PYTHONPATH for Python apps. Defaults to None.
jars (Optional[List[str]]) – The list of .JAR files to include on the driver and executor classpaths. Defaults to None.
files (Optional[List[str]]) – The list of files to be placed in the working directory of each executor. Defaults to None.
archives (Optional[List[str]]) – The list of archives to be extracted into the working directory of each executor. Defaults to None.
driver_cores (Optional[int]) – The number of cores to use for the driver process, only in cluster mode.
driver_memory (Optional[str]) – The amount of memory to use for the driver process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_cores (Optional[int]) – The number of cores to use on each executor.
executor_memory (Optional[str]) – The amount of memory to use per executor process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_instances (Optional[int]) – The initial number of executors.
dynamic_allocation_enabled (Optional[bool]) – Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. Defaults to False.
dynamic_allocation_min_executors (Optional[int]) – The lower bound for the number of executors if dynamic allocation is enabled.
dynamic_allocation_max_executors (Optional[int]) – The upper bound for the number of executors if dynamic allocation is enabled.
conf (Optional[dict[str, str]]) – A dictionary of pre-defined Spark configuration keys and values. Defaults to None.
environment (Optional[Union[str, Environment]]) – The Azure ML environment to run the job in.
inputs (Optional[dict[str, Union[ NodeOutput, Input, str, bool, int, float, Enum, ]]]) – A mapping of input names to input data sources used in the job. Defaults to None.
outputs (Optional[dict[str, Union[str, Output]]]) – A mapping of output names to output data sources used in the job. Defaults to None.
args (Optional[str]) – The arguments for the job. Defaults to None.
additional_includes (Optional[List[str]]) – A list of shared additional files to be included in the component. Defaults to None.
Example:
Creating SparkComponent.¶
from azure.ai.ml.entities import SparkComponent

component = SparkComponent(
    name="add_greeting_column_spark_component",
    display_name="Aml Spark add greeting column test module",
    description="Aml Spark add greeting column test module",
    version="1",
    inputs={
        "file_input": {"type": "uri_file", "mode": "direct"},
    },
    driver_cores=2,
    driver_memory="1g",
    executor_cores=1,
    executor_memory="1g",
    executor_instances=1,
    code="./src",
    entry={"file": "add_greeting_column.py"},
    py_files=["utils.zip"],
    files=["my_files.txt"],
    args="--file_input ${{inputs.file_input}}",
    base_path="./sdk/ml/azure-ai-ml/tests/test_configs/dsl_pipeline/spark_job_in_pipeline",
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the component content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this component’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- CODE_ID_RE_PATTERN = re.compile('\\/subscriptions\\/(?P<subscription>[\\w,-]+)\\/resourceGroups\\/(?P<resource_group>[\\w,-]+)\\/providers\\/Microsoft\\.MachineLearningServices\\/workspaces\\/(?P<workspace>[\\w,-]+)\\/codes\\/(?P<co)¶
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property display_name: str | None¶
Display name of the component.
- Returns:
Display name of the component.
- Return type:
- property environment: Environment | str | None¶
The Azure ML environment to run the Spark component or job in.
- Returns:
The Azure ML environment to run the Spark component or job in.
- Return type:
Optional[Union[str, Environment]]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_deterministic: bool | None¶
Whether the component is deterministic.
- Returns:
Whether the component is deterministic
- Return type:
- class azure.ai.ml.entities.SparkJob(*, driver_cores: int | str | None = None, driver_memory: str | None = None, executor_cores: int | str | None = None, executor_memory: str | None = None, executor_instances: int | str | None = None, dynamic_allocation_enabled: bool | str | None = None, dynamic_allocation_min_executors: int | str | None = None, dynamic_allocation_max_executors: int | str | None = None, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict[str, Output] | None = None, compute: str | None = None, identity: Dict[str, str] | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, resources: Dict | SparkResourceConfiguration | None = None, **kwargs: Any)[source]¶
A standalone Spark job.
- Keyword Arguments:
driver_cores (Optional[int]) – The number of cores to use for the driver process, only in cluster mode.
driver_memory (Optional[str]) – The amount of memory to use for the driver process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_cores (Optional[int]) – The number of cores to use on each executor.
executor_memory (Optional[str]) – The amount of memory to use per executor process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_instances (Optional[int]) – The initial number of executors.
dynamic_allocation_enabled (Optional[bool]) – Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload.
dynamic_allocation_min_executors (Optional[int]) – The lower bound for the number of executors if dynamic allocation is enabled.
dynamic_allocation_max_executors (Optional[int]) – The upper bound for the number of executors if dynamic allocation is enabled.
inputs (Optional[dict[str, Input]]) – The mapping of input data bindings used in the job.
outputs (Optional[dict[str, Output]]) – The mapping of output data bindings used in the job.
compute (Optional[str]) – The compute resource the job runs on.
identity (Optional[Union[dict[str, str], ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – The identity that the Spark job will use while running on compute.
Example:
Configuring a SparkJob.¶
from azure.ai.ml import Input, Output
from azure.ai.ml.entities import SparkJob

spark_job = SparkJob(
    code="./sdk/ml/azure-ai-ml/tests/test_configs/dsl_pipeline/spark_job_in_pipeline/basic_src",
    entry={"file": "sampleword.py"},
    conf={
        "spark.driver.cores": 2,
        "spark.driver.memory": "1g",
        "spark.executor.cores": 1,
        "spark.executor.memory": "1g",
        "spark.executor.instances": 1,
    },
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    inputs={
        "input1": Input(
            type="uri_file",
            path="azureml://datastores/workspaceblobstore/paths/python/data.csv",
            mode="direct",
        )
    },
    compute="synapsecompute",
    outputs={"component_out_path": Output(type="uri_folder")},
    args="--input1 ${{inputs.input1}} --output2 ${{outputs.output1}} --my_sample_rate ${{inputs.sample_rate}}",
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- filter_conf_fields() Dict[str, str][source]¶
Filters out the fields of the conf attribute that are not among the Spark configuration fields listed in ~azure.ai.ml._schema.job.parameterized_spark.CONF_KEY_MAP and returns them in their own dictionary.
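A sketch of that filtering, assuming a hypothetical subset of CONF_KEY_MAP (the real mapping lives in azure.ai.ml._schema.job.parameterized_spark and this helper is not SDK code):

```python
# Hypothetical subset of CONF_KEY_MAP: Spark conf keys that map to
# first-class job fields rather than staying in the conf dict.
CONF_KEY_MAP = {
    "spark.driver.cores": "driver_cores",
    "spark.driver.memory": "driver_memory",
    "spark.executor.cores": "executor_cores",
    "spark.executor.memory": "executor_memory",
    "spark.executor.instances": "executor_instances",
}

def filter_conf_fields(conf: dict) -> dict:
    """Return the conf entries that are not among the mapped fields."""
    return {k: v for k, v in conf.items() if k not in CONF_KEY_MAP}
```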
- CODE_ID_RE_PATTERN = re.compile('\\/subscriptions\\/(?P<subscription>[\\w,-]+)\\/resourceGroups\\/(?P<resource_group>[\\w,-]+)\\/providers\\/Microsoft\\.MachineLearningServices\\/workspaces\\/(?P<workspace>[\\w,-]+)\\/codes\\/(?P<co)¶
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property environment: Environment | str | None¶
The Azure ML environment to run the Spark component or job in.
- Returns:
The Azure ML environment to run the Spark component or job in.
- Return type:
Optional[Union[str, Environment]]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None¶
The identity that the Spark job will use while running on compute.
- Returns:
The identity that the Spark job will use while running on compute.
- Return type:
Optional[Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]
- property resources: Dict | SparkResourceConfiguration | None¶
The compute resource configuration for the job.
- Returns:
The compute resource configuration for the job.
- Return type:
Optional[SparkResourceConfiguration]
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages: Docker image build or conda environment setup.
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- class azure.ai.ml.entities.SparkJobEntry(*, entry: str, type: str = 'SparkJobPythonEntry')[source]¶
Entry for Spark job.
- Keyword Arguments:
entry (str) – The file or class entry point.
type (SparkJobEntryType) – The entry type. Accepted values are SparkJobEntryType.SPARK_JOB_FILE_ENTRY or SparkJobEntryType.SPARK_JOB_CLASS_ENTRY. Defaults to SparkJobEntryType.SPARK_JOB_FILE_ENTRY.
Example:
Creating SparkComponent.¶
from azure.ai.ml.entities import SparkComponent

component = SparkComponent(
    name="add_greeting_column_spark_component",
    display_name="Aml Spark add greeting column test module",
    description="Aml Spark add greeting column test module",
    version="1",
    inputs={
        "file_input": {"type": "uri_file", "mode": "direct"},
    },
    driver_cores=2,
    driver_memory="1g",
    executor_cores=1,
    executor_memory="1g",
    executor_instances=1,
    code="./src",
    entry={"file": "add_greeting_column.py"},
    py_files=["utils.zip"],
    files=["my_files.txt"],
    args="--file_input ${{inputs.file_input}}",
    base_path="./sdk/ml/azure-ai-ml/tests/test_configs/dsl_pipeline/spark_job_in_pipeline",
)
- class azure.ai.ml.entities.SparkJobEntryType[source]¶
Type of Spark job entry. Possibilities are Python file entry or Scala class entry.
- SPARK_JOB_CLASS_ENTRY = 'SparkJobScalaEntry'¶
- SPARK_JOB_FILE_ENTRY = 'SparkJobPythonEntry'¶
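The mapping between the entry dict shorthand and these constants can be sketched as follows. The "class_name" key for Scala entries is an assumption for illustration; only the {"file": ...} form appears in the examples above:

```python
def entry_type(entry: dict) -> str:
    """Map an entry dict to its SparkJobEntryType constant value (illustrative)."""
    if "file" in entry:
        return "SparkJobPythonEntry"   # SparkJobEntryType.SPARK_JOB_FILE_ENTRY
    if "class_name" in entry:          # assumed key for Scala class entries
        return "SparkJobScalaEntry"    # SparkJobEntryType.SPARK_JOB_CLASS_ENTRY
    raise ValueError("entry must contain 'file' or 'class_name'")
```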
- class azure.ai.ml.entities.SparkResourceConfiguration(*, instance_type: str | None = None, runtime_version: str | None = None)[source]¶
Compute resource configuration for Spark component or job.
- Keyword Arguments:
Example:
Configuring a SparkJob with SparkResourceConfiguration.¶
from azure.ai.ml import Input, Output
from azure.ai.ml.entities import SparkJob, SparkResourceConfiguration
from azure.ai.ml.entities._credentials import AmlTokenConfiguration

spark_job = SparkJob(
    code="./tests/test_configs/spark_job/basic_spark_job/src",
    entry={"file": "./main.py"},
    jars=["simple-1.1.1.jar"],
    identity=AmlTokenConfiguration(),
    driver_cores=1,
    driver_memory="2g",
    executor_cores=2,
    executor_memory="2g",
    executor_instances=2,
    dynamic_allocation_enabled=True,
    dynamic_allocation_min_executors=1,
    dynamic_allocation_max_executors=3,
    name="builder-spark-job",
    experiment_name="builder-spark-experiment-name",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    inputs={
        "input1": Input(
            type="uri_file",
            path="azureml://datastores/workspaceblobstore/paths/python/data.csv",
            mode="direct",
        )
    },
    outputs={
        "output1": Output(
            type="uri_file",
            path="azureml://datastores/workspaceblobstore/spark_titanic_output/titanic.parquet",
            mode="direct",
        )
    },
    resources=SparkResourceConfiguration(instance_type="Standard_E8S_V3", runtime_version="3.3.0"),
)
- instance_type_list = ['standard_e4s_v3', 'standard_e8s_v3', 'standard_e16s_v3', 'standard_e32s_v3', 'standard_e64s_v3']¶
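A case-insensitive membership check against instance_type_list can be sketched as follows (illustrative only; the SDK performs its own validation):

```python
# The supported serverless Spark instance types, as listed in
# instance_type_list above (stored lowercase).
INSTANCE_TYPES = {
    "standard_e4s_v3", "standard_e8s_v3", "standard_e16s_v3",
    "standard_e32s_v3", "standard_e64s_v3",
}

def is_supported_instance_type(instance_type: str) -> bool:
    """Return True if instance_type matches a supported type, ignoring case."""
    return instance_type.lower() in INSTANCE_TYPES
```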
- class azure.ai.ml.entities.SshJobService(*, endpoint: str | None = None, nodes: Literal['all'] | None = None, status: str | None = None, port: int | None = None, ssh_public_keys: str | None = None, properties: Dict[str, str] | None = None, **kwargs: Any)[source]¶
SSH job service configuration.
- Variables:
type (str) – Specifies the type of job service. Set automatically to “ssh” for this class.
- Keyword Arguments:
endpoint (Optional[str]) – The endpoint URL.
port (Optional[int]) – The port for the endpoint.
nodes (Optional[Literal["all"]]) – Indicates whether the service should run on all nodes.
properties (Optional[dict[str, str]]) – Additional properties to set on the endpoint.
status (Optional[str]) – The status of the endpoint.
ssh_public_keys (Optional[str]) – The SSH Public Key to access the job container.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Configuring an SshJobService on a command job.¶
from azure.ai.ml import command
from azure.ai.ml.entities import JupyterLabJobService, SshJobService, TensorBoardJobService, VsCodeJobService

node = command(
    name="interactive-command-job",
    description="description",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command="ls",
    compute="testCompute",
    services={
        "my_ssh": SshJobService(),
        "my_tensorboard": TensorBoardJobService(log_dir="~/blog"),
        "my_jupyter_lab": JupyterLabJobService(),
        "my_vscode": VsCodeJobService(),
    },
)
- class azure.ai.ml.entities.StaticInputData(*, data_context: MonitorDatasetContext | None = None, target_columns: Dict | None = None, job_type: str | None = None, uri: str | None = None, pre_processing_component_id: str | None = None, window_start: str | None = None, window_end: str | None = None)[source]¶
- Variables:
type – Specifies the type of monitoring input data. Set automatically to “Static” for this class.
type – MonitorInputDataType
- class azure.ai.ml.entities.Sweep(*, trial: CommandComponent | str | None = None, compute: str | None = None, limits: SweepJobLimits | None = None, sampling_algorithm: str | SamplingAlgorithm | None = None, objective: Objective | None = None, early_termination: BanditPolicy | MedianStoppingPolicy | TruncationSelectionPolicy | EarlyTerminationPolicy | str | None = None, search_space: Dict[str, Choice | LogNormal | LogUniform | Normal | QLogNormal | QLogUniform | QNormal | QUniform | Randint | Uniform] | None = None, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict[str, str | Output] | None = None, identity: Dict | ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, queue_settings: QueueSettings | None = None, resources: dict | JobResourceConfiguration | None = None, **kwargs: Any)[source]¶
Base class for sweep node.
This class should not be instantiated directly. Instead, it should be created via the builder function: sweep.
- Parameters:
trial (Union[CommandComponent, str]) – The ID or instance of the command component or job to be run for the step.
compute (str) – The compute definition containing the compute information for the step.
limits (SweepJobLimits) – The limits for the sweep node.
sampling_algorithm (str) – The sampling algorithm to use to sample inside the search space. Accepted values are: “random”, “grid”, or “bayesian”.
objective (Objective) – The objective used to determine the target run with the local optimal hyperparameter in the search space.
early_termination_policy (Union[BanditPolicy, MedianStoppingPolicy, TruncationSelectionPolicy, str]) – The early termination policy of the sweep node.
search_space (Dict[str, Union[Choice, LogNormal, LogUniform, Normal, QLogNormal, QLogUniform, QNormal, QUniform, Randint, Uniform]]) – The hyperparameter search space to run trials in.
inputs (Dict[str, Union[Input, str, bool, int, float]]) – Mapping of input data bindings used in the job.
outputs (Dict[str, Union[str, Output]]) – Mapping of output data bindings used in the job.
identity (Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]) – The identity that the job will use while running on compute.
queue_settings (QueueSettings) – The queue settings for the job.
resources (Union[dict, JobResourceConfiguration]) – Compute resource configuration for the job.
- Variables:
limits (SweepJobLimits) – Limits for the sweep job.
sampling_algorithm (SamplingAlgorithm) – Sampling algorithm for the sweep job.
objective (Objective) – Objective for the sweep job.
early_termination (EarlyTerminationPolicy) – Early termination policy for the sweep job.
search_space – Search space for the sweep job.
queue_settings (QueueSettings) – Queue settings for the sweep job.
resources – Compute resource configuration for the job.
- clear() None. Remove all items from D.¶
- copy() a shallow copy of D¶
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
- get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
- items() a set-like object providing a view on D's items¶
- keys() a set-like object providing a view on D's keys¶
- pop(k[, d]) v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise, raise a KeyError.
- popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
- set_limits(*, max_concurrent_trials: int | None = None, max_total_trials: int | None = None, timeout: int | None = None, trial_timeout: int | None = None) None¶
Set limits for Sweep node. Leave parameters as None if you don’t want to update corresponding values.
- set_objective(*, goal: str | None = None, primary_metric: str | None = None) None¶
Set the objective for the sweep job. Leave parameters as None if you don’t want to update corresponding values.
- set_resources(*, instance_type: str | List[str] | None = None, instance_count: int | None = None, locations: List[str] | None = None, properties: Dict | None = None, docker_args: str | None = None, shm_size: str | None = None) None¶
Set resources for Sweep.
- Keyword Arguments:
instance_type (Optional[Union[str, List[str]]]) – The instance type to use for the job.
instance_count (Optional[int]) – The number of instances to use for the job.
locations (Optional[List[str]]) – The locations to use for the job.
properties (Optional[Dict]) – The properties for the job.
docker_args (Optional[str]) – The docker arguments for the job.
shm_size (Optional[str]) – The shared memory size for the job.
- setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
- update([E, ]**F) None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
- values() an object providing a view on D's values¶
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property early_termination: str | EarlyTerminationPolicy | None¶
The early termination policy for the sweep job.
- Return type:
Union[str, BanditPolicy, MedianStoppingPolicy, TruncationSelectionPolicy]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property limits: SweepJobLimits | None¶
Limits for sweep job.
- Returns:
Limits for sweep job.
- Return type:
- property resources: dict | JobResourceConfiguration | None¶
Resources for sweep job.
- Returns:
Resources for sweep job.
- Return type:
- property sampling_algorithm: str | SamplingAlgorithm | None¶
Sampling algorithm for sweep job.
- Returns:
Sampling algorithm for sweep job.
- Return type:
- property search_space: Dict[str, Choice | LogNormal | LogUniform | Normal | QLogNormal | QLogUniform | QNormal | QUniform | Randint | Uniform] | None¶
Dictionary of the hyperparameter search space.
Each key is the name of a hyperparameter and its value is the parameter expression.
- Return type:
Dict[str, Union[Choice, LogNormal, LogUniform, Normal, QLogNormal, QLogUniform, QNormal, QUniform, Randint, Uniform]]
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
- Preparing - The run environment is being prepared and is in one of two stages:
Docker image build
conda environment setup
- Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state
while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
- Completed - The run has completed successfully. This includes both the user code execution and run
post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- property studio_url: str | None¶
Azure ML studio endpoint.
- Returns:
The URL to the job details page.
- Return type:
Optional[str]
- property trial: CommandComponent¶
The ID or instance of the command component or job to be run for the step.
- Return type:
- class azure.ai.ml.entities.SynapseSparkCompute(*, name: str, description: str | None = None, tags: Dict[str, str] | None = None, node_count: int | None = None, node_family: str | None = None, node_size: str | None = None, spark_version: str | None = None, identity: IdentityConfiguration | None = None, scale_settings: AutoScaleSettings | None = None, auto_pause_settings: AutoPauseSettings | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
SynapseSpark Compute resource.
- Keyword Arguments:
name (str) – The name of the compute.
description (Optional[str]) – The description of the resource. Defaults to None.
tags (Optional[dict[str, str]]) – The set of resource tags defined as key/value pairs. Defaults to None.
node_count (Optional[int]) – The number of nodes in the compute.
node_family (Optional[str]) – The node family of the compute.
node_size (Optional[str]) – The size of the node.
spark_version (Optional[str]) – The version of Spark to use.
identity (Optional[IdentityConfiguration]) – The configuration of identities that are associated with the compute cluster.
scale_settings (Optional[AutoScaleSettings]) – The scale settings for the compute.
auto_pause_settings (Optional[AutoPauseSettings]) – The auto pause settings for the compute.
kwargs (Optional[dict]) – Additional keyword arguments passed to the parent class.
Example:
Creating Synapse Spark compute.¶
from azure.ai.ml.entities import (
    AutoPauseSettings,
    AutoScaleSettings,
    IdentityConfiguration,
    ManagedIdentityConfiguration,
    SynapseSparkCompute,
)

synapse_compute = SynapseSparkCompute(
    name="synapse_name",
    resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.Synapse/workspaces/workspace/bigDataPools/pool",
    identity=IdentityConfiguration(
        type="UserAssigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(
                resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity"
            )
        ],
    ),
    scale_settings=AutoScaleSettings(min_node_count=1, max_node_count=3, enabled=True),
    auto_pause_settings=AutoPauseSettings(delay_in_minutes=10, enabled=True),
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the compute content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this compute’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property created_on: str | None¶
The compute resource creation timestamp.
- Returns:
The compute resource creation timestamp.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_errors: str | None¶
The compute resource provisioning errors.
- Returns:
The compute resource provisioning errors.
- Return type:
Optional[str]
- class azure.ai.ml.entities.SystemCreatedAcrAccount(*, acr_account_sku: str, arm_resource_id: str | None = None)[source]¶
Azure ML ACR account.
- Parameters:
acr_account_sku (str) – The SKU of the ACR account. Currently “Premium” is the only valid option for registries.
arm_resource_id (Optional[str]) – The resource ID of the ACR account. Defaults to None.
- class azure.ai.ml.entities.SystemCreatedStorageAccount(*, storage_account_hns: bool, storage_account_type: StorageAccountType | None, arm_resource_id: str | None = None, replicated_ids: List[str] | None = None, replication_count: int = 1)[source]¶
- Parameters:
arm_resource_id (str) – Resource ID of the storage account.
storage_account_hns (bool) – Whether or not this storage account has hierarchical namespaces enabled.
storage_account_type (StorageAccountType) – Allowed values: “Standard_LRS”, “Standard_GRS”, “Standard_RAGRS”, “Standard_ZRS”, “Standard_GZRS”, “Standard_RAGZRS”, “Premium_LRS”, “Premium_ZRS”.
replication_count (int) – The number of replicas of this storage account that should be created. Defaults to 1. Values less than 1 are invalid.
replicated_ids (List[str]) – If this storage was replicated, this is a list of all storage IDs with these settings for this registry. Defaults to None for un-replicated storage accounts.
- class azure.ai.ml.entities.SystemData(**kwargs: Any)[source]¶
Metadata related to the creation and most recent modification of a resource.
- Variables:
created_by (str) – The identity that created the resource.
created_by_type (str or CreatedByType) – The type of identity that created the resource. Possible values include: “User”, “Application”, “ManagedIdentity”, “Key”.
created_at (datetime) – The timestamp of resource creation (UTC).
last_modified_by (str) – The identity that last modified the resource.
last_modified_by_type (str or CreatedByType) – The type of identity that last modified the resource. Possible values include: “User”, “Application”, “ManagedIdentity”, “Key”.
last_modified_at (datetime) – The timestamp of resource last modification (UTC).
- Keyword Arguments:
created_by (str) – The identity that created the resource.
created_by_type (Union[str, CreatedByType]) – The type of identity that created the resource. Accepted values are “User”, “Application”, “ManagedIdentity”, “Key”.
created_at (datetime) – The timestamp of resource creation (UTC).
last_modified_by (str) – The identity that last modified the resource.
last_modified_by_type (Union[str, CreatedByType]) – The type of identity that last modified the resource. Accepted values are “User”, “Application”, “ManagedIdentity”, “Key”.
last_modified_at (datetime) – The timestamp of resource last modification in UTC.
- class azure.ai.ml.entities.TargetUtilizationScaleSettings(*, min_instances: int | None = None, max_instances: int | None = None, polling_interval: int | None = None, target_utilization_percentage: int | None = None, **kwargs: Any)[source]¶
Auto scale settings.
- Parameters:
- Variables:
type (str) – Target utilization scale settings type. Set automatically to “target_utilization” for this class.
- class azure.ai.ml.entities.TensorBoardJobService(*, endpoint: str | None = None, nodes: Literal['all'] | None = None, status: str | None = None, port: int | None = None, log_dir: str | None = None, properties: Dict[str, str] | None = None, **kwargs: Any)[source]¶
TensorBoard job service configuration.
- Variables:
type (str) – Specifies the type of job service. Set automatically to “tensor_board” for this class.
- Keyword Arguments:
endpoint (Optional[str]) – The endpoint URL.
port (Optional[int]) – The port for the endpoint.
nodes (Optional[Literal["all"]]) – Indicates whether the service should run on all nodes.
properties (Optional[dict[str, str]]) – Additional properties to set on the endpoint.
status (Optional[str]) – The status of the endpoint.
log_dir (Optional[str]) – The directory path for the log file.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Configuring a TensorBoardJobService on a command job.¶
from azure.ai.ml import command
from azure.ai.ml.entities import JupyterLabJobService, SshJobService, TensorBoardJobService, VsCodeJobService

node = command(
    name="interactive-command-job",
    description="description",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command="ls",
    compute="testCompute",
    services={
        "my_ssh": SshJobService(),
        "my_tensorboard": TensorBoardJobService(log_dir="~/blog"),
        "my_jupyter_lab": JupyterLabJobService(),
        "my_vscode": VsCodeJobService(),
    },
)
- class azure.ai.ml.entities.TrailingInputData(*, data_context: MonitorDatasetContext | None = None, target_columns: Dict | None = None, job_type: str | None = None, uri: str | None = None, window_size: str | None = None, window_offset: str | None = None, pre_processing_component_id: str | None = None)[source]¶
- Variables:
type – Specifies the type of monitoring input data. Set automatically to “Trailing” for this class.
type – MonitorInputDataType
- class azure.ai.ml.entities.TritonInferencingServer(*, inference_configuration: CodeConfiguration | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Azure ML triton inferencing configurations.
- Parameters:
inference_configuration (azure.ai.ml.entities.CodeConfiguration) – The inference configuration of the inferencing server.
- Variables:
type – The type of the inferencing server.
- class azure.ai.ml.entities.UnsupportedCompute(**kwargs: Any)[source]¶
Unsupported compute resource.
Only used for displaying compute properties for resources not fully supported in the SDK.
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the compute content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this compute’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property created_on: str | None¶
The compute resource creation timestamp.
- Returns:
The compute resource creation timestamp.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_errors: str | None¶
The compute resource provisioning errors.
- Returns:
The compute resource provisioning errors.
- Return type:
Optional[str]
- class azure.ai.ml.entities.Usage(id: str | None = None, aml_workspace_location: str | None = None, type: str | None = None, unit: str | UsageUnit | None = None, current_value: int | None = None, limit: int | None = None, name: UsageName | None = None)[source]¶
AzureML resource usage.
- Parameters:
id (Optional[str]) – The resource ID.
aml_workspace_location (Optional[str]) – The region of the AzureML workspace specified by the ID.
type (Optional[str]) – The resource type.
unit (Optional[Union[str, UsageUnit]]) – The unit of measurement for usage. Accepted value is “Count”.
current_value (Optional[int]) – The current usage of the resource.
limit (Optional[int]) – The maximum permitted usage for the resource.
name (Optional[UsageName]) – The name of the usage type.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- class azure.ai.ml.entities.UsageName(*, value: str | None = None, localized_value: str | None = None)[source]¶
The usage name.
- class azure.ai.ml.entities.UsageUnit(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
An enum describing the unit of usage measurement.
- capitalize()¶
Return a capitalized version of the string.
More specifically, make the first character have upper case and the rest lower case.
- casefold()¶
Return a version of the string suitable for caseless comparisons.
- center(width, fillchar=' ', /)¶
Return a centered string of length width.
Padding is done using the specified fill character (default is a space).
- count(sub[, start[, end]]) int¶
Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.
- encode(encoding='utf-8', errors='strict')¶
Encode the string using the codec registered for encoding.
- encoding
The encoding in which to encode the string.
- errors
The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.
- endswith(suffix[, start[, end]]) bool¶
Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
- expandtabs(tabsize=8)¶
Return a copy where all tab characters are expanded using spaces.
If tabsize is not given, a tab size of 8 characters is assumed.
- find(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- format(*args, **kwargs) str¶
Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).
- format_map(mapping) str¶
Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).
- index(sub[, start[, end]]) int¶
Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- isalnum()¶
Return True if the string is an alpha-numeric string, False otherwise.
A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.
- isalpha()¶
Return True if the string is an alphabetic string, False otherwise.
A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.
- isascii()¶
Return True if all characters in the string are ASCII, False otherwise.
ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.
- isdecimal()¶
Return True if the string is a decimal string, False otherwise.
A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.
- isdigit()¶
Return True if the string is a digit string, False otherwise.
A string is a digit string if all characters in the string are digits and there is at least one character in the string.
- isidentifier()¶
Return True if the string is a valid Python identifier, False otherwise.
Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.
- islower()¶
Return True if the string is a lowercase string, False otherwise.
A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.
- isnumeric()¶
Return True if the string is a numeric string, False otherwise.
A string is numeric if all characters in the string are numeric and there is at least one character in the string.
- isprintable()¶
Return True if the string is printable, False otherwise.
A string is printable if all of its characters are considered printable in repr() or if it is empty.
- isspace()¶
Return True if the string is a whitespace string, False otherwise.
A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.
- istitle()¶
Return True if the string is a title-cased string, False otherwise.
In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.
- isupper()¶
Return True if the string is an uppercase string, False otherwise.
A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.
- join(iterable, /)¶
Concatenate any number of strings.
The string whose method is called is inserted in between each given string. The result is returned as a new string.
Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'
- ljust(width, fillchar=' ', /)¶
Return a left-justified string of length width.
Padding is done using the specified fill character (default is a space).
- lower()¶
Return a copy of the string converted to lowercase.
- lstrip(chars=None, /)¶
Return a copy of the string with leading whitespace removed.
If chars is given and not None, remove characters in chars instead.
- static maketrans()¶
Return a translation table usable for str.translate().
If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
- partition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
- removeprefix(prefix, /)¶
Return a str with the given prefix string removed if present.
If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
- removesuffix(suffix, /)¶
Return a str with the given suffix string removed if present.
If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
- replace(old, new, count=-1, /)¶
Return a copy with all occurrences of substring old replaced by new.
- count
Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.
If the optional argument count is given, only the first count occurrences are replaced.
- rfind(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Return -1 on failure.
- rindex(sub[, start[, end]]) int¶
Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.
Raises ValueError when the substring is not found.
- rjust(width, fillchar=' ', /)¶
Return a right-justified string of length width.
Padding is done using the specified fill character (default is a space).
- rpartition(sep, /)¶
Partition the string into three parts using the given separator.
This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.
If the separator is not found, returns a 3-tuple containing two empty strings and the original string.
- rsplit(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the end of the string and works to the front.
- rstrip(chars=None, /)¶
Return a copy of the string with trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- split(sep=None, maxsplit=-1)¶
Return a list of the substrings in the string, using sep as the separator string.
- sep
The separator used to split the string.
When set to None (the default value), will split on any whitespace character (including \n, \r, \t, \f and spaces) and will discard empty strings from the result.
- maxsplit
Maximum number of splits. -1 (the default value) means no limit.
Splitting starts at the front of the string and works to the end.
Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
- splitlines(keepends=False)¶
Return a list of the lines in the string, breaking at line boundaries.
Line breaks are not included in the resulting list unless keepends is given and true.
- startswith(prefix[, start[, end]]) bool¶
Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.
- strip(chars=None, /)¶
Return a copy of the string with leading and trailing whitespace removed.
If chars is given and not None, remove characters in chars instead.
- swapcase()¶
Convert uppercase characters to lowercase and lowercase characters to uppercase.
- title()¶
Return a version of the string where each word is titlecased.
More specifically, words start with uppercased characters and all remaining cased characters have lower case.
- translate(table, /)¶
Replace each character in the string using the given translation table.
- table
Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.
The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.
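Translation tables are usually built with str.maketrans; a minimal sketch showing both replacement and deletion (example string is illustrative):

```python
# '-' is replaced with '_', '.' is mapped to None and therefore deleted.
table = str.maketrans({"-": "_", ".": None})
print("azure-ai.ml".translate(table))  # 'azure_aiml'
```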
- upper()¶
Return a copy of the string converted to uppercase.
- zfill(width, /)¶
Pad a numeric string with zeros on the left, to fill a field of the given width.
The string is never truncated.
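A quick illustration, including the sign-aware padding and the no-truncation guarantee:

```python
print("42".zfill(5))      # '00042'
print("-42".zfill(5))     # '-0042'  (zeros are inserted after the sign)
print("123456".zfill(3))  # '123456' (already wider than the field; unchanged)
```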
- COUNT = 'Count'¶
- class azure.ai.ml.entities.UserIdentityConfiguration[source]¶
User identity configuration.
Example:
Configuring a UserIdentityConfiguration for a command().¶

from azure.ai.ml import Input, command
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import UserIdentityConfiguration

job = command(
    code="./sdk/ml/azure-ai-ml/samples/src",
    command="python read_data.py --input_data ${{inputs.input_data}}",
    inputs={"input_data": Input(type=AssetTypes.MLTABLE, path="./sample_data")},
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1",
    compute="cpu-cluster",
    identity=UserIdentityConfiguration(),
)
- class azure.ai.ml.entities.UsernamePasswordConfiguration(*, username: str, password: str)[source]¶
Username and password credentials.
- Parameters:
- class azure.ai.ml.entities.ValidationResult[source]¶
Represents the result of job/asset validation.
This class is used to organize and parse diagnostics from both the client and server side before exposing them. The result is immutable.
- property error_messages: Dict¶
Return all messages of errors in the validation result.
- Returns:
A dictionary of error messages. The key is the yaml path of the error, and the value is the error message.
- Return type:
Example:
"""For example, if repr(self) is: ```python { "errors": [ { "path": "jobs.job_a.inputs.input_str", "message": "input_str is required", "value": None, }, { "path": "jobs.job_a.inputs.input_str", "message": "input_str must be in the format of xxx", "value": None, }, { "path": "settings.on_init", "message": "On_init job name job_b does not exist in jobs.", "value": None, }, ], "warnings": [ { "path": "jobs.job_a.inputs.input_str", "message": "input_str is required", "value": None, } ] } ``` then the error_messages will be: ```python { "jobs.job_a.inputs.input_str": "input_str is required; input_str must be in the format of xxx", "settings.on_init": "On_init job name job_b does not exist in jobs.", } ``` """
- class azure.ai.ml.entities.VirtualMachineCompute(*, name: str, description: str | None = None, resource_id: str, tags: dict | None = None, ssh_settings: VirtualMachineSshSettings | None = None, **kwargs: Any)[source]¶
Virtual Machine Compute resource.
- Parameters:
name (str) – Name of the compute resource.
description (Optional[str]) – Description of the resource. Defaults to None.
resource_id (str) – ARM resource ID of the underlying compute resource.
tags (Optional[dict]) – A set of tags. Contains resource tags defined as key/value pairs.
ssh_settings (Optional[VirtualMachineSshSettings]) – SSH settings. Defaults to None.
Example:
Configuring a VirtualMachineCompute object.¶

from azure.ai.ml.entities import VirtualMachineCompute

vm_compute = VirtualMachineCompute(
    name="vm-compute",
    resource_id="/subscriptions/123456-1234-1234-1234-123456789/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm",
    ssh_settings=ssh_settings,
)
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dump the compute content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this compute’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property created_on: str | None¶
The compute resource creation timestamp.
- Returns:
The compute resource creation timestamp.
- Return type:
Optional[str]
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property provisioning_errors: str | None¶
The compute resource provisioning errors.
- Returns:
The compute resource provisioning errors.
- Return type:
Optional[str]
- class azure.ai.ml.entities.VirtualMachineSshSettings(*, admin_username: str, admin_password: str | None = None, ssh_port: int = 22, ssh_private_key_file: str | None = None)[source]¶
SSH settings for a virtual machine.
- Parameters:
admin_username (str) – The admin user name.
admin_password (Optional[str]) – The admin user password. Defaults to None. Required if ssh_private_key_file is not specified.
ssh_port (int) – The SSH port number. Defaults to 22.
ssh_private_key_file (Optional[str]) – Path to the file containing the SSH RSA private key. Use "ssh-keygen -t rsa -b 2048" to generate your SSH key pairs. Required if admin_password is not specified.
Example:
Configuring a VirtualMachineSshSettings object.¶

from azure.ai.ml.entities import VirtualMachineSshSettings

ssh_settings = VirtualMachineSshSettings(
    admin_username="azureuser",
    admin_password="azureuserpassword",
    ssh_port=8888,
    ssh_private_key_file="../tests/test_configs/compute/ssh_fake_key.txt",
)
- class azure.ai.ml.entities.VmSize(name: str | None = None, family: str | None = None, v_cp_us: int | None = None, gpus: int | None = None, os_vhd_size_mb: int | None = None, max_resource_volume_mb: int | None = None, memory_gb: float | None = None, low_priority_capable: bool | None = None, premium_io: bool | None = None, supported_compute_types: List[str] | None = None)[source]¶
Virtual Machine Size.
- Parameters:
name (Optional[str]) – The virtual machine size name.
family (Optional[str]) – The virtual machine size family name.
v_cp_us (Optional[int]) – The number of vCPUs supported by the virtual machine size.
gpus (Optional[int]) – The number of GPUs supported by the virtual machine size.
os_vhd_size_mb (Optional[int]) – The OS VHD disk size, in MB, allowed by the virtual machine size.
max_resource_volume_mb (Optional[int]) – The resource volume size, in MB, allowed by the virtual machine size.
memory_gb (Optional[float]) – The amount of memory, in GB, supported by the virtual machine size.
low_priority_capable (Optional[bool]) – Specifies if the virtual machine size supports low priority VMs.
premium_io (Optional[bool]) – Specifies if the virtual machine size supports premium IO.
estimated_vm_prices (EstimatedVMPrices) – The estimated price information for using a VM.
supported_compute_types (Optional[list[str]]) – Specifies the compute types supported by the virtual machine size.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the virtual machine size content into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this virtual machine size’s content. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- class azure.ai.ml.entities.VolumeSettings(*, source: str, target: str)[source]¶
Specifies the Bind Mount settings for a Custom Application.
- class azure.ai.ml.entities.VsCodeJobService(*, endpoint: str | None = None, nodes: Literal['all'] | None = None, status: str | None = None, port: int | None = None, properties: Dict[str, str] | None = None, **kwargs: Any)[source]¶
VS Code job service configuration.
- Variables:
type (str) – Specifies the type of job service. Set automatically to “vs_code” for this class.
- Keyword Arguments:
endpoint (Optional[str]) – The endpoint URL.
port (Optional[int]) – The port for the endpoint.
nodes (Optional[Literal["all"]]) – Indicates whether the service has to run in all nodes.
properties (Optional[dict[str, str]]) – Additional properties to set on the endpoint.
status (Optional[str]) – The status of the endpoint.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Configuring a VsCodeJobService configuration on a command job.¶

from azure.ai.ml import command
from azure.ai.ml.entities import JupyterLabJobService, SshJobService, TensorBoardJobService, VsCodeJobService

node = command(
    name="interactive-command-job",
    description="description",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command="ls",
    compute="testCompute",
    services={
        "my_ssh": SshJobService(),
        "my_tensorboard": TensorBoardJobService(log_dir="~/blog"),
        "my_jupyter_lab": JupyterLabJobService(),
        "my_vscode": VsCodeJobService(),
    },
)
- class azure.ai.ml.entities.Workspace(*, name: str, description: str | None = None, tags: Dict[str, str] | None = None, display_name: str | None = None, location: str | None = None, resource_group: str | None = None, hbi_workspace: bool = False, storage_account: str | None = None, container_registry: str | None = None, key_vault: str | None = None, application_insights: str | None = None, customer_managed_key: CustomerManagedKey | None = None, image_build_compute: str | None = None, public_network_access: str | None = None, network_acls: NetworkAcls | None = None, identity: IdentityConfiguration | None = None, primary_user_assigned_identity: str | None = None, managed_network: ManagedNetwork | None = None, provision_network_now: bool | None = None, system_datastores_auth_mode: str | None = None, enable_data_isolation: bool = False, allow_roleassignment_on_rg: bool | None = None, hub_id: str | None = None, workspace_hub: str | None = None, serverless_compute: ServerlessComputeSettings | None = None, **kwargs: Any)[source]¶
Azure ML workspace.
- Parameters:
name (str) – Name of the workspace.
description (str) – Description of the workspace.
tags (dict) – Tags of the workspace.
display_name (str) – Display name for the workspace. This is non-unique within the resource group.
location (str) – The location to create the workspace in. If not specified, the same location as the resource group will be used.
resource_group (str) – Name of resource group to create the workspace in.
hbi_workspace (bool) – Whether the customer data is of high business impact (HBI), containing sensitive business information. For more information, see https://docs.microsoft.com/azure/machine-learning/concept-data-encryption#encryption-at-rest.
storage_account (str) – The resource ID of an existing storage account to use instead of creating a new one.
container_registry (str) – The resource ID of an existing container registry to use instead of creating a new one.
key_vault (str) – The resource ID of an existing key vault to use instead of creating a new one.
application_insights (str) – The resource ID of an existing application insights to use instead of creating a new one.
customer_managed_key (CustomerManagedKey) – Key vault details for encrypting data with customer-managed keys. If not specified, Microsoft-managed keys will be used by default.
image_build_compute (str) – The name of the compute target to use for building environment Docker images when the container registry is behind a VNet.
public_network_access (str) – Whether to allow public endpoint connectivity when a workspace is private link enabled.
network_acls (NetworkAcls) – The network access control list (ACL) settings of the workspace.
identity (IdentityConfiguration) – The workspace’s managed identity (user-assigned or system-assigned).
primary_user_assigned_identity (str) – The workspace’s primary user-assigned identity.
managed_network (ManagedNetwork) – The workspace’s managed network configuration.
provision_network_now (Optional[bool]) – Set to True to provision the managed VNet with the default options when creating a workspace with the managed VNet enabled; otherwise has no effect.
system_datastores_auth_mode (str) – The authentication mode for system datastores.
enable_data_isolation (bool) – A flag to determine if workspace has data isolation enabled. The flag can only be set at the creation phase, it can’t be updated.
allow_roleassignment_on_rg (Optional[bool]) – Determines whether workspace role assignment is allowed at the resource group level.
serverless_compute – The serverless compute settings for the workspace.
workspace_hub (Optional[str]) – Deprecated. The resource ID of an existing workspace hub used to create a project workspace. Use the Project class instead.
kwargs (dict) – A dictionary of additional configuration parameters.
Example:
Creating a Workspace object.¶

from azure.ai.ml.entities import Workspace

ws = Workspace(name="sample-ws", location="eastus", description="a sample workspace object")
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the workspace spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this workspace’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property discovery_url: str | None¶
Backend service base URLs for the workspace.
- Returns:
Backend service URLs of the workspace
- Return type:
- class azure.ai.ml.entities.WorkspaceConnection(*, type: str, credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration, is_shared: bool = True, metadata: Dict[str, Any] | None = None, **kwargs: Any)[source]¶
An Azure ML connection provides a secure way to store the authentication and configuration information needed to connect to and interact with external resources.
Note: For connections to OpenAI, Cognitive Search, and Cognitive Services, use the respective subclasses (e.g. ~azure.ai.ml.entities.OpenAIConnection) instead of instantiating this class directly.
- Parameters:
name (str) – Name of the connection.
target (str) – The URL or ARM resource ID of the external resource.
metadata (Optional[Dict[str, Any]]) – Metadata dictionary.
type (str) – The category of external resource for this connection. Possible values are: "git", "python_feed", "container_registry", "feature_store", "s3", "snowflake", "azure_sql_db", "azure_synapse_analytics", "azure_my_sql_db", "azure_postgres_db", "adls_gen_2", "azure_one_lake", "custom".
credentials (Union[ PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, ApiKeyConfiguration, AccountKeyConfiguration, AadCredentialConfiguration, None ]) – The credentials for authenticating to the external resource. Note that certain connection types (as defined by the type input) only accept certain types of credentials.
is_shared (bool) – For connections in a project, controls whether this connection is shared among the other projects managed by the parent hub. Defaults to True.
- dump(dest: str | PathLike | IO, **kwargs: Any) None[source]¶
Dump the connection spec into a file in yaml format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The destination to receive this connection’s spec. Must be either a path to a local file, or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable.
- property api_base: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property azure_endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property credentials: PatTokenConfiguration | SasTokenConfiguration | UsernamePasswordConfiguration | ManagedIdentityConfiguration | ServicePrincipalConfiguration | AccessKeyConfiguration | ApiKeyConfiguration | NoneCredentialConfiguration | AccountKeyConfiguration | AadCredentialConfiguration¶
Credentials for connection.
- Returns:
Credentials for connection.
- Return type:
Union[ PatTokenConfiguration, SasTokenConfiguration, UsernamePasswordConfiguration, ServicePrincipalConfiguration, AccessKeyConfiguration, NoneCredentialConfiguration, AccountKeyConfiguration, AadCredentialConfiguration, ]
- property endpoint: str | None¶
Alternate name for the target of the connection, which is used by some connection subclasses.
- Returns:
The target of the connection.
- Return type:
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property is_shared: bool¶
The Boolean describing whether this connection is shared amongst its cohort within a hub. Only applicable for connections created within a project.
- Return type:
- property metadata: Dict[str, Any] | None¶
The connection’s metadata dictionary.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property tags: Dict[str, Any] | None¶
Deprecated. Use metadata instead.
- Returns:
This connection’s metadata.
- Return type:
Optional[Dict[str, Any]]
- property target: str | None¶
The target URL of the connection.
- Returns:
Target of the connection.
- Return type:
Optional[str]
- class azure.ai.ml.entities.WorkspaceKeys(*, user_storage_key: str | None = None, user_storage_resource_id: str | None = None, app_insights_instrumentation_key: str | None = None, container_registry_credentials: ContainerRegistryCredential | None = None, notebook_access_keys: NotebookAccessKeys | None = None)[source]¶
Workspace Keys.
- Parameters:
user_storage_key (str) – Key for the storage account associated with the given workspace.
user_storage_resource_id (str) – Resource ID of the storage account associated with the given workspace.
app_insights_instrumentation_key (str) – Key for the Application Insights instance associated with the given workspace.
container_registry_credentials (ContainerRegistryCredential) – Credentials for the Azure Container Registry (ACR) associated with the given workspace.
notebook_access_keys (NotebookAccessKeys) – Keys for the notebook resource associated with the given workspace.
- azure.ai.ml.entities.WorkspaceModelReference¶
alias of
WorkspaceAssetReference