azure.ai.ml.operations package¶
Contains supported operations for Azure Machine Learning SDKv2.
Operations are classes containing the logic to interact with backend services, typically wrapping auto-generated operation calls.
- class azure.ai.ml.operations.AzureOpenAIDeploymentOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces, connections_operations: WorkspaceConnectionsOperations)[source]¶
AzureOpenAIDeploymentOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- class azure.ai.ml.operations.BatchDeploymentOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client_01_2024_preview: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credentials: TokenCredential | None = None, **kwargs: Any)[source]¶
BatchDeploymentOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client_01_2024_preview (AzureMachineLearningWorkspaces) – Service client to allow end users to operate on Azure Machine Learning Workspace resources.
all_operations (OperationsContainer) – All operations classes of an MLClient object.
credentials (TokenCredential) – Credential to use for authentication.
- begin_create_or_update(deployment: DeploymentType, *, skip_script_validation: bool = False, **kwargs: Any) LROPoller[DeploymentType][source]¶
Create or update a batch deployment.
- Parameters:
deployment (BatchDeployment) – The deployment entity.
- Keyword Arguments:
skip_script_validation (bool) – If set to True, the script validation will be skipped. Defaults to False.
- Raises:
ValidationException – Raised if BatchDeployment cannot be successfully validated. Details will be provided in the error message.
AssetException – Raised if BatchDeployment assets (e.g. Data, Code, Model, Environment) cannot be successfully validated. Details will be provided in the error message.
ModelException – Raised if BatchDeployment model cannot be successfully validated. Details will be provided in the error message.
- Returns:
A poller to track the operation status.
- Return type:
Example:
Create example.
from azure.ai.ml import load_batch_deployment
from azure.ai.ml.entities import BatchDeployment

deployment_example = load_batch_deployment(
    source="./sdk/ml/azure-ai-ml/tests/test_configs/deployments/batch/batch_deployment_anon_env_with_image.yaml",
    params_override=[{"name": f"deployment-{randint(0, 1000)}", "endpoint_name": endpoint_example.name}],
)
ml_client.batch_deployments.begin_create_or_update(deployment=deployment_example, skip_script_validation=True)
- begin_delete(name: str, endpoint_name: str) LROPoller[None][source]¶
Delete a batch deployment.
- Parameters:
name (str) – Name of the batch deployment.
endpoint_name (str) – Name of the batch endpoint.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
Example:
Delete example.
ml_client.batch_deployments.begin_delete(deployment_name, endpoint_name)
- get(name: str, endpoint_name: str) BatchDeployment[source]¶
Get a deployment resource.
- Parameters:
name (str) – Name of the deployment.
endpoint_name (str) – Name of the endpoint.
- Returns:
A deployment entity
- Return type:
Example:
Get example.
ml_client.batch_deployments.get(deployment_name, endpoint_name)
- list(endpoint_name: str) ItemPaged[BatchDeployment][source]¶
List the deployment resources under an endpoint.
- Parameters:
endpoint_name (str) – The name of the endpoint
- Returns:
An iterator of deployment entities
- Return type:
Example:
List deployment resource example.
ml_client.batch_deployments.list(endpoint_name)
- list_jobs(endpoint_name: str, *, name: str | None = None) ItemPaged[BatchJob][source]¶
List jobs under the provided batch endpoint deployment. This is only valid for batch endpoints.
- Parameters:
endpoint_name (str) – Name of endpoint.
- Keyword Arguments:
name (str) – (Optional) Name of deployment.
- Raises:
Exception – Raised if endpoint_type is not BATCH_ENDPOINT_TYPE.
- Returns:
List of jobs
- Return type:
Example:
List jobs example.
ml_client.batch_deployments.list_jobs(deployment_name, endpoint_name)
- class azure.ai.ml.operations.BatchEndpointOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client_10_2023: AzureMachineLearningServices, all_operations: OperationsContainer, credentials: TokenCredential | None = None, **kwargs: Any)[source]¶
BatchEndpointOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client_10_2023 ( AzureMachineLearningWorkspaces) – Service client to allow end users to operate on Azure Machine Learning Workspace resources.
all_operations (OperationsContainer) – All operations classes of an MLClient object.
credentials (TokenCredential) – Credential to use for authentication.
- begin_create_or_update(endpoint: BatchEndpoint) LROPoller[BatchEndpoint][source]¶
Create or update a batch endpoint.
- Parameters:
endpoint (BatchEndpoint) – The endpoint entity.
- Returns:
A poller to track the operation status.
- Return type:
Example:
Create endpoint example.
from azure.ai.ml.entities import BatchEndpoint

endpoint_example = BatchEndpoint(name=endpoint_name_2)
ml_client.batch_endpoints.begin_create_or_update(endpoint_example)
- begin_delete(name: str) LROPoller[None][source]¶
Delete a batch endpoint.
- Parameters:
name (str) – Name of the batch endpoint.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
Example:
Delete endpoint example.
ml_client.batch_endpoints.begin_delete(endpoint_name)
- get(name: str) BatchEndpoint[source]¶
Get an endpoint resource.
- Parameters:
name (str) – Name of the endpoint.
- Returns:
Endpoint object retrieved from the service.
- Return type:
Example:
Get endpoint example.
ml_client.batch_endpoints.get(endpoint_name)
- invoke(endpoint_name: str, *, deployment_name: str | None = None, inputs: Dict[str, Input] | None = None, **kwargs: Any) BatchJob[source]¶
Invokes the batch endpoint with the provided payload.
- Parameters:
endpoint_name (str) – The endpoint name.
- Keyword Arguments:
deployment_name (str) – (Optional) The name of a specific deployment to invoke. By default, requests are routed to any of the deployments according to the traffic rules.
inputs (Dict[str, Input]) – (Optional) A dictionary mapping input names to existing data assets, or public URI files or folders, to use with the deployment.
- Raises:
ValidationException – Raised if deployment cannot be successfully validated. Details will be provided in the error message.
AssetException – Raised if BatchEndpoint assets (e.g. Data, Code, Model, Environment) cannot be successfully validated. Details will be provided in the error message.
ModelException – Raised if BatchEndpoint model cannot be successfully validated. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
- Returns:
The invoked batch deployment job.
- Return type:
Example:
Invoke endpoint example.
ml_client.batch_endpoints.invoke(endpoint_name_2)
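A minimal sketch of a richer invocation, assuming illustrative endpoint, deployment, and data asset names (not taken from the sample above): a specific deployment is targeted and a registered data asset is passed through the inputs dictionary.
from azure.ai.ml import Input, MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace_name)

# Route the request to one deployment and pass a registered data asset as a folder input.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="my-batch-endpoint",
    deployment_name="my-batch-deployment",
    inputs={"input_data": Input(type="uri_folder", path="azureml:my-data-asset:1")},
)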
- list() ItemPaged[BatchEndpoint][source]¶
List endpoints of the workspace.
- Returns:
A list of endpoints
- Return type:
Example:
List example.
ml_client.batch_endpoints.list()
- list_jobs(endpoint_name: str) ItemPaged[BatchJob][source]¶
List jobs under the provided batch endpoint deployment. This is only valid for batch endpoints.
- Parameters:
endpoint_name (str) – The endpoint name
- Returns:
List of jobs
- Return type:
Example:
List jobs example.
ml_client.batch_endpoints.list_jobs(endpoint_name_2)
- class azure.ai.ml.operations.CapabilityHostsOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client_10_2024: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credentials: TokenCredential, **kwargs: Any)[source]¶
CapabilityHostsOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client_10_2024 (AzureMachineLearningWorkspaces) – Service client to allow end users to operate on Azure Machine Learning Workspace resources (ServiceClient102024Preview).
all_operations (OperationsContainer) – All operations classes of an MLClient object.
credentials (TokenCredential) – Credential to use for authentication.
kwargs (Any) – Additional keyword arguments.
- begin_create_or_update(capability_host: CapabilityHost, **kwargs: Any) LROPoller[CapabilityHost][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Begin the creation of a capability host in a Hub or Project workspace. Note that this method currently supports only create requests, not update requests.
- Parameters:
capability_host (CapabilityHost) – The CapabilityHost object containing the details of the capability host to create.
- Returns:
An LROPoller object that can be used to track the long-running operation that is creation of capability host.
- Return type:
LROPoller[CapabilityHost]
Example:
Create example.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
from azure.ai.ml.entities._workspace._ai_workspaces.capability_host import (
    CapabilityHost,
)
from azure.ai.ml.constants._workspace import CapabilityHostKind

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
resource_group = os.environ["RESOURCE_GROUP_NAME"]
hub_name = "test-hub"
project_name = "test-project"

# Create a CapabilityHost in Hub
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id,
    resource_group,
    workspace_name=hub_name,
)
capability_host = CapabilityHost(
    name="test-capability-host",
    description="some description",
    capability_host_kind=CapabilityHostKind.AGENTS,
)
result = ml_client.capability_hosts.begin_create_or_update(capability_host).result()

# Create a CapabilityHost in Project
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id,
    resource_group,
    workspace_name=project_name,
)
capability_host = CapabilityHost(
    name="test-capability-host",
    description="some description",
    capability_host_kind=CapabilityHostKind.AGENTS,
    ai_services_connections=["connection1"],
    storage_connections=["projectname/workspaceblobstore"],
    vector_store_connections=["connection1"],
)
result = ml_client.capability_hosts.begin_create_or_update(capability_host).result()
- begin_delete(name: str, **kwargs: Any) LROPoller[None][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Delete capability host.
- Parameters:
name (str) – capability host name.
- Returns:
A poller for deletion status
- Return type:
LROPoller[None]
Example:
Delete example.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
resource_group = os.environ["RESOURCE_GROUP_NAME"]
hub_name = "test-hub"
project_name = "test-project"

# Delete CapabilityHost created in Hub
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id,
    resource_group,
    workspace_name=hub_name,
)
capability_host = ml_client.capability_hosts.begin_delete(name="test-capability-host")

# Delete CapabilityHost created in Project
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id,
    resource_group,
    workspace_name=project_name,
)
capability_host = ml_client.capability_hosts.begin_delete(name="test-capability-host")
- get(name: str, **kwargs: Any) CapabilityHost[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Retrieve a capability host resource.
- Parameters:
name (str) – The name of the capability host to retrieve.
- Raises:
ValidationException – Raised if a project name or hub name is not provided in the workspace_name parameter when creating the MLClient object. Details will be provided in the error message.
ValidationException – Raised if the CapabilityHost name is not provided. Details will be provided in the error message.
- Returns:
CapabilityHost object.
- Return type:
Example:
Get example.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
resource_group = os.environ["RESOURCE_GROUP_NAME"]
hub_name = "test-hub"
project_name = "test-project"

# Get CapabilityHost created in Hub
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id,
    resource_group,
    workspace_name=hub_name,
)
capability_host = ml_client.capability_hosts.get(name="test-capability-host")

# Get CapabilityHost created in Project
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id,
    resource_group,
    workspace_name=project_name,
)
capability_host = ml_client.capability_hosts.get(name="test-capability-host")
- class azure.ai.ml.operations.ComponentOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces | AzureMachineLearningWorkspaces, all_operations: OperationsContainer, preflight_operation: DeploymentsOperations | None = None, **kwargs: Dict)[source]¶
ComponentOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – The operation scope.
operation_config (OperationConfig) – The operation configuration.
service_client (Union[ AzureMachineLearningWorkspaces, AzureMachineLearningWorkspaces]) – The service client for API operations.
all_operations (OperationsContainer) – The container for all available operations.
preflight_operation (Optional[DeploymentsOperations]) – The preflight operation for deployments.
kwargs (Dict) – Additional keyword arguments.
- archive(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Archive a component.
- Parameters:
Example:
Archive component example.
ml_client.components.archive(name=component_example.name)
- create_or_update(component: Component, version: str | None = None, *, skip_validation: bool = False, **kwargs: Any) Component[source]¶
Create or update a specified component. If there are inline-defined entities, e.g. Environment, Code, they will be created together with the component.
- Parameters:
- Keyword Arguments:
skip_validation (bool) – whether to skip validation before creating/updating the component, defaults to False
- Raises:
ValidationException – Raised if Component cannot be successfully validated. Details will be provided in the error message.
AssetException – Raised if Component assets (e.g. Data, Code, Model, Environment) cannot be successfully validated. Details will be provided in the error message.
ComponentException – Raised if Component type is unsupported. Details will be provided in the error message.
ModelException – Raised if Component model cannot be successfully validated. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
- Returns:
The specified component object.
- Return type:
Example:
Create component example.
from azure.ai.ml import load_component
from azure.ai.ml.entities._component.component import Component

component_example = load_component(
    source="./sdk/ml/azure-ai-ml/tests/test_configs/components/helloworld_component.yml",
    params_override=[{"version": "1.0.2"}],
)
component = ml_client.components.create_or_update(component_example)
- download(name: str, download_path: PathLike | str = '.', *, version: str | None = None) None[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Download the specified component and its dependencies to a local path. The local component can be used to create the component in another workspace or for offline development.
- Parameters:
- Keyword Arguments:
version (Optional[str]) – Version of the component.
- Raises:
OSError – Raised if download_path points to an existing directory that is not empty. Details will be provided in the error message.
- Returns:
The specified component object.
- Return type:
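A minimal sketch of downloading a component, assuming an illustrative component name, version, and destination folder:
# Download version 1.0.2 of a component into a local folder for offline use.
ml_client.components.download(name="helloworld_component", download_path="./component_download", version="1.0.2")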
- get(name: str, version: str | None = None, label: str | None = None) Component[source]¶
Returns information about the specified component.
- Parameters:
- Raises:
ValidationException – Raised if Component cannot be successfully identified and retrieved. Details will be provided in the error message.
- Returns:
The specified component object.
- Return type:
Example:
Get component example.
ml_client.components.get(name=component_example.name, version="1.0.2")
- list(name: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY) Iterable[Component][source]¶
List specific component or components of the workspace.
- Parameters:
name (Optional[str]) – Component name, if not set, list all components of the workspace
- Keyword Arguments:
list_view_type – View type for including/excluding (for example) archived components. Default: ACTIVE_ONLY.
- Returns:
An iterator like instance of component objects
- Return type:
Example:
List component example.
print(ml_client.components.list())
- prepare_for_sign(component: Component)[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
- restore(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Restore an archived component.
- Parameters:
Example:
Restore component example.
ml_client.components.restore(name=component_example.name)
- validate(component: Component | LambdaType, raise_on_failure: bool = False, **kwargs: Any) ValidationResult[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Validate a specified component. If there are inline-defined entities, e.g. Environment, Code, they won't be created.
- Parameters:
- Returns:
All validation errors
- Return type:
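A minimal sketch of validating a component loaded from a local YAML spec; the path is illustrative, and the passed and error_messages attributes of the returned ValidationResult are assumed here:
from azure.ai.ml import load_component

# Load a component definition from a local YAML spec and validate it without creating anything.
component_to_validate = load_component(source="./my_component.yml")
validation_result = ml_client.components.validate(component_to_validate)
print(validation_result.passed)          # True when no validation errors were found
print(validation_result.error_messages)  # mapping of YAML location to error message, if any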
- class azure.ai.ml.operations.ComputeOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces, service_client_2024: AzureMachineLearningWorkspaces, **kwargs: Dict)[source]¶
ComputeOperations.
This class should not be instantiated directly. Instead, use the compute attribute of an MLClient object.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client (AzureMachineLearningWorkspaces) – Service client to allow end users to operate on Azure Machine Learning Workspace resources.
- begin_attach(compute: Compute, **kwargs: Any) LROPoller[Compute][source]¶
Attach a compute resource to the workspace.
- Parameters:
compute (Compute) – The compute resource definition.
- Returns:
An instance of LROPoller that returns a Compute object once the long-running operation is complete.
- Return type:
Example:
Attaching a compute resource to the workspace.
from azure.ai.ml.entities import AmlCompute

compute_obj = AmlCompute(
    name=compute_name_2,
    tags={"key1": "value1", "key2": "value2"},
    min_instances=0,
    max_instances=10,
    idle_time_before_scale_down=100,
)
attached_compute = ml_client.compute.begin_attach(compute_obj)
- begin_create_or_update(compute: Compute) LROPoller[Compute][source]¶
Create and register a compute resource.
- Parameters:
compute (Compute) – The compute resource definition.
- Returns:
An instance of LROPoller that returns a Compute object once the long-running operation is complete.
- Return type:
Example:
Creating and registering a compute resource.
from azure.ai.ml.entities import AmlCompute

compute_obj = AmlCompute(
    name=compute_name_1,
    tags={"key1": "value1", "key2": "value2"},
    min_instances=0,
    max_instances=10,
    idle_time_before_scale_down=100,
)
registered_compute = ml_client.compute.begin_create_or_update(compute_obj)
- begin_delete(name: str, *, action: str = 'Delete') LROPoller[None][source]¶
Delete or detach a compute resource.
- Parameters:
name (str) – The name of the compute resource.
- Keyword Arguments:
action – Action to perform. Possible values: [“Delete”, “Detach”]. Defaults to “Delete”.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
Example:
Delete compute example.
ml_client.compute.begin_delete(compute_name_1, action="Detach")
ml_client.compute.begin_delete(compute_name_2)
- begin_restart(name: str) LROPoller[None][source]¶
Restart a compute instance.
- Parameters:
name (str) – The name of the compute instance.
- Returns:
A poller to track the operation status.
- Return type:
Example:
Restarting a stopped compute instance.
ml_client.compute.begin_restart(ci_name)
- begin_start(name: str) LROPoller[None][source]¶
Start a compute instance.
- Parameters:
name (str) – The name of the compute instance.
- Returns:
A poller to track the operation status.
- Return type:
Example:
Starting a compute instance.
ml_client.compute.begin_start(ci_name)
- begin_stop(name: str) LROPoller[None][source]¶
Stop a compute instance.
- Parameters:
name (str) – The name of the compute instance.
- Returns:
A poller to track the operation status.
- Return type:
Example:
Stopping a compute instance.
ml_client.compute.begin_stop(ci_name)
- begin_update(compute: Compute) LROPoller[Compute][source]¶
Update a compute resource. Currently only valid for AmlCompute resource types.
- Parameters:
compute (Compute) – The compute resource definition.
- Returns:
An instance of LROPoller that returns a Compute object once the long-running operation is complete.
- Return type:
Example:
Updating an AmlCompute resource.
compute_obj = ml_client.compute.get("cpu-cluster")
compute_obj.idle_time_before_scale_down = 200
updated_compute = ml_client.compute.begin_update(compute_obj)
- enable_sso(*, name: str, enable_sso: bool = True) None[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Enable SSO for a compute instance.
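A minimal sketch, assuming an illustrative compute instance name:
# Turn single sign-on on for a compute instance.
ml_client.compute.enable_sso(name="my-compute-instance", enable_sso=True)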
- get(name: str) Compute[source]¶
Get a compute resource.
- Parameters:
name (str) – Name of the compute resource.
- Returns:
A Compute object.
- Return type:
Example:
Retrieving a compute resource from a workspace.
cpu_cluster = ml_client.compute.get("cpu-cluster")
- list(*, compute_type: str | None = None) Iterable[Compute][source]¶
List computes of the workspace.
- Keyword Arguments:
compute_type (Optional[str]) – The type of the compute to be listed, case-insensitive. Defaults to AMLCompute.
- Returns:
An iterator like instance of Compute objects.
- Return type:
Example:
Retrieving a list of the AzureML Kubernetes compute resources in a workspace.
compute_list = ml_client.compute.list(compute_type="AMLK8s")  # cspell:disable-line
- list_nodes(name: str) Iterable[AmlComputeNodeInfo][source]¶
Retrieve a list of a compute resource’s nodes.
- Parameters:
name (str) – Name of the compute resource.
- Returns:
An iterator-like instance of AmlComputeNodeInfo objects.
- Return type:
Example:
Retrieving a list of nodes from a compute resource.
node_list = ml_client.compute.list_nodes(name="cpu-cluster")
- list_sizes(*, location: str | None = None, compute_type: str | None = None) Iterable[VmSize][source]¶
List the supported VM sizes in a location.
- Keyword Arguments:
- Returns:
An iterator over virtual machine size objects.
- Return type:
Iterable[VmSize]
Example:
Listing the supported VM sizes in the workspace location.
size_list = ml_client.compute.list_sizes()
- list_usage(*, location: str | None = None) Iterable[Usage][source]¶
List the current usage information as well as AzureML resource limits for the given subscription and location.
- Keyword Arguments:
location (Optional[str]) – The location for which resource usage is queried. Defaults to workspace location.
- Returns:
An iterator over current usage info objects.
- Return type:
Example:
Listing resource usage for the workspace location.
usage_list = ml_client.compute.list_usage()
- class azure.ai.ml.operations.DataOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces | AzureMachineLearningWorkspaces, service_client_012024_preview: AzureMachineLearningWorkspaces, datastore_operations: DatastoreOperations, **kwargs: Any)[source]¶
DataOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client (Union[ AzureMachineLearningWorkspaces, AzureMachineLearningWorkspaces]) – Service client to allow end users to operate on Azure Machine Learning Workspace resources (ServiceClient042023Preview or ServiceClient102021Dataplane).
datastore_operations (DatastoreOperations) – Represents a client for performing operations on Datastores.
- archive(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Archive a data asset.
- Parameters:
- Returns:
None
Example:
Archive data asset example.
ml_client.data.archive("data_asset_name")
- create_or_update(data: Data) Data[source]¶
Returns created or updated data asset.
If not already in storage, asset will be uploaded to the workspace’s blob storage.
- Parameters:
data (azure.ai.ml.entities.Data) – Data asset object.
- Raises:
AssetPathException – Raised when the Data artifact path is already linked to another asset
ValidationException – Raised if Data cannot be successfully validated. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
- Returns:
Data asset object.
- Return type:
Example:
Create data assets example.
from azure.ai.ml.entities import Data

data_asset_example = Data(name=data_asset_name, version="2.0", path="./sdk/ml/azure-ai-ml/samples/src")
ml_client.data.create_or_update(data_asset_example)
- get(name: str, version: str | None = None, label: str | None = None) Data[source]¶
Get the specified data asset.
- Parameters:
- Raises:
ValidationException – Raised if Data cannot be successfully identified and retrieved. Details will be provided in the error message.
- Returns:
Data asset object.
- Return type:
Example:
Get data assets example.
ml_client.data.get(name="data_asset_name", version="2.0")
- import_data(data_import: DataImport, **kwargs: Any) PipelineJob[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Returns the data import job that is creating the data asset.
- Parameters:
data_import (azure.ai.ml.entities.DataImport) – DataImport object.
- Returns:
data import job object.
- Return type:
Example:
Import data assets example.
from azure.ai.ml.entities._data_import.data_import import DataImport
from azure.ai.ml.entities._inputs_outputs.external_data import Database

database_example = Database(query="SELECT ID FROM DataTable", connection="azureml:my_azuresqldb_connection")
data_import_example = DataImport(
    name="data_asset_name", path="azureml://datastores/workspaceblobstore/paths/", source=database_example
)
ml_client.data.import_data(data_import_example)
- list(name: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY) ItemPaged[Data][source]¶
List the data assets of the workspace.
- Parameters:
name (Optional[str]) – Name of a specific data asset, optional.
- Keyword Arguments:
list_view_type – View type for including/excluding (for example) archived data assets. Default: ACTIVE_ONLY.
- Returns:
An iterator like instance of Data objects
- Return type:
Example:
List data assets example.
ml_client.data.list(name="data_asset_name")
- list_materialization_status(name: str, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY, **kwargs: Any) Iterable[PipelineJob][source]¶
List materialization jobs of the asset.
- Parameters:
name (str) – name of asset being created by the materialization jobs.
- Keyword Arguments:
list_view_type (Optional[ListViewType]) – View type for including/excluding (for example) archived jobs. Default: ACTIVE_ONLY.
- Returns:
An iterator like instance of Job objects.
- Return type:
Example:
List materialization jobs example.
ml_client.data.list_materialization_status("data_asset_name")
- mount(path: str, mount_point: str | None = None, mode: str = 'ro_mount', debug: bool = False, persistent: bool = False, **_kwargs) None[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Mount a data asset to a local path, so that you can access the data inside it with any tools of your choice.
- Parameters:
path (str) – The data asset path to mount, in the form of azureml:<name> or azureml:<name>:<version>.
mount_point (str) – A local path used as mount point.
mode (str) – Mount mode. Only ro_mount (read-only) is supported for data asset mount.
debug (bool) – Whether to enable verbose logging.
persistent (bool) – Whether to persist the mount after reboot. Applies only when running on a Compute Instance, where the 'CI_NAME' environment variable is set.
- Returns:
None
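A minimal sketch of mounting a data asset, assuming an illustrative asset name and local mount point:
# Mount version 1 of a data asset read-only under a local folder.
ml_client.data.mount(
    path="azureml:my_data_asset:1",
    mount_point="/tmp/my_data_asset",
    mode="ro_mount",
)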
- restore(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Restore an archived data asset.
- Parameters:
- Returns:
None
Example:
Restore data asset example.
ml_client.data.restore("data_asset_name")
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Share a data asset from workspace to registry.
- Parameters:
- Keyword Arguments:
- Returns:
Data asset object.
- Return type:
Example:
Share data asset example.
ml_client.data.share(
    name="data_asset_name",
    version="2.0",
    registry_name="my-registry",
    share_with_name="transformed-nyc-taxi-data-shared-from-ws",
    share_with_version="2.0",
)
- class azure.ai.ml.operations.DatastoreOperations(operation_scope: OperationScope, operation_config: OperationConfig, serviceclient_2024_01_01_preview: AzureMachineLearningWorkspaces, serviceclient_2024_07_01_preview: AzureMachineLearningWorkspaces, **kwargs: Dict)[source]¶
Represents a client for performing operations on Datastores.
You should not instantiate this class directly. Instead, you should create MLClient and use this client via the property MLClient.datastores
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
serviceclient_2024_01_01_preview (AzureMachineLearningWorkspaces) – Service client to allow end users to operate on Azure Machine Learning Workspace resources.
serviceclient_2024_07_01_preview (AzureMachineLearningWorkspaces) – Service client to allow end users to operate on Azure Machine Learning Workspace resources.
- create_or_update(datastore: Datastore) Datastore[source]¶
Attaches the passed in datastore to the workspace or updates the datastore if it already exists.
- Parameters:
datastore (Datastore) – The configuration of the datastore to attach.
- Returns:
The attached datastore.
- Return type:
Example:
Create datastore example.
from azure.ai.ml.entities import AzureBlobDatastore

datastore_example = AzureBlobDatastore(
    name="azure_blob_datastore",
    account_name="sdkvnextclidcdnrc7zb7xyy",  # cspell:disable-line
    container_name="testblob",
)
ml_client.datastores.create_or_update(datastore_example)
- delete(name: str) None[source]¶
Deletes a datastore reference with the given name from the workspace. This method does not delete the actual datastore or underlying data in the datastore.
- Parameters:
name (str) – Name of the datastore
Example:
Delete datastore example.
ml_client.datastores.delete("azure_blob_datastore")
- get(name: str, *, include_secrets: bool = False) Datastore[source]¶
Returns information about the datastore referenced by the given name.
- Parameters:
name (str) – Datastore name
- Keyword Arguments:
include_secrets (bool) – Include datastore secrets in the returned datastore, defaults to False
- Returns:
Datastore with the specified name.
- Return type:
Example:
Get datastore example.
ml_client.datastores.get("azure_blob_datastore")
- get_default(*, include_secrets: bool = False) Datastore[source]¶
Returns the workspace’s default datastore.
- Keyword Arguments:
include_secrets (bool) – Include datastore secrets in the returned datastore, defaults to False
- Returns:
The default datastore.
- Return type:
Example:
Get default datastore example.
ml_client.datastores.get_default()
- list(*, include_secrets: bool = False) Iterable[Datastore][source]¶
Lists all datastores and associated information within a workspace.
- Keyword Arguments:
include_secrets (bool) – Include datastore secrets in returned datastores, defaults to False
- Returns:
An iterator like instance of Datastore objects
- Return type:
Example:
List datastore example.
ml_client.datastores.list()
- mount(path: str, mount_point: str | None = None, mode: str = 'ro_mount', debug: bool = False, persistent: bool = False, **_kwargs) None[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Mount a datastore to a local path, so that you can access the data inside it with any tools of your choice.
- Parameters:
path (str) – The data store path to mount, in the form of <name> or azureml://datastores/<name>.
mount_point (str) – A local path used as mount point.
mode (str) – Mount mode, either ro_mount (read-only) or rw_mount (read-write).
debug (bool) – Whether to enable verbose logging.
persistent (bool) – Whether to persist the mount after reboot. Applies only when running on a Compute Instance, where the 'CI_NAME' environment variable is set.
- Returns:
None
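A minimal sketch of mounting the workspace blob datastore, assuming an illustrative local mount point:
# Mount the default workspace blob datastore read-write under a local folder.
ml_client.datastores.mount(
    path="azureml://datastores/workspaceblobstore",
    mount_point="/tmp/workspaceblobstore",
    mode="rw_mount",
)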
- class azure.ai.ml.operations.EnvironmentOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces | AzureMachineLearningWorkspaces, all_operations: OperationsContainer, **kwargs: Any)[source]¶
EnvironmentOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client (Union[ AzureMachineLearningWorkspaces, AzureMachineLearningWorkspaces]) – Service client to allow end users to operate on Azure Machine Learning Workspace resources (ServiceClient042023Preview or ServiceClient102021Dataplane).
all_operations (OperationsContainer) – All operations classes of an MLClient object.
- archive(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Archive an environment or an environment version.
- Parameters:
Example:
Archive example.
ml_client.environments.archive("create-environment", "2.0")
- create_or_update(environment: Environment) Environment[source]¶
Returns created or updated environment asset.
- Parameters:
environment (Environment) – Environment object
- Raises:
ValidationException – Raised if Environment cannot be successfully validated. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
- Returns:
Created or updated Environment object
- Return type:
Example:
Create environment.
from azure.ai.ml.entities import BuildContext, Environment

env_docker_context = Environment(
    build=BuildContext(
        path="./sdk/ml/azure-ai-ml/tests/test_configs/environment/environment_files",
        dockerfile_path="DockerfileNonDefault",
    ),
    name="create-environment",
    version="2.0",
    description="Environment created from a Docker context.",
)
ml_client.environments.create_or_update(env_docker_context)
- get(name: str, version: str | None = None, label: str | None = None) Environment[source]¶
Returns the specified environment asset.
- Parameters:
- Raises:
ValidationException – Raised if Environment cannot be successfully validated. Details will be provided in the error message.
- Returns:
Environment object
- Return type:
Example:
Get example.
ml_client.environments.get("create-environment", "2.0")
- list(name: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY) Iterable[Environment][source]¶
List all environment assets in workspace.
- Parameters:
name (Optional[str]) – Name of the environment.
- Keyword Arguments:
list_view_type – View type for including/excluding (for example) archived environments. Default: ACTIVE_ONLY.
- Returns:
An iterator like instance of Environment objects.
- Return type:
Example:
List example.
ml_client.environments.list()
- restore(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Restore an archived environment version.
- Parameters:
Example:
Restore example.
ml_client.environments.restore("create-environment", "2.0")
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Share an environment asset from workspace to registry.
- Parameters:
- Keyword Arguments:
- Returns:
Environment asset object.
- Return type:
- class azure.ai.ml.operations.EvaluatorOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces | AzureMachineLearningWorkspaces, datastore_operations: DatastoreOperations, all_operations: OperationsContainer | None = None, **kwargs)[source]¶
EvaluatorOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client (Union[azure.ai.ml._restclient.v2023_04_01_preview._azure_machine_learning_workspaces.AzureMachineLearningWorkspaces, azure.ai.ml._restclient.v2021_10_01_dataplanepreview._azure_machine_learning_workspaces.AzureMachineLearningWorkspaces]) – Service client to allow end users to operate on Azure Machine Learning Workspace resources (ServiceClient082023Preview or ServiceClient102021Dataplane).
datastore_operations (DatastoreOperations) – Represents a client for performing operations on Datastores.
all_operations (OperationsContainer) – All operations classes of an MLClient object.
kwargs (dict) – A dictionary of additional configuration parameters.
- create_or_update(model: Model | WorkspaceAssetReference, **kwargs: Any) Model[source]¶
Returns created or updated model asset.
- Parameters:
model (Model) – Model asset object.
- Raises:
AssetPathException – Raised when the Model artifact path is already linked to another asset
ValidationException – Raised if Model cannot be successfully validated. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
- Returns:
Model asset object.
- Return type:
- download(name: str, version: str, download_path: PathLike | str = '.', **kwargs: Any) None[source]¶
Download files related to a model.
- Parameters:
- Raises:
ResourceNotFoundError – Raised if a model matching the provided name cannot be found.
- get(name: str, *, version: str | None = None, label: str | None = None, **kwargs) Model[source]¶
Returns information about the specified model asset.
- Parameters:
name (str) – Name of the model.
- Keyword Arguments:
- Raises:
ValidationException – Raised if Model cannot be successfully validated. Details will be provided in the error message.
- Returns:
Model asset object.
- Return type:
- list(name: str, stage: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY, **kwargs: Any) Iterable[Model][source]¶
List all model assets in workspace.
- Parameters:
- Keyword Arguments:
list_view_type (ListViewType) – View type for including/excluding (for example) archived models. Defaults to ListViewType.ACTIVE_ONLY.
- Returns:
An iterator like instance of Model objects
- Return type:
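A minimal sketch of registering and retrieving an evaluator, assuming the client is exposed as ml_client.evaluators and that the asset name, version, and local path are illustrative:
from azure.ai.ml.entities import Model

# Register a local evaluator as a model asset, then fetch it back by name and version.
evaluator_example = Model(name="my-evaluator", version="1", path="./evaluator_src", type="custom_model")
created_evaluator = ml_client.evaluators.create_or_update(evaluator_example)
fetched_evaluator = ml_client.evaluators.get(name="my-evaluator", version="1")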
- class azure.ai.ml.operations.FeatureSetOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningServices, service_client_for_jobs: AzureMachineLearningWorkspaces, datastore_operations: DatastoreOperations, **kwargs: Dict)[source]¶
FeatureSetOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_backfill(*, name: str, version: str, feature_window_start_time: datetime | None = None, feature_window_end_time: datetime | None = None, display_name: str | None = None, description: str | None = None, tags: Dict[str, str] | None = None, compute_resource: MaterializationComputeResource | None = None, spark_configuration: Dict[str, str] | None = None, data_status: List[str | DataAvailabilityStatus] | None = None, job_id: str | None = None, **kwargs: Dict) LROPoller[FeatureSetBackfillMetadata][source]¶
Backfill.
- Keyword Arguments:
name (str) – Feature set name. This is case-sensitive.
version (str) – Version identifier. This is case-sensitive.
feature_window_start_time (datetime) – Start time of the feature window to be materialized.
feature_window_end_time (datetime) – End time of the feature window to be materialized.
display_name (str) – Specifies the display name.
description (str) – Specifies description.
compute_resource (MaterializationComputeResource) – Specifies the compute resource settings.
spark_configuration (dict[str, str]) – Specifies the spark compute settings.
data_status (list[str or DataAvailabilityStatus]) – Specifies the data status that you want to backfill.
job_id (str) – The job id.
- Returns:
An instance of LROPoller that returns FeatureSetBackfillMetadata.
- Return type:
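A minimal sketch of a backfill request, assuming an illustrative feature set name, version, and compute instance type:
from datetime import datetime, timedelta
from azure.ai.ml.entities import MaterializationComputeResource

# Backfill the last seven days of the feature window for a feature set.
poller = ml_client.feature_sets.begin_backfill(
    name="transactions",
    version="1",
    feature_window_start_time=datetime.utcnow() - timedelta(days=7),
    feature_window_end_time=datetime.utcnow(),
    compute_resource=MaterializationComputeResource(instance_type="standard_e8s_v3"),
    spark_configuration={"spark.driver.cores": "4"},
)
backfill_metadata = poller.result()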
- begin_create_or_update(featureset: FeatureSet, **kwargs: Dict) LROPoller[FeatureSet][source]¶
Create or update FeatureSet
- Parameters:
featureset (FeatureSet) – FeatureSet definition.
- Returns:
An instance of LROPoller that returns a FeatureSet.
- Return type:
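A minimal sketch of creating a feature set, assuming an illustrative entity reference and a local specification folder:
from azure.ai.ml.entities import FeatureSet, FeatureSetSpecification

transactions_featureset = FeatureSet(
    name="transactions",
    version="1",
    description="transaction features",
    entities=["azureml:account:1"],
    specification=FeatureSetSpecification(path="./featureset_spec_folder"),
)
poller = ml_client.feature_sets.begin_create_or_update(transactions_featureset)
created_featureset = poller.result()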
- get(name: str, version: str, **kwargs: Dict) FeatureSet[source]¶
Get the specified FeatureSet asset.
- Parameters:
- Raises:
ValidationException – Raised if FeatureSet cannot be successfully identified and retrieved. Details will be provided in the error message.
HttpResponseError – Raised if the corresponding name and version cannot be retrieved from the service.
- Returns:
FeatureSet asset object.
- Return type:
- get_feature(feature_set_name: str, version: str, *, feature_name: str, **kwargs: Dict) Feature | None[source]¶
Get Feature
- Parameters:
- Keyword Arguments:
- Returns:
Feature object
- Return type:
- list(name: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY, **kwargs: Dict) ItemPaged[FeatureSet][source]¶
List the FeatureSet assets of the workspace.
- Parameters:
name (Optional[str]) – Name of a specific FeatureSet asset, optional.
- Keyword Arguments:
list_view_type – View type for including/excluding (for example) archived FeatureSet assets. Defaults to ACTIVE_ONLY.
- Returns:
An iterator like instance of FeatureSet objects
- Return type:
- list_features(feature_set_name: str, version: str, *, feature_name: str | None = None, description: str | None = None, tags: str | None = None, **kwargs: Dict) ItemPaged[Feature][source]¶
List features
- Parameters:
- Keyword Arguments:
- Returns:
An iterator like instance of Feature objects
- Return type:
- list_materialization_operations(name: str, version: str, *, feature_window_start_time: str | datetime | None = None, feature_window_end_time: str | datetime | None = None, filters: str | None = None, **kwargs: Dict) ItemPaged[FeatureSetMaterializationMetadata][source]¶
List Materialization operation.
- Parameters:
- Keyword Arguments:
feature_window_start_time (Union[str, datetime]) – Start time of the feature window to filter materialization jobs.
feature_window_end_time (Union[str, datetime]) – End time of the feature window to filter materialization jobs.
filters (str) – Comma-separated list of tag names (and optionally values). Example: tag1,tag2=value2.
- Returns:
An iterator like instance of FeatureSetMaterializationMetadata objects
- Return type:
- class azure.ai.ml.operations.FeatureStoreEntityOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningServices, **kwargs: Dict)[source]¶
FeatureStoreEntityOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create_or_update(feature_store_entity: FeatureStoreEntity, **kwargs: Dict) LROPoller[FeatureStoreEntity][source]¶
Create or update FeatureStoreEntity
- Parameters:
feature_store_entity (FeatureStoreEntity) – FeatureStoreEntity definition.
- Returns:
An instance of LROPoller that returns a FeatureStoreEntity.
- Return type:
- get(name: str, version: str, **kwargs: Dict) FeatureStoreEntity[source]¶
Get the specified FeatureStoreEntity asset.
- Parameters:
- Raises:
ValidationException – Raised if FeatureStoreEntity cannot be successfully identified and retrieved. Details will be provided in the error message.
- Returns:
FeatureStoreEntity asset object.
- Return type:
- list(name: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY, **kwargs: Dict) ItemPaged[FeatureStoreEntity][source]¶
List the FeatureStoreEntity assets of the workspace.
- Parameters:
name (Optional[str]) – Name of a specific FeatureStoreEntity asset, optional.
- Keyword Arguments:
list_view_type (Optional[ListViewType]) – View type for including/excluding (for example) archived FeatureStoreEntity assets. Default: ACTIVE_ONLY.
- Returns:
An iterator like instance of FeatureStoreEntity objects
- Return type:
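A minimal sketch of creating a feature store entity, assuming an illustrative index column:
from azure.ai.ml.entities import DataColumn, DataColumnType, FeatureStoreEntity

# Define an entity keyed on accountID and register it.
account_entity = FeatureStoreEntity(
    name="account",
    version="1",
    index_columns=[DataColumn(name="accountID", type=DataColumnType.STRING)],
)
poller = ml_client.feature_store_entities.begin_create_or_update(account_entity)
created_entity = poller.result()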
- class azure.ai.ml.operations.FeatureStoreOperations(operation_scope: OperationScope, service_client: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credentials: TokenCredential | None = None, **kwargs: Dict)[source]¶
FeatureStoreOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create(feature_store: FeatureStore, *, grant_materialization_permissions: bool = True, update_dependent_resources: bool = False, **kwargs: Dict) LROPoller[FeatureStore][source]¶
Create a new FeatureStore.
Returns the feature store if already exists.
- Parameters:
feature_store (FeatureStore) – FeatureStore definition.
- Keyword Arguments:
grant_materialization_permissions (bool) – Whether or not to grant materialization permissions. Defaults to True.
update_dependent_resources – Whether or not to update dependent resources. Defaults to False.
- Returns:
An instance of LROPoller that returns a FeatureStore.
- Return type:
- begin_delete(name: str, *, delete_dependent_resources: bool = False, **kwargs: Any) LROPoller[None][source]¶
Delete a FeatureStore.
- Parameters:
name (str) – Name of the FeatureStore
- Keyword Arguments:
delete_dependent_resources (bool) – Whether to delete resources associated with the feature store, i.e., container registry, storage account, key vault, and application insights. The default is False. Set to True to delete these resources.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
- begin_provision_network(*, feature_store_name: str | None = None, include_spark: bool = False, **kwargs: Any) LROPoller[ManagedNetworkProvisionStatus][source]¶
Triggers the feature store to provision the managed network. Setting include_spark to True prepares the feature store managed network to support Spark.
- Keyword Arguments:
- Returns:
An instance of LROPoller.
- Return type:
- begin_update(feature_store: FeatureStore, *, grant_materialization_permissions: bool = True, update_dependent_resources: bool = False, **kwargs: Any) LROPoller[FeatureStore][source]¶
Update the friendly name, description, online store connection, offline store connection, materialization identities, or tags of a feature store.
- Parameters:
feature_store (FeatureStore) – FeatureStore resource.
- Keyword Arguments:
grant_materialization_permissions (bool) – Whether or not to grant materialization permissions. Defaults to True.
update_dependent_resources – Gives your consent to update the feature store dependent resources. Note that updating the feature store attached Azure Container Registry resource may break lineage of previous jobs or your ability to rerun earlier jobs in this feature store. Also, updating the feature store attached Azure Application Insights resource may break lineage of deployed inference endpoints in this feature store. Only set this argument if you are sure that you want to perform this operation. If this argument is not set, the command to update Azure Container Registry and Azure Application Insights will fail.
application_insights (Optional[str]) – Application insights resource for feature store. Defaults to None.
container_registry (Optional[str]) – Container registry resource for feature store. Defaults to None.
- Returns:
An instance of LROPoller that returns a FeatureStore.
- Return type:
- get(name: str, **kwargs: Any) FeatureStore[source]¶
Get a feature store by name.
- Parameters:
name (str) – Name of the feature store.
- Raises:
HttpResponseError – Raised if the corresponding name and version cannot be retrieved from the service.
- Returns:
The feature store with the provided name.
- Return type:
- list(*, scope: str = 'resource_group', **kwargs: Dict) Iterable[FeatureStore][source]¶
List all feature stores that the user has access to in the current resource group or subscription.
- Keyword Arguments:
scope (str) – scope of the listing, “resource_group” or “subscription”, defaults to “resource_group”
- Returns:
An iterator like instance of FeatureStore objects
- Return type:
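A minimal sketch of creating and retrieving a feature store, assuming an illustrative name and location:
from azure.ai.ml.entities import FeatureStore

feature_store = FeatureStore(name="my-feature-store", location="eastus")
poller = ml_client.feature_stores.begin_create(feature_store)
created_feature_store = poller.result()
print(ml_client.feature_stores.get(name="my-feature-store").name)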
- class azure.ai.ml.operations.IndexOperations(*, operation_scope: OperationScope, operation_config: OperationConfig, credential: TokenCredential, datastore_operations: DatastoreOperations, all_operations: OperationsContainer, **kwargs: Any)[source]¶
Represents a client for performing operations on index assets.
You should not instantiate this class directly. Instead, you should create MLClient and use this client via the property MLClient.index
- build_index(*, name: str, embeddings_model_config: ModelConfiguration, data_source_citation_url: str | None = None, tokens_per_chunk: int | None = None, token_overlap_across_chunks: int | None = None, input_glob: str | None = None, document_path_replacement_regex: str | None = None, index_config: AzureAISearchConfig | None = None, input_source: IndexDataSource | str, input_source_credential: ManagedIdentityConfiguration | UserIdentityConfiguration | None = None) Index | Job[source]¶
Builds an index on the cloud using the Azure AI Resources service.
- Keyword Arguments:
name (str) – The name of the index to be created.
embeddings_model_config (ModelConfiguration) – Model config for the embedding model.
data_source_citation_url (Optional[str]) – The URL of the data source.
tokens_per_chunk (Optional[int]) – The size of chunks to be used for indexing.
token_overlap_across_chunks (Optional[int]) – The amount of overlap between chunks.
input_glob (Optional[str]) – The glob pattern to be used for indexing.
document_path_replacement_regex (Optional[str]) – The regex pattern for replacing document paths.
index_config (Optional[AzureAISearchConfig]) – The configuration for the ACS output.
input_source (Union[IndexDataSource, str]) – The input source for the index.
input_source_credential (Optional[Union[ManagedIdentityConfiguration, UserIdentityConfiguration]]) – The identity to be used for the index.
- Returns:
If input_source is a GitSource, returns a created DataIndex Job object.
- Return type:
- Raises:
ValueError – Raised if input_source is not of type str or azure.ai.ml.entities._indexes.LocalSource.
- create_or_update(index: Index, **kwargs) Index[source]¶
Returns created or updated index asset.
If not already in storage, asset will be uploaded to the workspace’s default datastore.
- get(name: str, *, version: str | None = None, label: str | None = None, **kwargs) Index[source]¶
Returns information about the specified index asset.
- Parameters:
name (str) – Name of the index asset.
- Keyword Arguments:
- Raises:
ValidationException – Raised if Index cannot be successfully validated. Details will be provided in the error message.
- Returns:
Index asset object.
- Return type:
- list(name: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY, **kwargs) Iterable[Index][source]¶
List all Index assets in workspace.
If an Index is specified by name, list all versions of that Index.
- Parameters:
name (Optional[str]) – Name of the index.
- Keyword Arguments:
list_view_type (ListViewType) – View type for including/excluding (for example) archived indexes. Defaults to ListViewType.ACTIVE_ONLY.
- Returns:
An iterator like instance of Index objects
- Return type:
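A minimal sketch of retrieving and listing index assets through the MLClient.index property, assuming illustrative names:
# Fetch a specific index version, then list every index in the workspace.
fetched_index = ml_client.index.get(name="my-index", version="1")
for index in ml_client.index.list():
    print(index.name, index.version)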
- class azure.ai.ml.operations.JobOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client_02_2023_preview: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credential: TokenCredential, **kwargs: Any)[source]¶
Initiates an instance of JobOperations
This class should not be instantiated directly. Instead, use the jobs attribute of an MLClient object.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client_02_2023_preview (AzureMachineLearningWorkspaces) – Service client to allow end users to operate on Azure Machine Learning Workspace resources.
all_operations (OperationsContainer) – All operations classes of an MLClient object.
credential (TokenCredential) – Credential to use for authentication.
- archive(name: str) None[source]¶
Archives a job.
- Parameters:
name (str) – The name of the job.
- Raises:
azure.core.exceptions.ResourceNotFoundError – Raised if no job with the given name can be found.
Example:
Archiving a job.
ml_client.jobs.archive(name=job_name)
- begin_cancel(name: str, **kwargs: Any) LROPoller[None][source]¶
Cancels a job.
- Parameters:
name (str) – The name of the job.
- Raises:
azure.core.exceptions.ResourceNotFoundError – Raised if no job with the given name can be found.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
Example:
Canceling the job named “iris-dataset-job-1” and checking the poller for status.¶cancel_poller = ml_client.jobs.begin_cancel(job_name) print(cancel_poller.result())
- create_or_update(job: Job, *, description: str | None = None, compute: str | None = None, tags: dict | None = None, experiment_name: str | None = None, skip_validation: bool = False, **kwargs: Any) Job[source]¶
Creates or updates a job. If entities such as Environment or Code are defined inline, they’ll be created together with the job.
- Parameters:
job (Job) – The job object.
- Keyword Arguments:
description (Optional[str]) – The job description.
compute (Optional[str]) – The compute target for the job.
tags (Optional[dict]) – The tags for the job.
experiment_name (Optional[str]) – The name of the experiment the job will be created under. If None is provided, job will be created under experiment ‘Default’.
skip_validation (bool) – Specifies whether or not to skip validation before creating or updating the job. Note that validation for dependent resources such as an anonymous component will not be skipped. Defaults to False.
- Raises:
Union[UserErrorException, ValidationException] – Raised if Job cannot be successfully validated. Details will be provided in the error message.
AssetException – Raised if Job assets (e.g. Data, Code, Model, Environment) cannot be successfully validated. Details will be provided in the error message.
ModelException – Raised if Job model cannot be successfully validated. Details will be provided in the error message.
JobException – Raised if the Job object or its attributes are not correctly formatted. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
DockerEngineNotAvailableError – Raised if Docker Engine is not available for local job.
- Returns:
Created or updated job.
- Return type:
Example:
Creating a new job and then updating its compute.¶from azure.ai.ml import load_job created_job = ml_client.jobs.create_or_update( name=job_name, job=load_job( "./sdk/ml/azure-ai-ml/tests/test_configs/command_job/command_job_test_local_env.yml", params_override=[{"name": job_name}, {"compute": "cpucluster"}], ), )
- download(name: str, *, download_path: PathLike | str = '.', output_name: str | None = None, all: bool = False) None[source]¶
Downloads the logs and output of a job.
- Parameters:
name (str) – The name of a job.
- Keyword Arguments:
download_path (Union[PathLike, str]) – The local path to use as the download destination. Defaults to ".".
output_name (Optional[str]) – The name of a specific output to download. Defaults to None.
all (bool) – Specifies whether to download the logs and all named outputs. Defaults to False.
- Raises:
JobException – Raised if Job is not yet in a terminal state. Details will be provided in the error message.
MlException – Raised if logs and outputs cannot be successfully downloaded. Details will be provided in the error message.
Example:
Downloading all logs and named outputs of the job “job-1” into local directory “job-1-logs”.¶ml_client.jobs.download(name=job_name, download_path="./job-1-logs", all=True)
- get(name: str) Job[source]¶
Gets a job resource.
- Parameters:
name (str) – The name of the job.
- Raises:
azure.core.exceptions.ResourceNotFoundError – Raised if no job with the given name can be found.
UserErrorException – Raised if the name parameter is not a string.
- Returns:
Job object retrieved from the service.
- Return type:
Example:
Retrieving a job named “iris-dataset-job-1”.¶retrieved_job = ml_client.jobs.get(job_name)
- list(*, parent_job_name: str | None = None, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY, **kwargs: Any) Iterable[Job][source]¶
Lists jobs in the workspace.
- Keyword Arguments:
parent_job_name (Optional[str]) – When provided, only returns jobs that are children of the named job. Defaults to None, listing all jobs in the workspace.
list_view_type (ListViewType) – The view type for including/excluding archived jobs. Defaults to ~azure.mgmt.machinelearningservices.models.ListViewType.ACTIVE_ONLY, excluding archived jobs.
- Returns:
An iterator-like instance of Job objects.
- Return type:
Example:
Retrieving a list of the archived jobs in a workspace with parent job named “iris-dataset-jobs”.¶from azure.ai.ml._restclient.v2023_04_01_preview.models import ListViewType list_of_jobs = ml_client.jobs.list(parent_job_name=job_name, list_view_type=ListViewType.ARCHIVED_ONLY)
- restore(name: str) None[source]¶
Restores an archived job.
- Parameters:
name (str) – The name of the job.
- Raises:
azure.core.exceptions.ResourceNotFoundError – Raised if no job with the given name can be found.
Example:
Restoring an archived job.¶ml_client.jobs.restore(name=job_name)
- show_services(name: str, node_index: int = 0) Dict[str, ServiceInstance] | None[source]¶
Gets services associated with a job’s node.
- Parameters:
name (str) – The name of the job.
node_index (int) – The index of the node. Defaults to 0.
- Returns:
The services associated with the job for the given node.
- Return type:
Example:
Retrieving the services associated with a job’s 1st node.¶job_services = ml_client.jobs.show_services(job_name)
- stream(name: str) None[source]¶
Streams the logs of a running job.
- Parameters:
name (str) – The name of the job.
- Raises:
azure.core.exceptions.ResourceNotFoundError – Raised if no job with the given name can be found.
Example:
Streaming a running job.¶running_job = ml_client.jobs.create_or_update( load_job( "./sdk/ml/azure-ai-ml/tests/test_configs/command_job/command_job_test_local_env.yml", params_override=[{"name": job_name}, {"compute": "cpucluster"}], ) ) ml_client.jobs.stream(running_job.name)
- validate(job: Job, *, raise_on_failure: bool = False, **kwargs: Any) ValidationResult[source]¶
Validates a Job object before submitting to the service. Anonymous assets may be created if there are inline defined entities such as Component, Environment, and Code. Only pipeline jobs are supported for validation currently.
- Parameters:
job (Job) – The job object to be validated.
- Keyword Arguments:
raise_on_failure (bool) – Specifies if an error should be raised if validation fails. Defaults to False.
- Returns:
A ValidationResult object containing all found errors.
- Return type:
Example:
Validating a PipelineJob object and printing out the found errors.¶from azure.ai.ml import load_job from azure.ai.ml.entities import PipelineJob pipeline_job: PipelineJob = load_job( "./sdk/ml/azure-ai-ml/tests/test_configs/pipeline_jobs/invalid/combo.yml", params_override=[{"name": job_name}, {"compute": "cpucluster"}], ) print(ml_client.jobs.validate(pipeline_job).error_messages)
- class azure.ai.ml.operations.MarketplaceSubscriptionOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces)[source]¶
MarketplaceSubscriptionOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create_or_update(marketplace_subscription: MarketplaceSubscription, **kwargs) LROPoller[MarketplaceSubscription][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Create or update a Marketplace Subscription.
- Parameters:
marketplace_subscription (MarketplaceSubscription) – The marketplace subscription entity.
- Returns:
A poller to track the operation status
- Return type:
- begin_delete(name: str, **kwargs) LROPoller[None][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Delete a Marketplace Subscription.
- Parameters:
name (str) – Name of the marketplace subscription.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
- get(name: str, **kwargs) MarketplaceSubscription[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Get a Marketplace Subscription resource.
- Parameters:
name (str) – Name of the marketplace subscription.
- Returns:
Marketplace subscription object retrieved from the service.
- Return type:
- list(**kwargs) Iterable[MarketplaceSubscription][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
List marketplace subscriptions of the workspace.
- Returns:
A list of marketplace subscriptions
- Return type:
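Example:
List marketplace subscriptions (illustrative sketch; assumes the MLClient exposes these operations via the marketplace_subscriptions attribute).¶
for subscription in ml_client.marketplace_subscriptions.list():
    print(subscription.name)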
- class azure.ai.ml.operations.ModelOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces | AzureMachineLearningWorkspaces, datastore_operations: DatastoreOperations, all_operations: OperationsContainer | None = None, **kwargs)[source]¶
ModelOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- Parameters:
operation_scope (OperationScope) – Scope variables for the operations classes of an MLClient object.
operation_config (OperationConfig) – Common configuration for operations classes of an MLClient object.
service_client (Union[ azure.ai.ml._restclient.v2023_04_01_preview._azure_machine_learning_workspaces.AzureMachineLearningWorkspaces, azure.ai.ml._restclient.v2021_10_01_dataplanepreview._azure_machine_learning_workspaces. AzureMachineLearningWorkspaces]) – Service client to allow end users to operate on Azure Machine Learning Workspace resources (ServiceClient082023Preview or ServiceClient102021Dataplane).
datastore_operations (DatastoreOperations) – Represents a client for performing operations on Datastores.
all_operations (OperationsContainer) – All operations classes of an MLClient object.
- archive(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Archive a model asset.
- Parameters:
name (str) – Name of the model asset.
version (Optional[str]) – Version of the model asset. (mutually exclusive with label)
label (Optional[str]) – Label of the model asset. (mutually exclusive with version)
Example:
Archive a model.¶ml_client.models.archive(name="model1", version="5")
- create_or_update(model: Model | WorkspaceAssetReference) Model[source]¶
Returns created or updated model asset.
- Parameters:
model (Model) – Model asset object.
- Raises:
AssetPathException – Raised when the Model artifact path is already linked to another asset
ValidationException – Raised if Model cannot be successfully validated. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
- Returns:
Model asset object.
- Return type:
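Example:
Register a local model folder as a model asset (illustrative sketch; the model name, local path, and asset type are assumptions).¶
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes
model = Model(
    name="model1",
    path="./model_artifacts",      # local folder containing the model files
    type=AssetTypes.CUSTOM_MODEL,  # could also be an MLflow or Triton model type
    description="Example model registration",
)
registered_model = ml_client.models.create_or_update(model)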
- download(name: str, version: str, download_path: PathLike | str = '.') None[source]¶
Download files related to a model.
- Parameters:
name (str) – Name of the model.
version (str) – Version of the model.
download_path (Union[PathLike, str]) – The local path to use as the download destination. Defaults to ".".
- Raises:
ResourceNotFoundError – Raised if a model matching the provided name cannot be found.
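Example:
Download a model's files to a local directory (illustrative sketch; the model name and version are assumptions).¶
ml_client.models.download(name="model1", version="5", download_path="./downloaded-model")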
- get(name: str, version: str | None = None, label: str | None = None) Model[source]¶
Returns information about the specified model asset.
- Parameters:
name (str) – Name of the model.
version (Optional[str]) – Version of the model. (mutually exclusive with label)
label (Optional[str]) – Label of the model. (mutually exclusive with version)
- Raises:
ValidationException – Raised if Model cannot be successfully validated. Details will be provided in the error message.
- Returns:
Model asset object.
- Return type:
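Example:
Get a model asset by version or by label (illustrative sketch; the model name, version, and label value are assumptions).¶
model = ml_client.models.get(name="model1", version="5")
# Alternatively, fetch the version behind a label such as "latest".
latest_model = ml_client.models.get(name="model1", label="latest")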
- list(name: str | None = None, stage: str | None = None, *, list_view_type: ListViewType = ListViewType.ACTIVE_ONLY) Iterable[Model][source]¶
List all model assets in workspace.
- Parameters:
name (Optional[str]) – Name of the model.
stage (Optional[str]) – The stage of the model.
- Keyword Arguments:
list_view_type (ListViewType) – View type for including/excluding (for example) archived models. Defaults to ListViewType.ACTIVE_ONLY.
- Returns:
An iterator-like instance of Model objects.
- Return type:
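Example:
List all versions of a model asset (illustrative sketch; the model name is an assumption).¶
for model in ml_client.models.list(name="model1"):
    print(model.version)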
- package(name: str, version: str, package_request: ModelPackage, **kwargs: Any) Environment[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Package a model asset
- Parameters:
name (str) – Name of model asset.
version (str) – Version of model asset.
package_request (ModelPackage) – Model package request.
- Returns:
Environment object
- Return type:
- restore(name: str, version: str | None = None, label: str | None = None, **kwargs: Any) None[source]¶
Restore an archived model asset.
- Parameters:
name (str) – Name of the model asset.
version (Optional[str]) – Version of the model asset. (mutually exclusive with label)
label (Optional[str]) – Label of the model asset. (mutually exclusive with version)
Example:
Restore an archived model.¶ml_client.models.restore(name="model1", version="5")
- share(name: str, version: str, *, share_with_name: str, share_with_version: str, registry_name: str) Model[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Share a model asset from workspace to registry.
- Parameters:
name (str) – Name of the model asset.
version (str) – Version of the model asset.
- Keyword Arguments:
share_with_name (str) – Name of the model asset to share with.
share_with_version (str) – Version of the model asset to share with.
registry_name (str) – Name of the destination registry.
- Returns:
Model asset object.
- Return type:
- class azure.ai.ml.operations.OnlineDeploymentOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client_04_2023_preview: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, local_deployment_helper: _LocalDeploymentHelper, credentials: TokenCredential | None = None, **kwargs: Dict)[source]¶
OnlineDeploymentOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create_or_update(deployment: OnlineDeployment, *, local: bool = False, vscode_debug: bool = False, skip_script_validation: bool = False, local_enable_gpu: bool = False, **kwargs: Any) LROPoller[OnlineDeployment][source]¶
Create or update a deployment.
- Parameters:
deployment (OnlineDeployment) – the deployment entity
- Keyword Arguments:
local (bool) – Whether deployment should be created locally, defaults to False
vscode_debug (bool) – Whether to open VSCode instance to debug local deployment, defaults to False
skip_script_validation (bool) – Whether or not to skip validation of the deployment script. Defaults to False.
local_enable_gpu (bool) – Whether to enable the local container to access GPU. Defaults to False.
- Raises:
ValidationException – Raised if OnlineDeployment cannot be successfully validated. Details will be provided in the error message.
AssetException – Raised if OnlineDeployment assets (e.g. Data, Code, Model, Environment) cannot be successfully validated. Details will be provided in the error message.
ModelException – Raised if OnlineDeployment model cannot be successfully validated. Details will be provided in the error message.
DeploymentException – Raised if OnlineDeployment type is unsupported. Details will be provided in the error message.
LocalEndpointNotFoundError – Raised if local endpoint resource does not exist.
LocalEndpointInFailedStateError – Raised if local endpoint is in a failed state.
InvalidLocalEndpointError – Raised if Docker image cannot be found for local deployment.
LocalEndpointImageBuildError – Raised if Docker image cannot be successfully built for local deployment.
RequiredLocalArtifactsNotFoundError – Raised if local artifacts cannot be found for local deployment.
InvalidVSCodeRequestError – Raised if VS Debug is invoked with a remote endpoint. VSCode debug is only supported for local endpoints.
LocalDeploymentGPUNotAvailable – Raised if an NVIDIA GPU is not available in the system and local_enable_gpu is set for a local deployment.
VSCodeCommandNotFound – Raised if VSCode instance cannot be instantiated.
- Returns:
A poller to track the operation status
- Return type:
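Example:
Create a managed online deployment behind an existing endpoint (illustrative sketch; the endpoint name, model reference, and instance settings are assumptions).¶
from azure.ai.ml.entities import ManagedOnlineDeployment
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-online-endpoint",  # an existing online endpoint
    model="azureml:model1:5",            # reference to a registered model version
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
deployment = ml_client.online_deployments.begin_create_or_update(deployment).result()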
- begin_delete(name: str, endpoint_name: str, *, local: bool | None = False) LROPoller[None][source]¶
Delete a deployment.
- Parameters:
- Keyword Arguments:
local (Optional[bool]) – Whether deployment should be retrieved from local docker environment, defaults to False
- Raises:
LocalEndpointNotFoundError – Raised if local endpoint resource does not exist.
- Returns:
A poller to track the operation status
- Return type:
LROPoller[None]
- get(name: str, endpoint_name: str, *, local: bool | None = False) OnlineDeployment[source]¶
Get a deployment resource.
- Parameters:
- Keyword Arguments:
local (Optional[bool]) – Whether deployment should be retrieved from local docker environment, defaults to False
- Raises:
LocalEndpointNotFoundError – Raised if local endpoint resource does not exist.
- Returns:
a deployment entity
- Return type:
- get_logs(name: str, endpoint_name: str, lines: int, *, container_type: str | None = None, local: bool = False) str[source]¶
Retrieve the logs from an online deployment.
- Parameters:
name (str) – The name of the deployment.
endpoint_name (str) – The name of the endpoint.
lines (int) – The maximum number of lines to retrieve.
- Keyword Arguments:
container_type (Optional[str]) – The type of container to retrieve logs from. Possible values include: “StorageInitializer”, “InferenceServer”. Defaults to None.
local (bool) – Whether the logs should be retrieved from the local Docker deployment. Defaults to False.
- Returns:
The retrieved logs.
- Return type:
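Example:
Retrieve the most recent log lines from a deployment (illustrative sketch; the deployment and endpoint names are assumptions).¶
logs = ml_client.online_deployments.get_logs(name="blue", endpoint_name="my-online-endpoint", lines=50)
print(logs)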
- list(endpoint_name: str, *, local: bool = False) ItemPaged[OnlineDeployment][source]¶
List the deployment resources of an endpoint.
- Parameters:
endpoint_name (str) – The name of the endpoint
- Keyword Arguments:
local (bool) – Whether deployment should be retrieved from local docker environment, defaults to False
- Returns:
an iterator of deployment entities
- Return type:
Iterable[OnlineDeployment]
- class azure.ai.ml.operations.OnlineEndpointOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client_02_2022_preview: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, local_endpoint_helper: _LocalEndpointHelper, credentials: TokenCredential | None = None, **kwargs: Dict)[source]¶
OnlineEndpointOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create_or_update(endpoint: OnlineEndpoint, *, local: bool = False) LROPoller[OnlineEndpoint][source]¶
Create or update an endpoint.
- Parameters:
endpoint (OnlineEndpoint) – The endpoint entity.
- Keyword Arguments:
local (bool) – Whether to interact with the endpoint in local Docker environment. Defaults to False.
- Raises:
ValidationException – Raised if OnlineEndpoint cannot be successfully validated. Details will be provided in the error message.
AssetException – Raised if OnlineEndpoint assets (e.g. Data, Code, Model, Environment) cannot be successfully validated. Details will be provided in the error message.
ModelException – Raised if OnlineEndpoint model cannot be successfully validated. Details will be provided in the error message.
EmptyDirectoryError – Raised if local path provided points to an empty directory.
LocalEndpointNotFoundError – Raised if local endpoint resource does not exist.
- Returns:
A poller to track the operation status if remote, else returns None if local.
- Return type:
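Example:
Create a managed online endpoint with key-based auth (illustrative sketch; the endpoint name is an assumption).¶
from azure.ai.ml.entities import ManagedOnlineEndpoint
endpoint = ManagedOnlineEndpoint(name="my-online-endpoint", auth_mode="key")
endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()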
- begin_delete(name: str | None = None, *, local: bool = False) LROPoller[None][source]¶
Delete an Online Endpoint.
- Parameters:
name (str) – Name of the endpoint.
- Keyword Arguments:
local (bool) – Whether to interact with the endpoint in local Docker environment. Defaults to False.
- Raises:
LocalEndpointNotFoundError – Raised if local endpoint resource does not exist.
- Returns:
A poller to track the operation status if remote, else returns None if local.
- Return type:
LROPoller[None]
- begin_regenerate_keys(name: str, *, key_type: str = 'primary') LROPoller[None][source]¶
Regenerate keys for an endpoint.
- Parameters:
name (str) – The endpoint name.
- Keyword Arguments:
key_type (str) – One of “primary”, “secondary”. Defaults to “primary”.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
- get(name: str, *, local: bool = False) OnlineEndpoint[source]¶
Get an Endpoint resource.
- Parameters:
name (str) – Name of the endpoint.
- Keyword Arguments:
local (Optional[bool]) – Indicates whether to interact with endpoints in local Docker environment. Defaults to False.
- Raises:
LocalEndpointNotFoundError – Raised if local endpoint resource does not exist.
- Returns:
Endpoint object retrieved from the service.
- Return type:
- get_keys(name: str) EndpointAuthKeys | EndpointAuthToken | EndpointAadToken[source]¶
Get the auth credentials.
- Parameters:
name (str) – The endpoint name
- Raises:
Exception – Raised if the credentials for the online endpoint cannot be retrieved.
- Returns:
Depending on the auth mode of the endpoint, returns either keys or a token.
- Return type:
Union[EndpointAuthKeys, EndpointAuthToken, EndpointAadToken]
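Example:
Get the auth credentials for an online endpoint (illustrative sketch; the endpoint name is an assumption).¶
keys = ml_client.online_endpoints.get_keys(name="my-online-endpoint")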
- invoke(endpoint_name: str, *, request_file: str | None = None, deployment_name: str | None = None, input_data: str | Data | None = None, params_override: Any = None, local: bool = False, **kwargs: Any) str[source]¶
Invokes the endpoint with the provided payload.
- Parameters:
endpoint_name (str) – The endpoint name
- Keyword Arguments:
request_file (Optional[str]) – File containing the request payload. This is only valid for online endpoint.
deployment_name (Optional[str]) – Name of a specific deployment to invoke. This is optional. By default requests are routed to any of the deployments according to the traffic rules.
input_data (Optional[Union[str, Data]]) – To use a pre-registered data asset, pass str in format
params_override (Any) – A dictionary of payload parameters to override and their desired values.
local (Optional[bool]) – Indicates whether to interact with endpoints in local Docker environment. Defaults to False.
- Raises:
LocalEndpointNotFoundError – Raised if local endpoint resource does not exist.
MultipleLocalDeploymentsFoundError – Raised if there are multiple deployments and no deployment_name is specified.
InvalidLocalEndpointError – Raised if local endpoint is None.
- Returns:
Prediction output for online endpoint.
- Return type:
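Example:
Invoke an online endpoint with a JSON request file (illustrative sketch; the endpoint name, deployment name, and request file path are assumptions).¶
response = ml_client.online_endpoints.invoke(
    endpoint_name="my-online-endpoint",
    request_file="./sample-request.json",  # payload sent to the scoring endpoint
    deployment_name="blue",                # optional; omit to follow the traffic rules
)
print(response)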
- list(*, local: bool = False) ItemPaged[OnlineEndpoint][source]¶
List endpoints of the workspace.
- Keyword Arguments:
local (Optional[bool]) – Flag to indicate whether to interact with endpoints in the local Docker environment. Defaults to False.
- Returns:
A list of endpoints
- Return type:
- class azure.ai.ml.operations.RegistryOperations(operation_scope: OperationScope, service_client: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credentials: TokenCredential | None = None, **kwargs: Dict)[source]¶
RegistryOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create(registry: Registry, **kwargs: Dict) LROPoller[Registry][source]¶
Create a new Azure Machine Learning Registry, or try to update if it already exists.
Note: Due to service limitations we have to sleep for an additional 30~45 seconds AFTER the LRO Poller concludes before the registry will be consistently deleted from the perspective of subsequent operations. If a deletion is required for subsequent operations to work properly, callers should implement that logic until the service has been fixed to return a reliable LRO.
- Parameters:
registry (Registry) – Registry definition.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller
- begin_delete(*, name: str, **kwargs: Dict) LROPoller[None][source]¶
Delete a registry if it exists. Returns nothing on a successful operation.
- Keyword Arguments:
name (str) – Name of the registry
- Returns:
A poller to track the operation status.
- Return type:
LROPoller
- get(name: str | None = None) Registry[source]¶
Get a registry by name.
- Parameters:
name (str) – Name of the registry.
- Raises:
ValidationException – Raised if Registry name cannot be successfully validated. Details will be provided in the error message.
HttpResponseError – Raised if the corresponding name and version cannot be retrieved from the service.
- Returns:
The registry with the provided name.
- Return type:
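Example:
Get a registry by name (illustrative sketch; assumes the MLClient exposes these operations via the registries attribute).¶
registry = ml_client.registries.get(name="my-registry")
print(registry.name, registry.location)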
- class azure.ai.ml.operations.ScheduleOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client_06_2023_preview: AzureMachineLearningWorkspaces, service_client_01_2024_preview: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credential: TokenCredential, **kwargs: Any)[source]¶
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create_or_update(schedule: Schedule, **kwargs: Any) LROPoller[Schedule][source]¶
Create or update a schedule.
- Parameters:
schedule (Schedule) – The schedule entity.
- Returns:
An instance of LROPoller that returns Schedule.
- Return type:
LROPoller[Schedule]
- begin_delete(name: str, **kwargs: Any) LROPoller[None][source]¶
Delete schedule.
- Parameters:
name (str) – Schedule name.
- Returns:
A poller for deletion status
- Return type:
LROPoller[None]
- begin_disable(name: str, **kwargs: Any) LROPoller[Schedule][source]¶
Disable a schedule.
- Parameters:
name (str) – Schedule name.
- Returns:
An instance of LROPoller that returns Schedule.
- Return type:
LROPoller
- begin_enable(name: str, **kwargs: Any) LROPoller[Schedule][source]¶
Enable a schedule.
- Parameters:
name (str) – Schedule name.
- Returns:
An instance of LROPoller that returns Schedule
- Return type:
LROPoller
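Example:
Enable a schedule and wait for the operation to complete (illustrative sketch; the schedule name is an assumption).¶
schedule = ml_client.schedules.begin_enable(name="my-schedule").result()
print(schedule.is_enabled)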
- list(*, list_view_type: ScheduleListViewType = ScheduleListViewType.ENABLED_ONLY, **kwargs: Any) Iterable[Schedule][source]¶
List schedules in specified workspace.
- Keyword Arguments:
list_view_type (ScheduleListViewType) – View type for including/excluding (for example) archived schedules. Defaults to ENABLED_ONLY.
- Returns:
An iterator to list Schedule.
- Return type:
Iterable[Schedule]
- class azure.ai.ml.operations.ServerlessEndpointOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces, all_operations: OperationsContainer)[source]¶
ServerlessEndpointOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create_or_update(endpoint: ServerlessEndpoint, **kwargs) LROPoller[ServerlessEndpoint][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Create or update a serverless endpoint.
- Parameters:
endpoint (ServerlessEndpoint) – The serverless endpoint entity.
- Raises:
ValidationException – Raised if ServerlessEndpoint cannot be successfully validated. Details will be provided in the error message.
- Returns:
A poller to track the operation status
- Return type:
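Example:
Create a serverless endpoint for a model from a registry (illustrative sketch; the endpoint name and the model_id format are assumptions, and <model-name> is a placeholder).¶
from azure.ai.ml.entities import ServerlessEndpoint
endpoint = ServerlessEndpoint(
    name="my-serverless-endpoint",
    model_id="azureml://registries/azureml/models/<model-name>/versions/1",  # placeholder model asset ID
)
endpoint = ml_client.serverless_endpoints.begin_create_or_update(endpoint).result()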
- begin_delete(name: str, **kwargs) LROPoller[None][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Delete a Serverless Endpoint.
- Parameters:
name (str) – Name of the serverless endpoint.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
- begin_regenerate_keys(name: str, *, key_type: str = 'primary', **kwargs) LROPoller[EndpointAuthKeys][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Regenerate keys for a serverless endpoint.
- Parameters:
name (str) – The endpoint name.
- Keyword Arguments:
key_type (str) – One of “primary”, “secondary”. Defaults to “primary”.
- Raises:
ValidationException – Raised if key_type is not “primary” or “secondary”
- Returns:
A poller to track the operation status.
- Return type:
- get(name: str, **kwargs) ServerlessEndpoint[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Get a Serverless Endpoint resource.
- Parameters:
name (str) – Name of the serverless endpoint.
- Returns:
Serverless endpoint object retrieved from the service.
- Return type:
- get_keys(name: str, **kwargs) EndpointAuthKeys[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Get serverless endpoint auth keys.
- Parameters:
name (str) – The serverless endpoint name
- Returns:
Returns the keys of the serverless endpoint
- Return type:
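Example:
Get the auth keys for a serverless endpoint (illustrative sketch; the endpoint name is an assumption).¶
keys = ml_client.serverless_endpoints.get_keys(name="my-serverless-endpoint")
print(keys.primary_key)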
- list(**kwargs) Iterable[ServerlessEndpoint][source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
List serverless endpoints of the workspace.
- Returns:
A list of serverless endpoints
- Return type:
- class azure.ai.ml.operations.WorkspaceConnectionsOperations(operation_scope: OperationScope, operation_config: OperationConfig, service_client: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credentials: TokenCredential | None = None, **kwargs: Dict)[source]¶
WorkspaceConnectionsOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- create_or_update(workspace_connection: WorkspaceConnection, *, populate_secrets: bool = False, **kwargs: Any) WorkspaceConnection[source]¶
Create or update a connection.
- Parameters:
workspace_connection (WorkspaceConnection) – Definition of a Workspace Connection or one of its subclasses or object which can be translated to a connection.
- Keyword Arguments:
populate_secrets (bool) – If true, make a secondary API call to try filling in the connection's credentials. Currently this only works for API key-based credentials. Defaults to False.
- Returns:
Created or updated connection.
- Return type:
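Example:
Create a Git connection that authenticates with a personal access token (illustrative sketch; assumes the MLClient exposes these operations via the connections attribute; the connection type string, target URL, and credential class are assumptions, and the token is a placeholder).¶
from azure.ai.ml.entities import WorkspaceConnection, PatTokenConfiguration
connection = WorkspaceConnection(
    name="my-git-connection",
    type="git",
    target="https://github.com/contoso/sample-repo",
    credentials=PatTokenConfiguration(pat="<personal-access-token>"),  # placeholder token
)
created_connection = ml_client.connections.create_or_update(workspace_connection=connection)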
- delete(name: str, **kwargs: Any) None[source]¶
Delete the connection.
- Parameters:
name (str) – Name of the connection.
- get(name: str, *, populate_secrets: bool = False, **kwargs: Dict) WorkspaceConnection[source]¶
Get a connection by name.
- Parameters:
name (str) – Name of the connection.
- Keyword Arguments:
populate_secrets (bool) – If true, make a secondary API call to try filling in the connection's credentials. Currently this only works for API key-based credentials. Defaults to False.
- Raises:
HttpResponseError – Raised if the corresponding name and version cannot be retrieved from the service.
- Returns:
The connection with the provided name.
- Return type:
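Example:
Get a connection by name, including its credentials (illustrative sketch; the connection name is an assumption).¶
connection = ml_client.connections.get(name="my-git-connection", populate_secrets=True)
print(connection.type, connection.target)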
- list(connection_type: str | None = None, *, populate_secrets: bool = False, include_data_connections: bool = False, **kwargs: Any) Iterable[WorkspaceConnection][source]¶
List all connections for a workspace.
- Parameters:
connection_type (Optional[str]) – Type of connection to list.
- Keyword Arguments:
populate_secrets (bool) – If true, make a secondary API call to try filling in the credentials of each returned connection. Currently this only works for API key-based credentials. Defaults to False.
include_data_connections (bool) – If true, data connections are also returned. Defaults to False.
- Returns:
An iterator like instance of connection objects
- Return type:
Iterable[WorkspaceConnection]
- class azure.ai.ml.operations.WorkspaceOperations(operation_scope: OperationScope, service_client: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credentials: TokenCredential | None = None, **kwargs: Any)[source]¶
Handles workspaces and their subclasses, hubs and projects.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create(workspace: Workspace, update_dependent_resources: bool = False, **kwargs: Any) LROPoller[Workspace][source]¶
Create a new Azure Machine Learning Workspace.
Returns the workspace if it already exists.
- Parameters:
workspace (Workspace) – Workspace definition.
update_dependent_resources (boolean) – Whether to update dependent resources, defaults to False.
- Returns:
An instance of LROPoller that returns a Workspace.
- Return type:
Example:
Begin create for a workspace.¶from azure.ai.ml.entities import Workspace ws = Workspace( name="test-ws1", description="a test workspace", tags={"purpose": "demo"}, location="eastus", resource_group=resource_group, ) ws = ml_client.workspaces.begin_create(workspace=ws).result()
- begin_delete(name: str, *, delete_dependent_resources: bool, permanently_delete: bool = False, **kwargs: Dict) LROPoller[None][source]¶
Delete a workspace.
- Parameters:
name (str) – Name of the workspace
- Keyword Arguments:
delete_dependent_resources (bool) – Whether to delete resources associated with the workspace, i.e., container registry, storage account, key vault, application insights, log analytics. The default is False. Set to True to delete these resources.
permanently_delete (bool) – Workspaces are soft-deleted by default to allow recovery of workspace data. Set this flag to true to override the soft-delete behavior and permanently delete your workspace.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
Example:
Begin permanent (force) deletion for a workspace and delete dependent resources.¶ml_client.workspaces.begin_delete(name="test-ws", delete_dependent_resources=True, permanently_delete=True)
- begin_diagnose(name: str, **kwargs: Dict) LROPoller[DiagnoseResponseResultValue][source]¶
Diagnose workspace setup problems.
If your workspace is not working as expected, you can run this diagnosis to check whether the workspace is broken. For a private endpoint workspace, it also helps check whether the network setup of the workspace and its dependent resources has problems.
- Parameters:
name (str) – Name of the workspace
- Returns:
A poller to track the operation status.
- Return type:
Example:
Begin diagnose operation for a workspace.¶diagnose_result = ml_client.workspaces.begin_diagnose(name="test-ws1").result()
- begin_provision_network(*, workspace_name: str | None = None, include_spark: bool = False, **kwargs: Any) LROPoller[ManagedNetworkProvisionStatus][source]¶
Triggers the workspace to provision the managed network. Specifying spark enabled as true prepares the workspace managed network for supporting Spark.
- Keyword Arguments:
workspace_name (str) – Name of the workspace.
include_spark (bool) – Whether the workspace managed network should prepare to support Spark. Defaults to False.
- Returns:
An instance of LROPoller.
- Return type:
Example:
Begin provision network for a workspace with managed network.¶ml_client.workspaces.begin_provision_network(workspace_name="test-ws1", include_spark=False)
- begin_sync_keys(name: str | None = None) LROPoller[None][source]¶
Triggers the workspace to immediately synchronize keys. If keys for any resource in the workspace are changed, it can take around an hour for them to automatically be updated. This function enables keys to be updated upon request. An example scenario is needing immediate access to storage after regenerating storage keys.
- Parameters:
name (str) – Name of the workspace.
- Returns:
An instance of LROPoller that returns either None or the sync keys result.
- Return type:
LROPoller[None]
Example:
Begin sync keys for the workspace with the given name.¶ml_client.workspaces.begin_sync_keys(name="test-ws1")
- begin_update(workspace: Workspace, *, update_dependent_resources: bool = False, **kwargs: Any) LROPoller[Workspace][source]¶
Updates an Azure Machine Learning Workspace.
- Parameters:
workspace (Workspace) – Workspace definition.
- Keyword Arguments:
update_dependent_resources (boolean) – Whether to update dependent resources, defaults to False.
- Returns:
An instance of LROPoller that returns a Workspace.
- Return type:
Example:
Begin update for a workspace.¶ws = ml_client.workspaces.get(name="test-ws1") ws.description = "a different description" ws = ml_client.workspaces.begin_update(workspace=ws).result()
- get(name: str | None = None, **kwargs: Dict) Workspace | None[source]¶
Get a Workspace by name.
- Parameters:
name (str) – Name of the workspace.
- Returns:
The workspace with the provided name.
- Return type:
Example:
Get the workspace with the given name.¶workspace = ml_client.workspaces.get(name="test-ws1")
- get_keys(name: str | None = None) WorkspaceKeys | None[source]¶
Get WorkspaceKeys by workspace name.
- Parameters:
name (str) – Name of the workspace.
- Returns:
Keys of workspace dependent resources.
- Return type:
Example:
Get the workspace keys for the workspace with the given name.¶ws_keys = ml_client.workspaces.get_keys(name="test-ws1")
- list(*, scope: str = 'resource_group', filtered_kinds: str | List[str] | None = None) Iterable[Workspace][source]¶
List all Workspaces that the user has access to in the current resource group or subscription.
- Keyword Arguments:
scope (str) – Scope of the listing, “resource_group” or “subscription”. Defaults to “resource_group”.
filtered_kinds (Optional[Union[str, List[str]]]) – The kinds of workspaces to list. If not provided, all workspace varieties will be listed. Accepts either a single kind or a list of kinds. Valid kind options include: “default”, “project”, and “hub”.
- Returns:
An iterator like instance of Workspace objects
- Return type:
Example:
List the workspaces by resource group or subscription.¶from azure.ai.ml.constants import Scope # list workspaces in the resource group set in ml_client workspaces = ml_client.workspaces.list() workspaces = ml_client.workspaces.list(scope=Scope.RESOURCE_GROUP) # list workspaces in the subscription set in ml_client workspaces = ml_client.workspaces.list(scope=Scope.SUBSCRIPTION)
- class azure.ai.ml.operations.WorkspaceOutboundRuleOperations(operation_scope: OperationScope, service_client: AzureMachineLearningWorkspaces, all_operations: OperationsContainer, credentials: TokenCredential = None, **kwargs: Dict)[source]¶
WorkspaceOutboundRuleOperations.
You should not instantiate this class directly. Instead, you should create an MLClient instance that instantiates it for you and attaches it as an attribute.
- begin_create(workspace_name: str, rule: OutboundRule, **kwargs: Any) LROPoller[OutboundRule][source]¶
Create a Workspace OutboundRule.
- Parameters:
workspace_name (str) – Name of the workspace.
rule (OutboundRule) – OutboundRule definition (FqdnDestination, PrivateEndpointDestination, or ServiceTagDestination).
- Returns:
An instance of LROPoller that returns an OutboundRule.
- Return type:
Example:
Create an FQDN outbound rule for a workspace with the given name, similar can be done for PrivateEndpointDestination or ServiceTagDestination.¶from azure.ai.ml.entities import FqdnDestination fqdn_rule = FqdnDestination(name="rulename", destination="google.com") rule = ml_client.workspace_outbound_rules.begin_create(workspace_name="test-ws", rule=fqdn_rule).result()
- begin_remove(workspace_name: str, outbound_rule_name: str, **kwargs: Any) LROPoller[None][source]¶
Remove a Workspace OutboundRule.
- Parameters:
workspace_name (str) – Name of the workspace.
outbound_rule_name (str) – Name of the outbound rule to remove.
- Returns:
A poller to track the operation status.
- Return type:
LROPoller[None]
Example:
Remove the outbound rule for a workspace with the given name.¶ml_client.workspace_outbound_rules.begin_remove(workspace_name="test-ws", outbound_rule_name="rulename")
- begin_update(workspace_name: str, rule: OutboundRule, **kwargs: Any) LROPoller[OutboundRule][source]¶
Update a Workspace OutboundRule.
- Parameters:
workspace_name (str) – Name of the workspace.
rule (OutboundRule) – OutboundRule definition (FqdnDestination, PrivateEndpointDestination, or ServiceTagDestination).
- Returns:
An instance of LROPoller that returns an OutboundRule.
- Return type:
Example:
Update an FQDN outbound rule for a workspace with the given name, similar can be done for PrivateEndpointDestination or ServiceTagDestination.¶from azure.ai.ml.entities import FqdnDestination fqdn_rule = FqdnDestination(name="rulename", destination="linkedin.com") rule = ml_client.workspace_outbound_rules.begin_update(workspace_name="test-ws", rule=fqdn_rule).result()
- get(workspace_name: str, outbound_rule_name: str, **kwargs: Any) OutboundRule[source]¶
Get a workspace OutboundRule by name.
- Parameters:
workspace_name (str) – Name of the workspace.
outbound_rule_name (str) – Name of the outbound rule.
- Returns:
The OutboundRule with the provided name for the workspace.
- Return type:
Example:
Get the outbound rule for a workspace with the given name.¶rule = ml_client.workspace_outbound_rules.get(workspace_name="test-ws", outbound_rule_name="sample-rule")
- list(workspace_name: str, **kwargs: Any) Iterable[OutboundRule][source]¶
List Workspace OutboundRules.
- Parameters:
workspace_name (str) – Name of the workspace.
- Returns:
An Iterable of OutboundRule.
- Return type:
Iterable[OutboundRule]
Example:
List the outbound rule for a workspace with the given name.¶rules = ml_client.workspace_outbound_rules.list(workspace_name="test-ws")