azure.ai.ml package¶
- class azure.ai.ml.Input(*, type: str, path: str | None = None, mode: str | None = None, optional: bool | None = None, description: str | None = None, **kwargs: Any)[source]¶
- class azure.ai.ml.Input(*, type: Literal['number'] = 'number', default: float | None = None, min: float | None = None, max: float | None = None, optional: bool | None = None, description: str | None = None, **kwargs: Any)
- class azure.ai.ml.Input(*, type: Literal['integer'] = 'integer', default: int | None = None, min: int | None = None, max: int | None = None, optional: bool | None = None, description: str | None = None, **kwargs: Any)
- class azure.ai.ml.Input(*, type: Literal['string'] = 'string', default: str | None = None, optional: bool | None = None, description: str | None = None, path: str | None = None, **kwargs: Any)
- class azure.ai.ml.Input(*, type: Literal['boolean'] = 'boolean', default: bool | None = None, optional: bool | None = None, description: str | None = None, **kwargs: Any)
Initialize an Input object.
- Keyword Arguments:
type (str) – The type of the data input. Accepted values are ‘uri_folder’, ‘uri_file’, ‘mltable’, ‘mlflow_model’, ‘custom_model’, ‘integer’, ‘number’, ‘string’, and ‘boolean’. Defaults to ‘uri_folder’.
path (Optional[str]) – The path to the input data. Paths can be local paths, remote data uris, or a registered AzureML asset ID.
mode (Optional[str]) – The access mode of the data input. Accepted values are:
* ‘ro_mount’: Mount the data to the compute target as read-only
* ‘download’: Download the data to the compute target
* ‘direct’: Pass in the URI as a string to be accessed at runtime
path_on_compute (Optional[str]) – The access path of the data input on the compute target.
default (Union[str, int, float, bool]) – The default value of the input. If a default is set, the input data will be optional.
min (Union[int, float]) – The minimum value for the input. If a value smaller than the minimum is passed to the job, the job execution will fail.
max (Union[int, float]) – The maximum value for the input. If a value larger than the maximum is passed to a job, the job execution will fail.
optional (Optional[bool]) – Specifies if the input is optional.
description (Optional[str]) – Description of the input
datastore (str) – The datastore to upload local files to.
intellectual_property (IntellectualProperty) – Intellectual property for the input.
- Raises:
ValidationException – Raised if Input cannot be successfully validated. Details will be provided in the error message.
Example:
Creating a CommandJob with two inputs.¶

from azure.ai.ml import Input, Output
from azure.ai.ml.entities import CommandJob, CommandJobLimits

command_job = CommandJob(
    code="./src",
    command="python train.py --ss {search_space.ss}",
    inputs={
        "input1": Input(path="trial.csv", mode="ro_mount", description="trial input data"),
        "input_2": Input(
            path="azureml:list_data_v2_test:2", type="uri_folder", description="registered data asset"
        ),
    },
    outputs={"default": Output(path="./foo")},
    compute="trial",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    limits=CommandJobLimits(timeout=120),
)
Base class for the Input and Output classes.
This class is introduced to support literal output in the future.
- Parameters:
type (str) – The type of the Input/Output.
- class azure.ai.ml.MLClient(credential: TokenCredential, subscription_id: str | None = None, resource_group_name: str | None = None, workspace_name: str | None = None, registry_name: str | None = None, **kwargs: Any)[source]¶
A client class to interact with Azure ML services.
Use this client to manage Azure ML resources such as workspaces, jobs, models, and so on.
- Parameters:
credential (TokenCredential) – The credential to use for authentication.
subscription_id (Optional[str]) – The Azure subscription ID. Optional for registry assets only. Defaults to None.
resource_group_name (Optional[str]) – The Azure resource group. Optional for registry assets only. Defaults to None.
workspace_name (Optional[str]) – The workspace to use in the client. Optional only for operations that are not workspace-dependent. Defaults to None.
registry_name (Optional[str]) – The registry to use in the client. Optional only for operations that are not workspace-dependent. Defaults to None.
- Keyword Arguments:
show_progress (Optional[bool]) – Specifies whether or not to display progress bars for long-running operations (e.g. customers may consider setting this to False if not using this SDK in an interactive setup). Defaults to True.
enable_telemetry (Optional[bool]) – Specifies whether or not to enable telemetry. Will be overridden to False if not in a Jupyter Notebook. Defaults to True if in a Jupyter Notebook.
cloud (Optional[str]) – The cloud name to use. Defaults to “AzureCloud”.
- Raises:
ValueError – Raised if credential is None.
ValidationException – Raised if both workspace_name and registry_name are provided.
Example:
Creating the MLClient with Azure Identity credentials.¶

from azure.ai.ml import MLClient
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential

ml_client = MLClient(
    subscription_id=subscription_id,
    resource_group_name=resource_group,
    workspace_name=workspace_name,
    credential=DefaultAzureCredential(),
)
Example:
When using sovereign domains (i.e. any cloud other than AZURE_PUBLIC_CLOUD), you must pass in the cloud name in kwargs and you must use an authority with DefaultAzureCredential.¶

from azure.ai.ml import MLClient
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential

kwargs = {"cloud": "AzureChinaCloud"}
ml_client = MLClient(
    subscription_id=subscription_id,
    resource_group_name=resource_group,
    credential=DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_CHINA),
    **kwargs,
)
- begin_create_or_update(entity: R, **kwargs) LROPoller[R][source]¶
Creates or updates an Azure ML resource asynchronously.
- Parameters:
entity (Union[Workspace , Registry, Compute, OnlineDeployment , OnlineEndpoint, BatchDeployment , BatchEndpoint, Schedule]) – The resource to create or update.
- Returns:
The resource after create/update operation.
- Return type:
azure.core.polling.LROPoller[Union[Workspace , Registry, Compute, OnlineDeployment , OnlineEndpoint, BatchDeployment , BatchEndpoint, Schedule]]
- classmethod from_config(credential: TokenCredential, *, path: PathLike | str | None = None, file_name=None, **kwargs) MLClient[source]¶
Returns a client from an existing Azure Machine Learning Workspace using a file configuration.
This method provides a simple way to reuse the same workspace across multiple Python notebooks or projects. You can save a workspace’s Azure Resource Manager (ARM) properties in a JSON configuration file using this format:
{
    "subscription_id": "<subscription-id>",
    "resource_group": "<resource-group>",
    "workspace_name": "<workspace-name>"
}
Then, you can use this method to load the same workspace in different Python notebooks or projects without retyping the workspace ARM properties. Note that from_config accepts the same kwargs as the main MLClient constructor, such as cloud.
- Parameters:
credential (TokenCredential) – The credential object for the workspace.
- Keyword Arguments:
path (Optional[Union[os.PathLike, str]]) – The path to the configuration file or starting directory to search for the configuration file within. Defaults to None, indicating the current directory will be used.
file_name (Optional[str]) – The configuration file name to search for when path is a directory path. Defaults to “config.json”.
cloud (Optional[str]) – The cloud name to use. Defaults to “AzureCloud”.
- Raises:
ValidationException – Raised if “config.json”, or file_name if overridden, cannot be found in directory. Details will be provided in the error message.
- Returns:
The client for an existing Azure ML Workspace.
- Return type:
MLClient
Example:
Creating an MLClient from a file named “config.json” in directory “src”.¶

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient.from_config(
    credential=DefaultAzureCredential(), path="./sdk/ml/azure-ai-ml/samples/src"
)
Creating an MLClient from a file named “team_workspace_configuration.json” in the current directory.¶

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient.from_config(
    credential=DefaultAzureCredential(),
    file_name="./sdk/ml/azure-ai-ml/samples/team_workspace_configuration.json",
)
- property azure_openai_deployments: AzureOpenAIDeploymentOperations¶
Note
This is an experimental property, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A collection of Azure OpenAI deployment related operations.
- Returns:
Azure OpenAI deployment operations.
- Return type:
AzureOpenAIDeploymentOperations
- property batch_deployments: BatchDeploymentOperations¶
A collection of batch deployment related operations.
- Returns:
Batch Deployment operations.
- Return type:
BatchDeploymentOperations
- property batch_endpoints: BatchEndpointOperations¶
A collection of batch endpoint related operations.
- Returns:
Batch Endpoint operations
- Return type:
BatchEndpointOperations
- property capability_hosts: CapabilityHostsOperations¶
Note
This is an experimental property, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A collection of capability host related operations.
- Returns:
Capability host operations
- Return type:
CapabilityHostsOperations
- property components: ComponentOperations¶
A collection of component related operations.
- Returns:
Component operations.
- Return type:
ComponentOperations
- property compute: ComputeOperations¶
A collection of compute related operations.
- Returns:
Compute operations
- Return type:
ComputeOperations
- property connections: WorkspaceConnectionsOperations¶
A collection of connection related operations.
- Returns:
Connections operations
- Return type:
WorkspaceConnectionsOperations
- property data: DataOperations¶
A collection of data related operations.
- Returns:
Data operations.
- Return type:
DataOperations
- property datastores: DatastoreOperations¶
A collection of datastore related operations.
- Returns:
Datastore operations.
- Return type:
DatastoreOperations
- property environments: EnvironmentOperations¶
A collection of environment related operations.
- Returns:
Environment operations.
- Return type:
EnvironmentOperations
- property evaluators: EvaluatorOperations¶
Note
This is an experimental property, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A collection of evaluator related operations.
- Returns:
Evaluator operations
- Return type:
EvaluatorOperations
- property feature_sets: FeatureSetOperations¶
A collection of feature set related operations.
- Returns:
FeatureSet operations
- Return type:
FeatureSetOperations
- property feature_store_entities: FeatureStoreEntityOperations¶
A collection of feature store entity related operations.
- Returns:
FeatureStoreEntity operations
- Return type:
FeatureStoreEntityOperations
- property feature_stores: FeatureStoreOperations¶
A collection of feature store related operations.
- Returns:
FeatureStore operations
- Return type:
FeatureStoreOperations
- property indexes: IndexOperations¶
Note
This is an experimental property, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A collection of index related operations.
- Returns:
Index operations.
- Return type:
IndexOperations
- property jobs: JobOperations¶
A collection of job related operations.
- Returns:
Job operations
- Return type:
JobOperations
- property marketplace_subscriptions: MarketplaceSubscriptionOperations¶
Note
This is an experimental property, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A collection of marketplace subscription related operations.
- Returns:
Marketplace subscription operations.
- Return type:
MarketplaceSubscriptionOperations
- property models: ModelOperations¶
A collection of model related operations.
- Returns:
Model operations
- Return type:
ModelOperations
- property online_deployments: OnlineDeploymentOperations¶
A collection of online deployment related operations.
- Returns:
Online Deployment operations
- Return type:
OnlineDeploymentOperations
- property online_endpoints: OnlineEndpointOperations¶
A collection of online endpoint related operations.
- Returns:
Online Endpoint operations
- Return type:
OnlineEndpointOperations
- property registries: RegistryOperations¶
A collection of registry-related operations.
- Returns:
Registry operations
- Return type:
RegistryOperations
- property resource_group_name: str¶
Get the resource group name of an MLClient object.
- Returns:
An Azure resource group name.
- Return type:
str
- property schedules: ScheduleOperations¶
A collection of schedule related operations.
- Returns:
Schedule operations.
- Return type:
ScheduleOperations
- property serverless_endpoints: ServerlessEndpointOperations¶
Note
This is an experimental property, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
A collection of serverless endpoint related operations.
- Returns:
Serverless endpoint operations.
- Return type:
ServerlessEndpointOperations
- property subscription_id: str¶
Get the subscription ID of an MLClient object.
- Returns:
An Azure subscription ID.
- Return type:
str
- property workspace_name: str | None¶
The name of the workspace where workspace-dependent operations will be executed.
- Returns:
The name of the default workspace.
- Return type:
Optional[str]
- property workspace_outbound_rules: WorkspaceOutboundRuleOperations¶
A collection of workspace outbound rule related operations.
- Returns:
Workspace outbound rule operations
- Return type:
WorkspaceOutboundRuleOperations
- property workspaces: WorkspaceOperations¶
A collection of workspace-related operations. Also manages workspace sub-classes like projects and hubs.
- Returns:
Workspace operations
- Return type:
WorkspaceOperations
- class azure.ai.ml.MpiDistribution(*, process_count_per_instance: int | None = None, **kwargs: Any)[source]¶
MPI distribution configuration.
- Keyword Arguments:
process_count_per_instance (Optional[int]) – The number of processes per node.
- Variables:
type (str) – Specifies the type of distribution. Set automatically to “mpi” for this class.
Example:
Configuring a CommandComponent with an MpiDistribution.¶

from azure.ai.ml import MpiDistribution
from azure.ai.ml.entities import CommandComponent

component = CommandComponent(
    name="microsoftsamples_mpi",
    description="This is the MPI command component",
    inputs={
        "component_in_number": {"description": "A number", "type": "number", "default": 10.99},
        "component_in_path": {"description": "A path", "type": "uri_folder"},
    },
    outputs={"component_out_path": {"type": "uri_folder"}},
    command="echo Hello World & echo ${{inputs.component_in_number}} & echo ${{inputs.component_in_path}} "
    "& echo ${{outputs.component_out_path}}",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    distribution=MpiDistribution(
        process_count_per_instance=2,
    ),
    instance_count=2,
)
- class azure.ai.ml.Output(*, type: str, path: str | None = None, mode: str | None = None, description: str | None = None, **kwargs: Any)[source]¶
- class azure.ai.ml.Output(type: Literal['uri_file'] = 'uri_file', path: str | None = None, mode: str | None = None, description: str | None = None)
Define an output.
- Keyword Arguments:
type (str) – The type of the data output. Accepted values are ‘uri_folder’, ‘uri_file’, ‘mltable’, ‘mlflow_model’, ‘custom_model’, and user-defined types. Defaults to ‘uri_folder’.
path (Optional[str]) – The remote path where the output should be stored.
mode (Optional[str]) – The access mode of the data output. Accepted values are:
* ‘rw_mount’: Read-write mount the data
* ‘upload’: Upload the data from the compute target
* ‘direct’: Pass in the URI as a string
path_on_compute (Optional[str]) – The access path of the data output on the compute target.
description (Optional[str]) – The description of the output.
name (str) – The name to be used to register the output as a Data or Model asset. A name can be set without setting a version.
version (str) – The version used to register the output as a Data or Model asset. A version can be set only when name is set.
is_control (bool) – Determines whether the output is a control output.
early_available (bool) – Mark the output for early node orchestration.
intellectual_property (Union[ IntellectualProperty, dict]) – Intellectual property associated with the output. It can be an instance of IntellectualProperty or a dictionary that will be used to create an instance.
Example:
Creating a CommandJob with a folder output.¶

from azure.ai.ml import Input, Output
from azure.ai.ml.entities import CommandJob, CommandJobLimits

command_job = CommandJob(
    code="./src",
    command="python train.py --ss {search_space.ss}",
    inputs={
        "input1": Input(path="trial.csv", mode="ro_mount", description="trial input data"),
        "input_2": Input(
            path="azureml:list_data_v2_test:2", type="uri_folder", description="registered data asset"
        ),
    },
    outputs={"default": Output(path="./foo")},
    compute="trial",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    limits=CommandJobLimits(timeout=120),
)
- class azure.ai.ml.PyTorchDistribution(*, process_count_per_instance: int | None = None, **kwargs: Any)[source]¶
PyTorch distribution configuration.
- Keyword Arguments:
process_count_per_instance (Optional[int]) – The number of processes per node.
- Variables:
type (str) – Specifies the type of distribution. Set automatically to “pytorch” for this class.
Example:
Configuring a CommandComponent with a PyTorchDistribution.¶

from azure.ai.ml import PyTorchDistribution
from azure.ai.ml.entities import CommandComponent

component = CommandComponent(
    name="microsoftsamples_torch",
    description="This is the PyTorch command component",
    inputs={
        "component_in_number": {"description": "A number", "type": "number", "default": 10.99},
        "component_in_path": {"description": "A path", "type": "uri_folder"},
    },
    outputs={"component_out_path": {"type": "uri_folder"}},
    command="echo Hello World & echo ${{inputs.component_in_number}} & echo ${{inputs.component_in_path}} "
    "& echo ${{outputs.component_out_path}}",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    distribution=PyTorchDistribution(
        process_count_per_instance=2,
    ),
    instance_count=2,
)
- class azure.ai.ml.RayDistribution(*, port: int | None = None, address: str | None = None, include_dashboard: bool | None = None, dashboard_port: int | None = None, head_node_additional_args: str | None = None, worker_node_additional_args: str | None = None, **kwargs: Any)[source]¶
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Ray distribution configuration.
- Variables:
port (int) – The port of the head ray process.
address (str) – The address of Ray head node.
include_dashboard (bool) – Provide this argument to start the Ray dashboard GUI.
dashboard_port (int) – The port to bind the dashboard server to.
head_node_additional_args (str) – Additional arguments passed to ray start in head node.
worker_node_additional_args (str) – Additional arguments passed to ray start in worker node.
type (str) – Specifies the type of distribution. Set automatically to “Ray” for this class.
- class azure.ai.ml.TensorFlowDistribution(*, parameter_server_count: int | None = 0, worker_count: int | None = None, **kwargs: Any)[source]¶
TensorFlow distribution configuration.
- Keyword Arguments:
parameter_server_count (Optional[int]) – The number of parameter server tasks. Defaults to 0.
worker_count (Optional[int]) – The number of workers. If not specified, defaults to the instance count.
- Variables:
type (str) – Specifies the type of distribution. Set automatically to “tensorflow” for this class.
Example:
Configuring a CommandComponent with a TensorFlowDistribution.¶

from azure.ai.ml import TensorFlowDistribution
from azure.ai.ml.entities import CommandComponent

component = CommandComponent(
    name="microsoftsamples_tf",
    description="This is the TF command component",
    inputs={
        "component_in_number": {"description": "A number", "type": "number", "default": 10.99},
        "component_in_path": {"description": "A path", "type": "uri_folder"},
    },
    outputs={"component_out_path": {"type": "uri_folder"}},
    command="echo Hello World & echo ${{inputs.component_in_number}} & echo ${{inputs.component_in_path}} "
    "& echo ${{outputs.component_out_path}}",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    distribution=TensorFlowDistribution(
        parameter_server_count=1,
        worker_count=2,
    ),
    instance_count=2,
)
- azure.ai.ml.command(*, name: str | None = None, description: str | None = None, tags: Dict | None = None, properties: Dict | None = None, display_name: str | None = None, command: str | None = None, experiment_name: str | None = None, environment: str | Environment | None = None, environment_variables: Dict | None = None, distribution: Dict | MpiDistribution | TensorFlowDistribution | PyTorchDistribution | RayDistribution | DistributionConfiguration | None = None, compute: str | None = None, inputs: Dict | None = None, outputs: Dict | None = None, instance_count: int | None = None, instance_type: str | None = None, locations: List[str] | None = None, docker_args: str | None = None, shm_size: str | None = None, timeout: int | None = None, code: PathLike | str | None = None, identity: ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, is_deterministic: bool = True, services: Dict[str, JobService | JupyterLabJobService | SshJobService | TensorBoardJobService | VsCodeJobService] | None = None, job_tier: str | None = None, priority: str | None = None, **kwargs: Any) Command[source]¶
Creates a Command object which can be used inside a dsl.pipeline function or used as a standalone Command job.
- Keyword Arguments:
name (Optional[str]) – The name of the Command job or component.
description (Optional[str]) – The description of the Command. Defaults to None.
tags (Optional[dict[str, str]]) – Tag dictionary. Tags can be added, removed, and updated. Defaults to None.
properties (Optional[dict[str, str]]) – The job property dictionary. Defaults to None.
display_name (Optional[str]) – The display name of the job. Defaults to a randomly generated name.
command (Optional[str]) – The command to be executed. Defaults to None.
experiment_name (Optional[str]) – The name of the experiment that the job will be created under. Defaults to current directory name.
environment (Optional[Union[str, Environment]]) – The environment that the job will run in.
environment_variables (Optional[dict[str, str]]) – A dictionary of environment variable names and values. These environment variables are set on the process where user script is being executed. Defaults to None.
distribution (Optional[Union[dict, PyTorchDistribution, MpiDistribution, TensorFlowDistribution, RayDistribution]]) – The configuration for distributed jobs. Defaults to None.
compute (Optional[str]) – The compute target the job will run on. Defaults to default compute.
inputs (Optional[dict[str, Union[Input, str, bool, int, float, Enum]]]) – A mapping of input names to input data sources used in the job. Defaults to None.
outputs (Optional[dict[str, Union[str, Output]]]) – A mapping of output names to output data sources used in the job. Defaults to None.
instance_count (Optional[int]) – The number of instances or nodes to be used by the compute target. Defaults to 1.
instance_type (Optional[str]) – The type of VM to be used by the compute target.
locations (Optional[List[str]]) – The list of locations where the job will run.
docker_args (Optional[str]) – Extra arguments to pass to the Docker run command. This would override any parameters that have already been set by the system, or in this section. This parameter is only supported for Azure ML compute types. Defaults to None.
shm_size (Optional[str]) – The size of the Docker container’s shared memory block. This should be in the format of (number)(unit) where the number has to be greater than 0 and the unit can be one of b(bytes), k(kilobytes), m(megabytes), or g(gigabytes).
timeout (Optional[int]) – The number, in seconds, after which the job will be cancelled.
code (Optional[Union[str, os.PathLike]]) – The source code to run the job. Can be a local path or “http:”, “https:”, or “azureml:” url pointing to a remote location.
identity (Optional[Union[ ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – The identity that the command job will use while running on compute.
is_deterministic (bool) – Specifies whether the Command will return the same output given the same input. Defaults to True. When True, if a Command Component is deterministic and has been run before in the current workspace with the same input and settings, it will reuse results from a previously submitted job when used as a node or step in a pipeline. In that scenario, no compute resources will be used.
services (Optional[dict[str, Union[JobService, JupyterLabJobService, SshJobService, TensorBoardJobService, VsCodeJobService]]]) – The interactive services for the node. Defaults to None. This is an experimental parameter, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
job_tier (Optional[str]) – The job tier. Accepted values are “Spot”, “Basic”, “Standard”, or “Premium”.
priority (Optional[str]) – The priority of the job on the compute. Accepted values are “low”, “medium”, and “high”. Defaults to “medium”.
- Returns:
A Command object.
- Return type:
Command
Example:
Creating a Command Job using the command() builder method.¶

from azure.ai.ml import Input, Output, command

train_func = command(
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:33",
    command='echo "hello world"',
    distribution={"type": "Pytorch", "process_count_per_instance": 2},
    inputs={
        "training_data": Input(type="uri_folder"),
        "max_epochs": 20,
        "learning_rate": 1.8,
        "learning_rate_schedule": "time-based",
    },
    outputs={"model_output": Output(type="uri_folder")},
)
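The shm_size format above, (number)(unit) with a number greater than 0 and a unit of b, k, m, or g, can be checked with a small helper; this validator is a hypothetical illustration, not part of the SDK:

```python
import re

def is_valid_shm_size(value: str) -> bool:
    # A positive number followed by one of b (bytes), k (kilobytes),
    # m (megabytes), or g (gigabytes), e.g. "2g" or "512m".
    match = re.fullmatch(r"(\d+(?:\.\d+)?)([bkmg])", value)
    return match is not None and float(match.group(1)) > 0

print(is_valid_shm_size("2g"))   # True
print(is_valid_shm_size("0m"))   # False: the number must be greater than 0
```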
- azure.ai.ml.load_batch_deployment(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) BatchDeployment[source]¶
Construct a batch deployment object from yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a batch deployment object. Must be either a path to a local file, or an already-open file. If the source is a path, it will be opened and read. An exception is raised if the file does not exist. If the source is an open file, the file will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Returns:
Constructed batch deployment object.
- Return type:
BatchDeployment
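Conceptually, each dict in params_override overlays one field on top of the parsed YAML mapping. A rough sketch of the overlay (illustrative only, not the SDK's actual implementation):

```python
def apply_overrides(config: dict, params_override: list) -> dict:
    """Overlay each {field: value} dict from params_override onto the parsed YAML."""
    merged = dict(config)
    for override in params_override:
        merged.update(override)
    return merged

parsed_yaml = {"name": "batch-dep", "endpoint_name": "my-endpoint", "version": "1"}
print(apply_overrides(parsed_yaml, [{"version": "2"}]))
# {'name': 'batch-dep', 'endpoint_name': 'my-endpoint', 'version': '2'}
```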
- azure.ai.ml.load_batch_endpoint(source: str | PathLike | IO, relative_origin: str | None = None, *, params_override: List[Dict] | None = None, **kwargs: Any) BatchEndpoint[source]¶
Construct a batch endpoint object from yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a batch endpoint object. Must be either a path to a local file, or an already-open file. If the source is a path, it will be opened and read. An exception is raised if the file does not exist. If the source is an open file, the file will be read directly, and an exception is raised if the file is not readable.
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
- Keyword Arguments:
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Returns:
Constructed batch endpoint object.
- Return type:
BatchEndpoint
- azure.ai.ml.load_capability_host(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) CapabilityHost[source]¶
Constructs a CapabilityHost object from a YAML file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – A path to a local YAML file or an already-open file object containing a capabilityhost configuration. If the source is a path, it will be opened and read. If the source is an open file, the file will be read directly.
- Keyword Arguments:
relative_origin (Optional[str]) – The root directory for the YAML. This directory will be used as the origin for deducing the relative locations of files referenced in the parsed YAML. Defaults to the same directory as source if source is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if CapabilityHost cannot be successfully validated. Details will be provided in the error message.
- Returns:
Loaded CapabilityHost object.
- Return type:
CapabilityHost
Example:
Loading a capabilityhost from a YAML config file.¶

from azure.ai.ml import load_capability_host

capability_host = load_capability_host(
    source="./sdk/ml/azure-ai-ml/tests/test_configs/workspace/ai_workspaces/test_capability_host_hub.yml"
)
- azure.ai.ml.load_component(source: PathLike | str | IO | None = None, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) CommandComponent | ParallelComponent | PipelineComponent[source]¶
Load component from local or remote to a component function.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a component. Must be either a path to a local file, or an already-open file. If the source is a path, it will be opened and read. An exception is raised if the file does not exist. If the source is an open file, the file will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Returns:
A Component object
- Return type:
Union[CommandComponent, ParallelComponent, PipelineComponent]
Example:
Loading a Component object from a YAML file, overriding its version to “1.0.2”, and registering it remotely.¶

from azure.ai.ml import load_component

component = load_component(
    source="./sdk/ml/azure-ai-ml/tests/test_configs/components/helloworld_component.yml",
    params_override=[{"version": "1.0.2"}],
)
registered_component = ml_client.components.create_or_update(component)
- azure.ai.ml.load_compute(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict[str, str]] | None = None, **kwargs: Any) Compute[source]¶
Construct a compute object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a compute. Must be either a path to a local file, or an already-open file. If the source is a path, it will be opened and read. An exception is raised if the file does not exist. If the source is an open file, the file will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (Optional[str]) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (Optional[List[Dict[str, str]]) – Optional parameters to override in the loaded yaml.
- Returns:
Loaded compute object.
- Return type:
Compute
Example:
Loading a Compute object from a YAML file and overriding its description.¶

from azure.ai.ml import load_compute

compute = load_compute(
    "../tests/test_configs/compute/compute-vm.yaml",
    params_override=[{"description": "loaded from compute-vm.yaml"}],
)
- azure.ai.ml.load_connection(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) WorkspaceConnection[source]¶
Construct a connection object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a connection object. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Returns:
Constructed connection object.
- Return type:
Connection
- azure.ai.ml.load_data(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Data[source]¶
Construct a data object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a data object. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Data cannot be successfully validated. Details will be provided in the error message.
- Returns:
Constructed Data or DataImport object.
- Return type:
Union[Data, DataImport]
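The source contract these loaders share (a path is opened and read, an already-open file is read directly) can be sketched in plain Python. This is an illustration of the documented behavior, not the SDK's implementation; the helper name is hypothetical:

```python
# Hypothetical sketch of the documented `source` contract shared by
# the load_* functions: a path input is opened and read (raising
# FileNotFoundError if it does not exist), while an already-open
# stream is read directly (raising if it is not readable).
import io
import os
from typing import IO, Union


def read_source(source: Union[str, os.PathLike, IO]) -> str:
    if isinstance(source, (str, os.PathLike)):
        # Path input: open() raises if the file does not exist.
        with open(source, "r", encoding="utf-8") as f:
            return f.read()
    # Stream input: read directly from the open file object.
    return source.read()


# Both forms yield the raw YAML text that is then parsed and validated.
yaml_text = read_source(io.StringIO("name: my-data\nversion: '1'\n"))
```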
- azure.ai.ml.load_datastore(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Datastore[source]¶
Construct a datastore object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a datastore. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Datastore cannot be successfully validated. Details will be provided in the error message.
- Returns:
Loaded datastore object.
- Return type:
Datastore
- azure.ai.ml.load_environment(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Environment[source]¶
Construct an environment object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of an environment. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Environment cannot be successfully validated. Details will be provided in the error message.
- Returns:
Constructed environment object.
- Return type:
Environment
- azure.ai.ml.load_feature_set(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) FeatureSet[source]¶
Construct a FeatureSet object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a FeatureSet object. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if FeatureSet cannot be successfully validated. Details will be provided in the error message.
- Returns:
Constructed FeatureSet object.
- Return type:
FeatureSet
- azure.ai.ml.load_feature_store(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) FeatureStore[source]¶
Load a feature store object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a feature store. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Returns:
Loaded feature store object.
- Return type:
FeatureStore
- azure.ai.ml.load_feature_store_entity(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) FeatureStoreEntity[source]¶
Construct a FeatureStoreEntity object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a FeatureStoreEntity object. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if FeatureStoreEntity cannot be successfully validated. Details will be provided in the error message.
- Returns:
Constructed FeatureStoreEntity object.
- Return type:
FeatureStoreEntity
- azure.ai.ml.load_index(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Index[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Constructs an Index object from a YAML file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – A path to a local YAML file or an already-open file object containing an index configuration. If the source is a path, it will be opened and read. If the source is an open file, the file will be read directly.
- Keyword Arguments:
relative_origin (Optional[str]) – The root directory for the YAML. This directory will be used as the origin for deducing the relative locations of files referenced in the parsed YAML. Defaults to the same directory as source if source is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Index cannot be successfully validated. Details will be provided in the error message.
- Returns:
A loaded Index object.
- Return type:
Index
- azure.ai.ml.load_job(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Job[source]¶
Constructs a Job object from a YAML file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – A path to a local YAML file or an already-open file object containing a job configuration. If the source is a path, it will be opened and read. If the source is an open file, the file will be read directly.
- Keyword Arguments:
relative_origin (Optional[str]) – The root directory for the YAML. This directory will be used as the origin for deducing the relative locations of files referenced in the parsed YAML. Defaults to the same directory as source if source is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Job cannot be successfully validated. Details will be provided in the error message.
- Returns:
A loaded Job object.
- Return type:
Job
Example:
Loading a Job from a YAML config file.

    from azure.ai.ml import load_job

    job = load_job(source="./sdk/ml/azure-ai-ml/tests/test_configs/command_job/command_job_test_local_env.yml")
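The relative_origin default these loaders describe (the source file's directory for path inputs, "./" for a stream with no name value) can be sketched as follows; the helper is hypothetical and only illustrates the documented defaulting rule:

```python
# Hypothetical sketch of the documented relative_origin default:
# files referenced inside the YAML resolve against the source file's
# directory for path inputs, and against "./" for a stream input
# that carries no name value.
import io
import pathlib
from typing import IO, Union


def default_relative_origin(source: Union[str, IO]) -> str:
    if isinstance(source, str):
        return str(pathlib.PurePosixPath(source).parent)
    # Streams opened from a file carry a `name` attribute;
    # in-memory streams such as io.StringIO do not.
    name = getattr(source, "name", None)
    return str(pathlib.PurePosixPath(name).parent) if name else "./"


origin = default_relative_origin("./configs/command_job/job.yml")
```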
- azure.ai.ml.load_marketplace_subscription(source: str | PathLike | IO, *, relative_origin: str | None = None, **kwargs: Any) MarketplaceSubscription[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Constructs a MarketplaceSubscription object from a YAML file.
- azure.ai.ml.load_model(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Model[source]¶
Constructs a Model object from a YAML file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – A path to a local YAML file or an already-open file object containing a model configuration. If the source is a path, it will be opened and read. If the source is an open file, the file will be read directly.
- Keyword Arguments:
relative_origin (Optional[str]) – The root directory for the YAML. This directory will be used as the origin for deducing the relative locations of files referenced in the parsed YAML. Defaults to the same directory as source if source is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Model cannot be successfully validated. Details will be provided in the error message.
- Returns:
A loaded Model object.
- Return type:
Model
Example:
Loading a Model from a YAML config file, overriding the name and version parameters.

    from azure.ai.ml import load_model

    model = load_model(
        source="./sdk/ml/azure-ai-ml/tests/test_configs/model/model_with_stage.yml",
        params_override=[{"name": "new_model_name"}, {"version": "1"}],
    )
- azure.ai.ml.load_model_package(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) ModelPackage[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Constructs a ModelPackage object from a YAML file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – A path to a local YAML file or an already-open file object containing a model package configuration. If the source is a path, it will be opened and read. If the source is an open file, the file will be read directly.
- Keyword Arguments:
relative_origin (Optional[str]) – The root directory for the YAML. This directory will be used as the origin for deducing the relative locations of files referenced in the parsed YAML. Defaults to the same directory as source if source is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if ModelPackage cannot be successfully validated. Details will be provided in the error message.
- Returns:
A loaded ModelPackage object.
- Return type:
ModelPackage
Example:
Loading a ModelPackage from a YAML config file.

    from azure.ai.ml import load_model_package

    model_package = load_model_package(
        "./sdk/ml/azure-ai-ml/tests/test_configs/model_package/model_package_simple.yml"
    )
- azure.ai.ml.load_online_deployment(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) OnlineDeployment[source]¶
Construct an online deployment object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of an online deployment object. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Online Deployment cannot be successfully validated. Details will be provided in the error message.
- Returns:
Constructed online deployment object.
- Return type:
OnlineDeployment
- azure.ai.ml.load_online_endpoint(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) OnlineEndpoint[source]¶
Construct an online endpoint object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of an online endpoint object. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Raises:
ValidationException – Raised if Online Endpoint cannot be successfully validated. Details will be provided in the error message.
- Returns:
Constructed online endpoint object.
- Return type:
OnlineEndpoint
- azure.ai.ml.load_registry(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Registry[source]¶
Load a registry object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a registry. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Returns:
Loaded registry object.
- Return type:
Registry
- azure.ai.ml.load_serverless_endpoint(source: str | PathLike | IO, *, relative_origin: str | None = None, **kwargs: Any) ServerlessEndpoint[source]¶
Note
This is an experimental method, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Constructs a ServerlessEndpoint object from a YAML file.
- azure.ai.ml.load_workspace(source: str | PathLike | IO, *, relative_origin: str | None = None, params_override: List[Dict] | None = None, **kwargs: Any) Workspace[source]¶
Load a workspace object from a yaml file. This includes workspace sub-classes like hubs and projects.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a workspace. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]
- Returns:
Loaded workspace object.
- Return type:
Workspace
Example:
Loading a Workspace from a YAML config file.

    from azure.ai.ml import load_workspace

    ws = load_workspace(
        "../tests/test_configs/workspace/workspace_min.yaml",
        params_override=[{"description": "loaded from workspace_min.yaml"}],
    )
- azure.ai.ml.load_workspace_connection(source: str | PathLike | IO, *, relative_origin: str | None = None, **kwargs: Any) WorkspaceConnection[source]¶
Deprecated: use load_connection instead. Construct a connection object from a yaml file.
- Parameters:
source (Union[PathLike, str, io.TextIOWrapper]) – The local yaml source of a connection object. Must be either a path to a local file or an already-open file. If the source is a path, it will be opened and read; an exception is raised if the file does not exist. If the source is an open file, it will be read directly, and an exception is raised if the file is not readable.
- Keyword Arguments:
relative_origin (str) – The origin to be used when deducing the relative locations of files referenced in the parsed yaml. Defaults to the inputted source’s directory if it is a file or file path input. Defaults to “./” if the source is a stream input with no name value.
- Returns:
Constructed connection object.
- Return type:
Connection
- azure.ai.ml.spark(*, experiment_name: str | None = None, name: str | None = None, display_name: str | None = None, description: str | None = None, tags: Dict | None = None, code: PathLike | str | None = None, entry: Dict[str, str] | SparkJobEntry | None = None, py_files: List[str] | None = None, jars: List[str] | None = None, files: List[str] | None = None, archives: List[str] | None = None, identity: Dict[str, str] | ManagedIdentity | AmlToken | UserIdentity | None = None, driver_cores: int | None = None, driver_memory: str | None = None, executor_cores: int | None = None, executor_memory: str | None = None, executor_instances: int | None = None, dynamic_allocation_enabled: bool | None = None, dynamic_allocation_min_executors: int | None = None, dynamic_allocation_max_executors: int | None = None, conf: Dict[str, str] | None = None, environment: str | Environment | None = None, inputs: Dict | None = None, outputs: Dict | None = None, args: str | None = None, compute: str | None = None, resources: Dict | SparkResourceConfiguration | None = None, **kwargs: Any) Spark[source]¶
Creates a Spark object that can be used inside a dsl.pipeline function or run as a standalone Spark job.
- Keyword Arguments:
experiment_name (Optional[str]) – The name of the experiment the job will be created under.
name (Optional[str]) – The name of the job.
display_name (Optional[str]) – The job display name.
description (Optional[str]) – The description of the job. Defaults to None.
tags (Optional[dict[str, str]]) – The dictionary of tags for the job. Tags can be added, removed, and updated. Defaults to None.
code (Optional[Union[str, os.PathLike]]) – The source code to run the job. Can be a local path or an “http:”, “https:”, or “azureml:” url pointing to a remote location.
entry (Optional[Union[dict[str, str], SparkJobEntry]]) – The file or class entry point.
py_files (Optional[List[str]]) – The list of .zip, .egg or .py files to place on the PYTHONPATH for Python apps. Defaults to None.
jars (Optional[List[str]]) – The list of .JAR files to include on the driver and executor classpaths. Defaults to None.
files (Optional[List[str]]) – The list of files to be placed in the working directory of each executor. Defaults to None.
archives (Optional[List[str]]) – The list of archives to be extracted into the working directory of each executor. Defaults to None.
identity (Optional[Union[dict[str, str], ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]]) – The identity that the Spark job will use while running on compute.
driver_cores (Optional[int]) – The number of cores to use for the driver process, only in cluster mode.
driver_memory (Optional[str]) – The amount of memory to use for the driver process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_cores (Optional[int]) – The number of cores to use on each executor.
executor_memory (Optional[str]) – The amount of memory to use per executor process, formatted as strings with a size unit suffix (“k”, “m”, “g” or “t”) (e.g. “512m”, “2g”).
executor_instances (Optional[int]) – The initial number of executors.
dynamic_allocation_enabled (Optional[bool]) – Whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload.
dynamic_allocation_min_executors (Optional[int]) – The lower bound for the number of executors if dynamic allocation is enabled.
dynamic_allocation_max_executors (Optional[int]) – The upper bound for the number of executors if dynamic allocation is enabled.
conf (Optional[dict[str, str]]) – A dictionary of pre-defined Spark configuration keys and values. Defaults to None.
environment (Optional[Union[str, Environment]]) – The Azure ML environment to run the job in.
inputs (Optional[dict[str, Input]]) – A mapping of input names to input data used in the job. Defaults to None.
outputs (Optional[dict[str, Output]]) – A mapping of output names to output data used in the job. Defaults to None.
args (Optional[str]) – The arguments for the job.
compute (Optional[str]) – The compute resource the job runs on.
resources (Optional[Union[dict, SparkResourceConfiguration]]) – The compute resource configuration for the job.
- Returns:
A Spark object.
- Return type:
Spark
Example:
Configuring a SparkJob.

    from azure.ai.ml import Input, Output, spark
    from azure.ai.ml.entities import ManagedIdentityConfiguration

    node = spark(
        experiment_name="builder-spark-experiment-name",
        description="simply spark description",
        code="./sdk/ml/azure-ai-ml/tests/test_configs/spark_job/basic_spark_job/src",
        entry={"file": "./main.py"},
        jars=["simple-1.1.1.jar"],
        driver_cores=1,
        driver_memory="2g",
        executor_cores=2,
        executor_memory="2g",
        executor_instances=2,
        dynamic_allocation_enabled=True,
        dynamic_allocation_min_executors=1,
        dynamic_allocation_max_executors=3,
        identity=ManagedIdentityConfiguration(),
        inputs={
            "input1": Input(
                type="uri_file",
                path="azureml://datastores/workspaceblobstore/paths/python/data.csv",
                mode="direct",
            )
        },
        outputs={
            "output1": Output(
                type="uri_file",
                path="azureml://datastores/workspaceblobstore/spark_titanic_output/titanic.parquet",
                mode="direct",
            )
        },
        args="--input1 ${{inputs.input1}} --output1 ${{outputs.output1}} --my_sample_rate 0.01",
        resources={
            "instance_type": "Standard_E8S_V3",
            "runtime_version": "3.3.0",
        },
    )
Example:
Configuring a SparkJob.

    node = spark(
        code="./sdk/ml/azure-ai-ml/tests/test_configs/spark_job/basic_spark_job/src",
        entry={"file": "./main.py"},
        driver_cores=1,
        driver_memory="2g",
        executor_cores=2,
        executor_memory="2g",
        executor_instances=2,
        resources={
            "instance_type": "Standard_E8S_V3",
            "runtime_version": "3.3.0",
        },
        identity={"type": "managed"},
    )
Example:
Building a Spark pipeline using the DSL pipeline decorator.

    from azure.ai.ml import Input, Output, dsl, spark
    from azure.ai.ml.constants import AssetTypes, InputOutputModes

    # define the spark tasks
    first_step = spark(
        code="/src",
        entry={"file": "add_greeting_column.py"},
        py_files=["utils.zip"],
        files=["my_files.txt"],
        driver_cores=2,
        driver_memory="1g",
        executor_cores=1,
        executor_memory="1g",
        executor_instances=1,
        inputs=dict(
            file_input=Input(path="/dataset/iris.csv", type=AssetTypes.URI_FILE, mode=InputOutputModes.DIRECT)
        ),
        args="--file_input ${{inputs.file_input}}",
        resources={"instance_type": "standard_e4s_v3", "runtime_version": "3.3.0"},
    )

    second_step = spark(
        code="/src",
        entry={"file": "count_by_row.py"},
        jars=["scala_project.jar"],
        files=["my_files.txt"],
        driver_cores=2,
        driver_memory="1g",
        executor_cores=1,
        executor_memory="1g",
        executor_instances=1,
        inputs=dict(
            file_input=Input(path="/dataset/iris.csv", type=AssetTypes.URI_FILE, mode=InputOutputModes.DIRECT)
        ),
        outputs=dict(output=Output(type="uri_folder", mode=InputOutputModes.DIRECT)),
        args="--file_input ${{inputs.file_input}} --output ${{outputs.output}}",
        resources={"instance_type": "standard_e4s_v3", "runtime_version": "3.3.0"},
    )

    # define the pipeline
    @dsl.pipeline(description="submit a pipeline with spark job")
    def spark_pipeline_from_builder(data):
        add_greeting_column = first_step(file_input=data)
        count_by_row = second_step(file_input=data)
        return {"output": count_by_row.outputs.output}

    pipeline = spark_pipeline_from_builder(
        data=Input(path="/dataset/iris.csv", type=AssetTypes.URI_FILE, mode=InputOutputModes.DIRECT),
    )
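The driver and executor keyword arguments above correspond to standard Spark configuration keys. A hedged sketch of that correspondence, for intuition only (the helper is hypothetical; the SDK's actual serialization may differ):

```python
# Illustrative mapping from the spark(...) resource kwargs to the
# standard Spark configuration keys they correspond to. A sketch
# for intuition, not the SDK's serialization code.
from typing import Dict, Optional, Union


def to_spark_conf(
    driver_cores: Optional[int] = None,
    driver_memory: Optional[str] = None,
    executor_cores: Optional[int] = None,
    executor_memory: Optional[str] = None,
    executor_instances: Optional[int] = None,
    dynamic_allocation_enabled: Optional[bool] = None,
    dynamic_allocation_min_executors: Optional[int] = None,
    dynamic_allocation_max_executors: Optional[int] = None,
) -> Dict[str, str]:
    pairs: Dict[str, Optional[Union[int, str, bool]]] = {
        "spark.driver.cores": driver_cores,
        "spark.driver.memory": driver_memory,
        "spark.executor.cores": executor_cores,
        "spark.executor.memory": executor_memory,
        "spark.executor.instances": executor_instances,
        "spark.dynamicAllocation.enabled": dynamic_allocation_enabled,
        "spark.dynamicAllocation.minExecutors": dynamic_allocation_min_executors,
        "spark.dynamicAllocation.maxExecutors": dynamic_allocation_max_executors,
    }
    # Unset kwargs are omitted; booleans serialize as "true"/"false".
    return {
        key: (str(value).lower() if isinstance(value, bool) else str(value))
        for key, value in pairs.items()
        if value is not None
    }


conf = to_spark_conf(driver_cores=1, driver_memory="2g", executor_instances=2, dynamic_allocation_enabled=True)
```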
Subpackages¶
- azure.ai.ml.automl package
BlockedTransformers: CAT_TARGET_ENCODER, COUNT_VECTORIZER, HASH_ONE_HOT_ENCODER, LABEL_ENCODER, NAIVE_BAYES, ONE_HOT_ENCODER, TEXT_TARGET_ENCODER, TF_IDF, WORD_EMBEDDING, WO_E_TARGET_ENCODER
ClassificationJob: dump(), set_data(), set_featurization(), set_limits(), set_training(), base_path, creation_context, featurization, id, inputs, limits, log_files, log_verbosity, outputs, primary_metric, status, studio_url, task_type, test_data, training, training_data, type, validation_data
ClassificationModels: BERNOULLI_NAIVE_BAYES, DECISION_TREE, EXTREME_RANDOM_TREES, GRADIENT_BOOSTING, KNN, LIGHT_GBM, LINEAR_SVM, LOGISTIC_REGRESSION, MULTINOMIAL_NAIVE_BAYES, RANDOM_FOREST, SGD, SVM, XG_BOOST_CLASSIFIER
ClassificationMultilabelPrimaryMetricsClassificationMultilabelPrimaryMetrics.capitalize()ClassificationMultilabelPrimaryMetrics.casefold()ClassificationMultilabelPrimaryMetrics.center()ClassificationMultilabelPrimaryMetrics.count()ClassificationMultilabelPrimaryMetrics.encode()ClassificationMultilabelPrimaryMetrics.endswith()ClassificationMultilabelPrimaryMetrics.expandtabs()ClassificationMultilabelPrimaryMetrics.find()ClassificationMultilabelPrimaryMetrics.format()ClassificationMultilabelPrimaryMetrics.format_map()ClassificationMultilabelPrimaryMetrics.index()ClassificationMultilabelPrimaryMetrics.isalnum()ClassificationMultilabelPrimaryMetrics.isalpha()ClassificationMultilabelPrimaryMetrics.isascii()ClassificationMultilabelPrimaryMetrics.isdecimal()ClassificationMultilabelPrimaryMetrics.isdigit()ClassificationMultilabelPrimaryMetrics.isidentifier()ClassificationMultilabelPrimaryMetrics.islower()ClassificationMultilabelPrimaryMetrics.isnumeric()ClassificationMultilabelPrimaryMetrics.isprintable()ClassificationMultilabelPrimaryMetrics.isspace()ClassificationMultilabelPrimaryMetrics.istitle()ClassificationMultilabelPrimaryMetrics.isupper()ClassificationMultilabelPrimaryMetrics.join()ClassificationMultilabelPrimaryMetrics.ljust()ClassificationMultilabelPrimaryMetrics.lower()ClassificationMultilabelPrimaryMetrics.lstrip()ClassificationMultilabelPrimaryMetrics.maketrans()ClassificationMultilabelPrimaryMetrics.partition()ClassificationMultilabelPrimaryMetrics.removeprefix()ClassificationMultilabelPrimaryMetrics.removesuffix()ClassificationMultilabelPrimaryMetrics.replace()ClassificationMultilabelPrimaryMetrics.rfind()ClassificationMultilabelPrimaryMetrics.rindex()ClassificationMultilabelPrimaryMetrics.rjust()ClassificationMultilabelPrimaryMetrics.rpartition()ClassificationMultilabelPrimaryMetrics.rsplit()ClassificationMultilabelPrimaryMetrics.rstrip()ClassificationMultilabelPrimaryMetrics.split()ClassificationMultilabelPrimaryMetrics.splitlines()ClassificationMultilabelP
rimaryMetrics.startswith()ClassificationMultilabelPrimaryMetrics.strip()ClassificationMultilabelPrimaryMetrics.swapcase()ClassificationMultilabelPrimaryMetrics.title()ClassificationMultilabelPrimaryMetrics.translate()ClassificationMultilabelPrimaryMetrics.upper()ClassificationMultilabelPrimaryMetrics.zfill()ClassificationMultilabelPrimaryMetrics.ACCURACYClassificationMultilabelPrimaryMetrics.AUC_WEIGHTEDClassificationMultilabelPrimaryMetrics.AVERAGE_PRECISION_SCORE_WEIGHTEDClassificationMultilabelPrimaryMetrics.IOUClassificationMultilabelPrimaryMetrics.NORM_MACRO_RECALLClassificationMultilabelPrimaryMetrics.PRECISION_SCORE_WEIGHTED
ClassificationPrimaryMetricsClassificationPrimaryMetrics.capitalize()ClassificationPrimaryMetrics.casefold()ClassificationPrimaryMetrics.center()ClassificationPrimaryMetrics.count()ClassificationPrimaryMetrics.encode()ClassificationPrimaryMetrics.endswith()ClassificationPrimaryMetrics.expandtabs()ClassificationPrimaryMetrics.find()ClassificationPrimaryMetrics.format()ClassificationPrimaryMetrics.format_map()ClassificationPrimaryMetrics.index()ClassificationPrimaryMetrics.isalnum()ClassificationPrimaryMetrics.isalpha()ClassificationPrimaryMetrics.isascii()ClassificationPrimaryMetrics.isdecimal()ClassificationPrimaryMetrics.isdigit()ClassificationPrimaryMetrics.isidentifier()ClassificationPrimaryMetrics.islower()ClassificationPrimaryMetrics.isnumeric()ClassificationPrimaryMetrics.isprintable()ClassificationPrimaryMetrics.isspace()ClassificationPrimaryMetrics.istitle()ClassificationPrimaryMetrics.isupper()ClassificationPrimaryMetrics.join()ClassificationPrimaryMetrics.ljust()ClassificationPrimaryMetrics.lower()ClassificationPrimaryMetrics.lstrip()ClassificationPrimaryMetrics.maketrans()ClassificationPrimaryMetrics.partition()ClassificationPrimaryMetrics.removeprefix()ClassificationPrimaryMetrics.removesuffix()ClassificationPrimaryMetrics.replace()ClassificationPrimaryMetrics.rfind()ClassificationPrimaryMetrics.rindex()ClassificationPrimaryMetrics.rjust()ClassificationPrimaryMetrics.rpartition()ClassificationPrimaryMetrics.rsplit()ClassificationPrimaryMetrics.rstrip()ClassificationPrimaryMetrics.split()ClassificationPrimaryMetrics.splitlines()ClassificationPrimaryMetrics.startswith()ClassificationPrimaryMetrics.strip()ClassificationPrimaryMetrics.swapcase()ClassificationPrimaryMetrics.title()ClassificationPrimaryMetrics.translate()ClassificationPrimaryMetrics.upper()ClassificationPrimaryMetrics.zfill()ClassificationPrimaryMetrics.ACCURACYClassificationPrimaryMetrics.AUC_WEIGHTEDClassificationPrimaryMetrics.AVERAGE_PRECISION_SCORE_WEIGHTEDClassificationPrimaryMetrics.NOR
M_MACRO_RECALLClassificationPrimaryMetrics.PRECISION_SCORE_WEIGHTED
ColumnTransformerFeaturizationModeFeaturizationMode.capitalize()FeaturizationMode.casefold()FeaturizationMode.center()FeaturizationMode.count()FeaturizationMode.encode()FeaturizationMode.endswith()FeaturizationMode.expandtabs()FeaturizationMode.find()FeaturizationMode.format()FeaturizationMode.format_map()FeaturizationMode.index()FeaturizationMode.isalnum()FeaturizationMode.isalpha()FeaturizationMode.isascii()FeaturizationMode.isdecimal()FeaturizationMode.isdigit()FeaturizationMode.isidentifier()FeaturizationMode.islower()FeaturizationMode.isnumeric()FeaturizationMode.isprintable()FeaturizationMode.isspace()FeaturizationMode.istitle()FeaturizationMode.isupper()FeaturizationMode.join()FeaturizationMode.ljust()FeaturizationMode.lower()FeaturizationMode.lstrip()FeaturizationMode.maketrans()FeaturizationMode.partition()FeaturizationMode.removeprefix()FeaturizationMode.removesuffix()FeaturizationMode.replace()FeaturizationMode.rfind()FeaturizationMode.rindex()FeaturizationMode.rjust()FeaturizationMode.rpartition()FeaturizationMode.rsplit()FeaturizationMode.rstrip()FeaturizationMode.split()FeaturizationMode.splitlines()FeaturizationMode.startswith()FeaturizationMode.strip()FeaturizationMode.swapcase()FeaturizationMode.title()FeaturizationMode.translate()FeaturizationMode.upper()FeaturizationMode.zfill()FeaturizationMode.AUTOFeaturizationMode.CUSTOMFeaturizationMode.OFF
ForecastHorizonModeForecastHorizonMode.capitalize()ForecastHorizonMode.casefold()ForecastHorizonMode.center()ForecastHorizonMode.count()ForecastHorizonMode.encode()ForecastHorizonMode.endswith()ForecastHorizonMode.expandtabs()ForecastHorizonMode.find()ForecastHorizonMode.format()ForecastHorizonMode.format_map()ForecastHorizonMode.index()ForecastHorizonMode.isalnum()ForecastHorizonMode.isalpha()ForecastHorizonMode.isascii()ForecastHorizonMode.isdecimal()ForecastHorizonMode.isdigit()ForecastHorizonMode.isidentifier()ForecastHorizonMode.islower()ForecastHorizonMode.isnumeric()ForecastHorizonMode.isprintable()ForecastHorizonMode.isspace()ForecastHorizonMode.istitle()ForecastHorizonMode.isupper()ForecastHorizonMode.join()ForecastHorizonMode.ljust()ForecastHorizonMode.lower()ForecastHorizonMode.lstrip()ForecastHorizonMode.maketrans()ForecastHorizonMode.partition()ForecastHorizonMode.removeprefix()ForecastHorizonMode.removesuffix()ForecastHorizonMode.replace()ForecastHorizonMode.rfind()ForecastHorizonMode.rindex()ForecastHorizonMode.rjust()ForecastHorizonMode.rpartition()ForecastHorizonMode.rsplit()ForecastHorizonMode.rstrip()ForecastHorizonMode.split()ForecastHorizonMode.splitlines()ForecastHorizonMode.startswith()ForecastHorizonMode.strip()ForecastHorizonMode.swapcase()ForecastHorizonMode.title()ForecastHorizonMode.translate()ForecastHorizonMode.upper()ForecastHorizonMode.zfill()ForecastHorizonMode.AUTOForecastHorizonMode.CUSTOM
ForecastingJobForecastingJob.dump()ForecastingJob.set_data()ForecastingJob.set_featurization()ForecastingJob.set_forecast_settings()ForecastingJob.set_limits()ForecastingJob.set_training()ForecastingJob.base_pathForecastingJob.creation_contextForecastingJob.featurizationForecastingJob.forecasting_settingsForecastingJob.idForecastingJob.inputsForecastingJob.limitsForecastingJob.log_filesForecastingJob.log_verbosityForecastingJob.outputsForecastingJob.primary_metricForecastingJob.statusForecastingJob.studio_urlForecastingJob.task_typeForecastingJob.test_dataForecastingJob.trainingForecastingJob.training_dataForecastingJob.typeForecastingJob.validation_data
ForecastingModelsForecastingModels.capitalize()ForecastingModels.casefold()ForecastingModels.center()ForecastingModels.count()ForecastingModels.encode()ForecastingModels.endswith()ForecastingModels.expandtabs()ForecastingModels.find()ForecastingModels.format()ForecastingModels.format_map()ForecastingModels.index()ForecastingModels.isalnum()ForecastingModels.isalpha()ForecastingModels.isascii()ForecastingModels.isdecimal()ForecastingModels.isdigit()ForecastingModels.isidentifier()ForecastingModels.islower()ForecastingModels.isnumeric()ForecastingModels.isprintable()ForecastingModels.isspace()ForecastingModels.istitle()ForecastingModels.isupper()ForecastingModels.join()ForecastingModels.ljust()ForecastingModels.lower()ForecastingModels.lstrip()ForecastingModels.maketrans()ForecastingModels.partition()ForecastingModels.removeprefix()ForecastingModels.removesuffix()ForecastingModels.replace()ForecastingModels.rfind()ForecastingModels.rindex()ForecastingModels.rjust()ForecastingModels.rpartition()ForecastingModels.rsplit()ForecastingModels.rstrip()ForecastingModels.split()ForecastingModels.splitlines()ForecastingModels.startswith()ForecastingModels.strip()ForecastingModels.swapcase()ForecastingModels.title()ForecastingModels.translate()ForecastingModels.upper()ForecastingModels.zfill()ForecastingModels.ARIMAXForecastingModels.AUTO_ARIMAForecastingModels.AVERAGEForecastingModels.DECISION_TREEForecastingModels.ELASTIC_NETForecastingModels.EXPONENTIAL_SMOOTHINGForecastingModels.EXTREME_RANDOM_TREESForecastingModels.GRADIENT_BOOSTINGForecastingModels.KNNForecastingModels.LASSO_LARSForecastingModels.LIGHT_GBMForecastingModels.NAIVEForecastingModels.PROPHETForecastingModels.RANDOM_FORESTForecastingModels.SEASONAL_AVERAGEForecastingModels.SEASONAL_NAIVEForecastingModels.SGDForecastingModels.TCN_FORECASTERForecastingModels.XG_BOOST_REGRESSOR
ForecastingPrimaryMetricsForecastingPrimaryMetrics.capitalize()ForecastingPrimaryMetrics.casefold()ForecastingPrimaryMetrics.center()ForecastingPrimaryMetrics.count()ForecastingPrimaryMetrics.encode()ForecastingPrimaryMetrics.endswith()ForecastingPrimaryMetrics.expandtabs()ForecastingPrimaryMetrics.find()ForecastingPrimaryMetrics.format()ForecastingPrimaryMetrics.format_map()ForecastingPrimaryMetrics.index()ForecastingPrimaryMetrics.isalnum()ForecastingPrimaryMetrics.isalpha()ForecastingPrimaryMetrics.isascii()ForecastingPrimaryMetrics.isdecimal()ForecastingPrimaryMetrics.isdigit()ForecastingPrimaryMetrics.isidentifier()ForecastingPrimaryMetrics.islower()ForecastingPrimaryMetrics.isnumeric()ForecastingPrimaryMetrics.isprintable()ForecastingPrimaryMetrics.isspace()ForecastingPrimaryMetrics.istitle()ForecastingPrimaryMetrics.isupper()ForecastingPrimaryMetrics.join()ForecastingPrimaryMetrics.ljust()ForecastingPrimaryMetrics.lower()ForecastingPrimaryMetrics.lstrip()ForecastingPrimaryMetrics.maketrans()ForecastingPrimaryMetrics.partition()ForecastingPrimaryMetrics.removeprefix()ForecastingPrimaryMetrics.removesuffix()ForecastingPrimaryMetrics.replace()ForecastingPrimaryMetrics.rfind()ForecastingPrimaryMetrics.rindex()ForecastingPrimaryMetrics.rjust()ForecastingPrimaryMetrics.rpartition()ForecastingPrimaryMetrics.rsplit()ForecastingPrimaryMetrics.rstrip()ForecastingPrimaryMetrics.split()ForecastingPrimaryMetrics.splitlines()ForecastingPrimaryMetrics.startswith()ForecastingPrimaryMetrics.strip()ForecastingPrimaryMetrics.swapcase()ForecastingPrimaryMetrics.title()ForecastingPrimaryMetrics.translate()ForecastingPrimaryMetrics.upper()ForecastingPrimaryMetrics.zfill()ForecastingPrimaryMetrics.NORMALIZED_MEAN_ABSOLUTE_ERRORForecastingPrimaryMetrics.NORMALIZED_ROOT_MEAN_SQUARED_ERRORForecastingPrimaryMetrics.R2_SCOREForecastingPrimaryMetrics.SPEARMAN_CORRELATION
ForecastingSettingsImageClassificationJobImageClassificationJob.dump()ImageClassificationJob.extend_search_space()ImageClassificationJob.set_data()ImageClassificationJob.set_limits()ImageClassificationJob.set_sweep()ImageClassificationJob.set_training_parameters()ImageClassificationJob.base_pathImageClassificationJob.creation_contextImageClassificationJob.idImageClassificationJob.inputsImageClassificationJob.limitsImageClassificationJob.log_filesImageClassificationJob.log_verbosityImageClassificationJob.outputsImageClassificationJob.primary_metricImageClassificationJob.search_spaceImageClassificationJob.statusImageClassificationJob.studio_urlImageClassificationJob.sweepImageClassificationJob.task_typeImageClassificationJob.test_dataImageClassificationJob.training_dataImageClassificationJob.training_parametersImageClassificationJob.typeImageClassificationJob.validation_data
ImageClassificationMultilabelJobImageClassificationMultilabelJob.dump()ImageClassificationMultilabelJob.extend_search_space()ImageClassificationMultilabelJob.set_data()ImageClassificationMultilabelJob.set_limits()ImageClassificationMultilabelJob.set_sweep()ImageClassificationMultilabelJob.set_training_parameters()ImageClassificationMultilabelJob.base_pathImageClassificationMultilabelJob.creation_contextImageClassificationMultilabelJob.idImageClassificationMultilabelJob.inputsImageClassificationMultilabelJob.limitsImageClassificationMultilabelJob.log_filesImageClassificationMultilabelJob.log_verbosityImageClassificationMultilabelJob.outputsImageClassificationMultilabelJob.primary_metricImageClassificationMultilabelJob.search_spaceImageClassificationMultilabelJob.statusImageClassificationMultilabelJob.studio_urlImageClassificationMultilabelJob.sweepImageClassificationMultilabelJob.task_typeImageClassificationMultilabelJob.test_dataImageClassificationMultilabelJob.training_dataImageClassificationMultilabelJob.training_parametersImageClassificationMultilabelJob.typeImageClassificationMultilabelJob.validation_data
ImageClassificationSearchSpaceImageInstanceSegmentationJobImageInstanceSegmentationJob.dump()ImageInstanceSegmentationJob.extend_search_space()ImageInstanceSegmentationJob.set_data()ImageInstanceSegmentationJob.set_limits()ImageInstanceSegmentationJob.set_sweep()ImageInstanceSegmentationJob.set_training_parameters()ImageInstanceSegmentationJob.base_pathImageInstanceSegmentationJob.creation_contextImageInstanceSegmentationJob.idImageInstanceSegmentationJob.inputsImageInstanceSegmentationJob.limitsImageInstanceSegmentationJob.log_filesImageInstanceSegmentationJob.log_verbosityImageInstanceSegmentationJob.outputsImageInstanceSegmentationJob.primary_metricImageInstanceSegmentationJob.search_spaceImageInstanceSegmentationJob.statusImageInstanceSegmentationJob.studio_urlImageInstanceSegmentationJob.sweepImageInstanceSegmentationJob.task_typeImageInstanceSegmentationJob.test_dataImageInstanceSegmentationJob.training_dataImageInstanceSegmentationJob.training_parametersImageInstanceSegmentationJob.typeImageInstanceSegmentationJob.validation_data
ImageLimitSettingsImageModelSettingsClassificationImageModelSettingsObjectDetectionImageObjectDetectionJobImageObjectDetectionJob.dump()ImageObjectDetectionJob.extend_search_space()ImageObjectDetectionJob.set_data()ImageObjectDetectionJob.set_limits()ImageObjectDetectionJob.set_sweep()ImageObjectDetectionJob.set_training_parameters()ImageObjectDetectionJob.base_pathImageObjectDetectionJob.creation_contextImageObjectDetectionJob.idImageObjectDetectionJob.inputsImageObjectDetectionJob.limitsImageObjectDetectionJob.log_filesImageObjectDetectionJob.log_verbosityImageObjectDetectionJob.outputsImageObjectDetectionJob.primary_metricImageObjectDetectionJob.search_spaceImageObjectDetectionJob.statusImageObjectDetectionJob.studio_urlImageObjectDetectionJob.sweepImageObjectDetectionJob.task_typeImageObjectDetectionJob.test_dataImageObjectDetectionJob.training_dataImageObjectDetectionJob.training_parametersImageObjectDetectionJob.typeImageObjectDetectionJob.validation_data
ImageObjectDetectionSearchSpaceImageSweepSettingsInstanceSegmentationPrimaryMetricsInstanceSegmentationPrimaryMetrics.capitalize()InstanceSegmentationPrimaryMetrics.casefold()InstanceSegmentationPrimaryMetrics.center()InstanceSegmentationPrimaryMetrics.count()InstanceSegmentationPrimaryMetrics.encode()InstanceSegmentationPrimaryMetrics.endswith()InstanceSegmentationPrimaryMetrics.expandtabs()InstanceSegmentationPrimaryMetrics.find()InstanceSegmentationPrimaryMetrics.format()InstanceSegmentationPrimaryMetrics.format_map()InstanceSegmentationPrimaryMetrics.index()InstanceSegmentationPrimaryMetrics.isalnum()InstanceSegmentationPrimaryMetrics.isalpha()InstanceSegmentationPrimaryMetrics.isascii()InstanceSegmentationPrimaryMetrics.isdecimal()InstanceSegmentationPrimaryMetrics.isdigit()InstanceSegmentationPrimaryMetrics.isidentifier()InstanceSegmentationPrimaryMetrics.islower()InstanceSegmentationPrimaryMetrics.isnumeric()InstanceSegmentationPrimaryMetrics.isprintable()InstanceSegmentationPrimaryMetrics.isspace()InstanceSegmentationPrimaryMetrics.istitle()InstanceSegmentationPrimaryMetrics.isupper()InstanceSegmentationPrimaryMetrics.join()InstanceSegmentationPrimaryMetrics.ljust()InstanceSegmentationPrimaryMetrics.lower()InstanceSegmentationPrimaryMetrics.lstrip()InstanceSegmentationPrimaryMetrics.maketrans()InstanceSegmentationPrimaryMetrics.partition()InstanceSegmentationPrimaryMetrics.removeprefix()InstanceSegmentationPrimaryMetrics.removesuffix()InstanceSegmentationPrimaryMetrics.replace()InstanceSegmentationPrimaryMetrics.rfind()InstanceSegmentationPrimaryMetrics.rindex()InstanceSegmentationPrimaryMetrics.rjust()InstanceSegmentationPrimaryMetrics.rpartition()InstanceSegmentationPrimaryMetrics.rsplit()InstanceSegmentationPrimaryMetrics.rstrip()InstanceSegmentationPrimaryMetrics.split()InstanceSegmentationPrimaryMetrics.splitlines()InstanceSegmentationPrimaryMetrics.startswith()InstanceSegmentationPrimaryMetrics.strip()InstanceSegmentationPrimaryMetrics.swapcase()Instan
ceSegmentationPrimaryMetrics.title()InstanceSegmentationPrimaryMetrics.translate()InstanceSegmentationPrimaryMetrics.upper()InstanceSegmentationPrimaryMetrics.zfill()InstanceSegmentationPrimaryMetrics.MEAN_AVERAGE_PRECISION
LearningRateSchedulerLearningRateScheduler.capitalize()LearningRateScheduler.casefold()LearningRateScheduler.center()LearningRateScheduler.count()LearningRateScheduler.encode()LearningRateScheduler.endswith()LearningRateScheduler.expandtabs()LearningRateScheduler.find()LearningRateScheduler.format()LearningRateScheduler.format_map()LearningRateScheduler.index()LearningRateScheduler.isalnum()LearningRateScheduler.isalpha()LearningRateScheduler.isascii()LearningRateScheduler.isdecimal()LearningRateScheduler.isdigit()LearningRateScheduler.isidentifier()LearningRateScheduler.islower()LearningRateScheduler.isnumeric()LearningRateScheduler.isprintable()LearningRateScheduler.isspace()LearningRateScheduler.istitle()LearningRateScheduler.isupper()LearningRateScheduler.join()LearningRateScheduler.ljust()LearningRateScheduler.lower()LearningRateScheduler.lstrip()LearningRateScheduler.maketrans()LearningRateScheduler.partition()LearningRateScheduler.removeprefix()LearningRateScheduler.removesuffix()LearningRateScheduler.replace()LearningRateScheduler.rfind()LearningRateScheduler.rindex()LearningRateScheduler.rjust()LearningRateScheduler.rpartition()LearningRateScheduler.rsplit()LearningRateScheduler.rstrip()LearningRateScheduler.split()LearningRateScheduler.splitlines()LearningRateScheduler.startswith()LearningRateScheduler.strip()LearningRateScheduler.swapcase()LearningRateScheduler.title()LearningRateScheduler.translate()LearningRateScheduler.upper()LearningRateScheduler.zfill()LearningRateScheduler.NONELearningRateScheduler.STEPLearningRateScheduler.WARMUP_COSINE
LogTrainingMetricsLogTrainingMetrics.capitalize()LogTrainingMetrics.casefold()LogTrainingMetrics.center()LogTrainingMetrics.count()LogTrainingMetrics.encode()LogTrainingMetrics.endswith()LogTrainingMetrics.expandtabs()LogTrainingMetrics.find()LogTrainingMetrics.format()LogTrainingMetrics.format_map()LogTrainingMetrics.index()LogTrainingMetrics.isalnum()LogTrainingMetrics.isalpha()LogTrainingMetrics.isascii()LogTrainingMetrics.isdecimal()LogTrainingMetrics.isdigit()LogTrainingMetrics.isidentifier()LogTrainingMetrics.islower()LogTrainingMetrics.isnumeric()LogTrainingMetrics.isprintable()LogTrainingMetrics.isspace()LogTrainingMetrics.istitle()LogTrainingMetrics.isupper()LogTrainingMetrics.join()LogTrainingMetrics.ljust()LogTrainingMetrics.lower()LogTrainingMetrics.lstrip()LogTrainingMetrics.maketrans()LogTrainingMetrics.partition()LogTrainingMetrics.removeprefix()LogTrainingMetrics.removesuffix()LogTrainingMetrics.replace()LogTrainingMetrics.rfind()LogTrainingMetrics.rindex()LogTrainingMetrics.rjust()LogTrainingMetrics.rpartition()LogTrainingMetrics.rsplit()LogTrainingMetrics.rstrip()LogTrainingMetrics.split()LogTrainingMetrics.splitlines()LogTrainingMetrics.startswith()LogTrainingMetrics.strip()LogTrainingMetrics.swapcase()LogTrainingMetrics.title()LogTrainingMetrics.translate()LogTrainingMetrics.upper()LogTrainingMetrics.zfill()LogTrainingMetrics.DISABLELogTrainingMetrics.ENABLE
LogValidationLossLogValidationLoss.capitalize()LogValidationLoss.casefold()LogValidationLoss.center()LogValidationLoss.count()LogValidationLoss.encode()LogValidationLoss.endswith()LogValidationLoss.expandtabs()LogValidationLoss.find()LogValidationLoss.format()LogValidationLoss.format_map()LogValidationLoss.index()LogValidationLoss.isalnum()LogValidationLoss.isalpha()LogValidationLoss.isascii()LogValidationLoss.isdecimal()LogValidationLoss.isdigit()LogValidationLoss.isidentifier()LogValidationLoss.islower()LogValidationLoss.isnumeric()LogValidationLoss.isprintable()LogValidationLoss.isspace()LogValidationLoss.istitle()LogValidationLoss.isupper()LogValidationLoss.join()LogValidationLoss.ljust()LogValidationLoss.lower()LogValidationLoss.lstrip()LogValidationLoss.maketrans()LogValidationLoss.partition()LogValidationLoss.removeprefix()LogValidationLoss.removesuffix()LogValidationLoss.replace()LogValidationLoss.rfind()LogValidationLoss.rindex()LogValidationLoss.rjust()LogValidationLoss.rpartition()LogValidationLoss.rsplit()LogValidationLoss.rstrip()LogValidationLoss.split()LogValidationLoss.splitlines()LogValidationLoss.startswith()LogValidationLoss.strip()LogValidationLoss.swapcase()LogValidationLoss.title()LogValidationLoss.translate()LogValidationLoss.upper()LogValidationLoss.zfill()LogValidationLoss.DISABLELogValidationLoss.ENABLE
NCrossValidationsModeNCrossValidationsMode.capitalize()NCrossValidationsMode.casefold()NCrossValidationsMode.center()NCrossValidationsMode.count()NCrossValidationsMode.encode()NCrossValidationsMode.endswith()NCrossValidationsMode.expandtabs()NCrossValidationsMode.find()NCrossValidationsMode.format()NCrossValidationsMode.format_map()NCrossValidationsMode.index()NCrossValidationsMode.isalnum()NCrossValidationsMode.isalpha()NCrossValidationsMode.isascii()NCrossValidationsMode.isdecimal()NCrossValidationsMode.isdigit()NCrossValidationsMode.isidentifier()NCrossValidationsMode.islower()NCrossValidationsMode.isnumeric()NCrossValidationsMode.isprintable()NCrossValidationsMode.isspace()NCrossValidationsMode.istitle()NCrossValidationsMode.isupper()NCrossValidationsMode.join()NCrossValidationsMode.ljust()NCrossValidationsMode.lower()NCrossValidationsMode.lstrip()NCrossValidationsMode.maketrans()NCrossValidationsMode.partition()NCrossValidationsMode.removeprefix()NCrossValidationsMode.removesuffix()NCrossValidationsMode.replace()NCrossValidationsMode.rfind()NCrossValidationsMode.rindex()NCrossValidationsMode.rjust()NCrossValidationsMode.rpartition()NCrossValidationsMode.rsplit()NCrossValidationsMode.rstrip()NCrossValidationsMode.split()NCrossValidationsMode.splitlines()NCrossValidationsMode.startswith()NCrossValidationsMode.strip()NCrossValidationsMode.swapcase()NCrossValidationsMode.title()NCrossValidationsMode.translate()NCrossValidationsMode.upper()NCrossValidationsMode.zfill()NCrossValidationsMode.AUTONCrossValidationsMode.CUSTOM
NlpFeaturizationSettingsNlpFixedParametersNlpLimitSettingsNlpSearchSpaceNlpSweepSettingsObjectDetectionPrimaryMetricsObjectDetectionPrimaryMetrics.capitalize()ObjectDetectionPrimaryMetrics.casefold()ObjectDetectionPrimaryMetrics.center()ObjectDetectionPrimaryMetrics.count()ObjectDetectionPrimaryMetrics.encode()ObjectDetectionPrimaryMetrics.endswith()ObjectDetectionPrimaryMetrics.expandtabs()ObjectDetectionPrimaryMetrics.find()ObjectDetectionPrimaryMetrics.format()ObjectDetectionPrimaryMetrics.format_map()ObjectDetectionPrimaryMetrics.index()ObjectDetectionPrimaryMetrics.isalnum()ObjectDetectionPrimaryMetrics.isalpha()ObjectDetectionPrimaryMetrics.isascii()ObjectDetectionPrimaryMetrics.isdecimal()ObjectDetectionPrimaryMetrics.isdigit()ObjectDetectionPrimaryMetrics.isidentifier()ObjectDetectionPrimaryMetrics.islower()ObjectDetectionPrimaryMetrics.isnumeric()ObjectDetectionPrimaryMetrics.isprintable()ObjectDetectionPrimaryMetrics.isspace()ObjectDetectionPrimaryMetrics.istitle()ObjectDetectionPrimaryMetrics.isupper()ObjectDetectionPrimaryMetrics.join()ObjectDetectionPrimaryMetrics.ljust()ObjectDetectionPrimaryMetrics.lower()ObjectDetectionPrimaryMetrics.lstrip()ObjectDetectionPrimaryMetrics.maketrans()ObjectDetectionPrimaryMetrics.partition()ObjectDetectionPrimaryMetrics.removeprefix()ObjectDetectionPrimaryMetrics.removesuffix()ObjectDetectionPrimaryMetrics.replace()ObjectDetectionPrimaryMetrics.rfind()ObjectDetectionPrimaryMetrics.rindex()ObjectDetectionPrimaryMetrics.rjust()ObjectDetectionPrimaryMetrics.rpartition()ObjectDetectionPrimaryMetrics.rsplit()ObjectDetectionPrimaryMetrics.rstrip()ObjectDetectionPrimaryMetrics.split()ObjectDetectionPrimaryMetrics.splitlines()ObjectDetectionPrimaryMetrics.startswith()ObjectDetectionPrimaryMetrics.strip()ObjectDetectionPrimaryMetrics.swapcase()ObjectDetectionPrimaryMetrics.title()ObjectDetectionPrimaryMetrics.translate()ObjectDetectionPrimaryMetrics.upper()ObjectDetectionPrimaryMetrics.zfill()ObjectDetectionPrimaryMetrics.MEAN_
AVERAGE_PRECISION
RegressionJobRegressionJob.dump()RegressionJob.set_data()RegressionJob.set_featurization()RegressionJob.set_limits()RegressionJob.set_training()RegressionJob.base_pathRegressionJob.creation_contextRegressionJob.featurizationRegressionJob.idRegressionJob.inputsRegressionJob.limitsRegressionJob.log_filesRegressionJob.log_verbosityRegressionJob.outputsRegressionJob.primary_metricRegressionJob.statusRegressionJob.studio_urlRegressionJob.task_typeRegressionJob.test_dataRegressionJob.trainingRegressionJob.training_dataRegressionJob.typeRegressionJob.validation_data
RegressionModelsRegressionModels.capitalize()RegressionModels.casefold()RegressionModels.center()RegressionModels.count()RegressionModels.encode()RegressionModels.endswith()RegressionModels.expandtabs()RegressionModels.find()RegressionModels.format()RegressionModels.format_map()RegressionModels.index()RegressionModels.isalnum()RegressionModels.isalpha()RegressionModels.isascii()RegressionModels.isdecimal()RegressionModels.isdigit()RegressionModels.isidentifier()RegressionModels.islower()RegressionModels.isnumeric()RegressionModels.isprintable()RegressionModels.isspace()RegressionModels.istitle()RegressionModels.isupper()RegressionModels.join()RegressionModels.ljust()RegressionModels.lower()RegressionModels.lstrip()RegressionModels.maketrans()RegressionModels.partition()RegressionModels.removeprefix()RegressionModels.removesuffix()RegressionModels.replace()RegressionModels.rfind()RegressionModels.rindex()RegressionModels.rjust()RegressionModels.rpartition()RegressionModels.rsplit()RegressionModels.rstrip()RegressionModels.split()RegressionModels.splitlines()RegressionModels.startswith()RegressionModels.strip()RegressionModels.swapcase()RegressionModels.title()RegressionModels.translate()RegressionModels.upper()RegressionModels.zfill()RegressionModels.DECISION_TREERegressionModels.ELASTIC_NETRegressionModels.EXTREME_RANDOM_TREESRegressionModels.GRADIENT_BOOSTINGRegressionModels.KNNRegressionModels.LASSO_LARSRegressionModels.LIGHT_GBMRegressionModels.RANDOM_FORESTRegressionModels.SGDRegressionModels.XG_BOOST_REGRESSOR
RegressionPrimaryMetricsRegressionPrimaryMetrics.capitalize()RegressionPrimaryMetrics.casefold()RegressionPrimaryMetrics.center()RegressionPrimaryMetrics.count()RegressionPrimaryMetrics.encode()RegressionPrimaryMetrics.endswith()RegressionPrimaryMetrics.expandtabs()RegressionPrimaryMetrics.find()RegressionPrimaryMetrics.format()RegressionPrimaryMetrics.format_map()RegressionPrimaryMetrics.index()RegressionPrimaryMetrics.isalnum()RegressionPrimaryMetrics.isalpha()RegressionPrimaryMetrics.isascii()RegressionPrimaryMetrics.isdecimal()RegressionPrimaryMetrics.isdigit()RegressionPrimaryMetrics.isidentifier()RegressionPrimaryMetrics.islower()RegressionPrimaryMetrics.isnumeric()RegressionPrimaryMetrics.isprintable()RegressionPrimaryMetrics.isspace()RegressionPrimaryMetrics.istitle()RegressionPrimaryMetrics.isupper()RegressionPrimaryMetrics.join()RegressionPrimaryMetrics.ljust()RegressionPrimaryMetrics.lower()RegressionPrimaryMetrics.lstrip()RegressionPrimaryMetrics.maketrans()RegressionPrimaryMetrics.partition()RegressionPrimaryMetrics.removeprefix()RegressionPrimaryMetrics.removesuffix()RegressionPrimaryMetrics.replace()RegressionPrimaryMetrics.rfind()RegressionPrimaryMetrics.rindex()RegressionPrimaryMetrics.rjust()RegressionPrimaryMetrics.rpartition()RegressionPrimaryMetrics.rsplit()RegressionPrimaryMetrics.rstrip()RegressionPrimaryMetrics.split()RegressionPrimaryMetrics.splitlines()RegressionPrimaryMetrics.startswith()RegressionPrimaryMetrics.strip()RegressionPrimaryMetrics.swapcase()RegressionPrimaryMetrics.title()RegressionPrimaryMetrics.translate()RegressionPrimaryMetrics.upper()RegressionPrimaryMetrics.zfill()RegressionPrimaryMetrics.NORMALIZED_MEAN_ABSOLUTE_ERRORRegressionPrimaryMetrics.NORMALIZED_ROOT_MEAN_SQUARED_ERRORRegressionPrimaryMetrics.R2_SCORERegressionPrimaryMetrics.SPEARMAN_CORRELATION
SamplingAlgorithmTypeSamplingAlgorithmType.capitalize()SamplingAlgorithmType.casefold()SamplingAlgorithmType.center()SamplingAlgorithmType.count()SamplingAlgorithmType.encode()SamplingAlgorithmType.endswith()SamplingAlgorithmType.expandtabs()SamplingAlgorithmType.find()SamplingAlgorithmType.format()SamplingAlgorithmType.format_map()SamplingAlgorithmType.index()SamplingAlgorithmType.isalnum()SamplingAlgorithmType.isalpha()SamplingAlgorithmType.isascii()SamplingAlgorithmType.isdecimal()SamplingAlgorithmType.isdigit()SamplingAlgorithmType.isidentifier()SamplingAlgorithmType.islower()SamplingAlgorithmType.isnumeric()SamplingAlgorithmType.isprintable()SamplingAlgorithmType.isspace()SamplingAlgorithmType.istitle()SamplingAlgorithmType.isupper()SamplingAlgorithmType.join()SamplingAlgorithmType.ljust()SamplingAlgorithmType.lower()SamplingAlgorithmType.lstrip()SamplingAlgorithmType.maketrans()SamplingAlgorithmType.partition()SamplingAlgorithmType.removeprefix()SamplingAlgorithmType.removesuffix()SamplingAlgorithmType.replace()SamplingAlgorithmType.rfind()SamplingAlgorithmType.rindex()SamplingAlgorithmType.rjust()SamplingAlgorithmType.rpartition()SamplingAlgorithmType.rsplit()SamplingAlgorithmType.rstrip()SamplingAlgorithmType.split()SamplingAlgorithmType.splitlines()SamplingAlgorithmType.startswith()SamplingAlgorithmType.strip()SamplingAlgorithmType.swapcase()SamplingAlgorithmType.title()SamplingAlgorithmType.translate()SamplingAlgorithmType.upper()SamplingAlgorithmType.zfill()SamplingAlgorithmType.BAYESIANSamplingAlgorithmType.GRIDSamplingAlgorithmType.RANDOM
- SearchSpace
- ShortSeriesHandlingConfiguration: AUTO, DROP, NONE, PAD
- StackEnsembleSettings
- StochasticOptimizer: ADAM, ADAMW, NONE, SGD
- TabularFeaturizationSettings
- TabularLimitSettings
- TargetAggregationFunction: MAX, MEAN, MIN, NONE, SUM
- TargetLagsMode: AUTO, CUSTOM
- TargetRollingWindowSizeMode: AUTO, CUSTOM
- TextClassificationJob: dump(), extend_search_space(), set_data(), set_featurization(), set_limits(), set_sweep(), set_training_parameters(); properties: base_path, creation_context, featurization, id, inputs, limits, log_files, log_verbosity, outputs, primary_metric, search_space, status, studio_url, sweep, task_type, test_data, training_data, training_parameters, type, validation_data
- TextClassificationMultilabelJob: same methods and properties as TextClassificationJob
- TextNerJob: same methods and properties as TextClassificationJob
- TrainingSettings
- UseStl: NONE, SEASON, SEASON_TREND
- ValidationMetricType: COCO, COCO_VOC, NONE, VOC
- Functions: classification(), forecasting(), image_classification(), image_classification_multilabel(), image_instance_segmentation(), image_object_detection(), regression(), text_classification(), text_classification_multilabel(), text_ner()
- azure.ai.ml.constants package
AcrAccountSkuAcrAccountSku.capitalize()AcrAccountSku.casefold()AcrAccountSku.center()AcrAccountSku.count()AcrAccountSku.encode()AcrAccountSku.endswith()AcrAccountSku.expandtabs()AcrAccountSku.find()AcrAccountSku.format()AcrAccountSku.format_map()AcrAccountSku.index()AcrAccountSku.isalnum()AcrAccountSku.isalpha()AcrAccountSku.isascii()AcrAccountSku.isdecimal()AcrAccountSku.isdigit()AcrAccountSku.isidentifier()AcrAccountSku.islower()AcrAccountSku.isnumeric()AcrAccountSku.isprintable()AcrAccountSku.isspace()AcrAccountSku.istitle()AcrAccountSku.isupper()AcrAccountSku.join()AcrAccountSku.ljust()AcrAccountSku.lower()AcrAccountSku.lstrip()AcrAccountSku.maketrans()AcrAccountSku.partition()AcrAccountSku.removeprefix()AcrAccountSku.removesuffix()AcrAccountSku.replace()AcrAccountSku.rfind()AcrAccountSku.rindex()AcrAccountSku.rjust()AcrAccountSku.rpartition()AcrAccountSku.rsplit()AcrAccountSku.rstrip()AcrAccountSku.split()AcrAccountSku.splitlines()AcrAccountSku.startswith()AcrAccountSku.strip()AcrAccountSku.swapcase()AcrAccountSku.title()AcrAccountSku.translate()AcrAccountSku.upper()AcrAccountSku.zfill()AcrAccountSku.PREMIUM
AssetTypesBatchDeploymentOutputActionDataGenerationTaskTypeDataGenerationTypeDistributionTypeIPProtectionLevelIPProtectionLevel.capitalize()IPProtectionLevel.casefold()IPProtectionLevel.center()IPProtectionLevel.count()IPProtectionLevel.encode()IPProtectionLevel.endswith()IPProtectionLevel.expandtabs()IPProtectionLevel.find()IPProtectionLevel.format()IPProtectionLevel.format_map()IPProtectionLevel.index()IPProtectionLevel.isalnum()IPProtectionLevel.isalpha()IPProtectionLevel.isascii()IPProtectionLevel.isdecimal()IPProtectionLevel.isdigit()IPProtectionLevel.isidentifier()IPProtectionLevel.islower()IPProtectionLevel.isnumeric()IPProtectionLevel.isprintable()IPProtectionLevel.isspace()IPProtectionLevel.istitle()IPProtectionLevel.isupper()IPProtectionLevel.join()IPProtectionLevel.ljust()IPProtectionLevel.lower()IPProtectionLevel.lstrip()IPProtectionLevel.maketrans()IPProtectionLevel.partition()IPProtectionLevel.removeprefix()IPProtectionLevel.removesuffix()IPProtectionLevel.replace()IPProtectionLevel.rfind()IPProtectionLevel.rindex()IPProtectionLevel.rjust()IPProtectionLevel.rpartition()IPProtectionLevel.rsplit()IPProtectionLevel.rstrip()IPProtectionLevel.split()IPProtectionLevel.splitlines()IPProtectionLevel.startswith()IPProtectionLevel.strip()IPProtectionLevel.swapcase()IPProtectionLevel.title()IPProtectionLevel.translate()IPProtectionLevel.upper()IPProtectionLevel.zfill()IPProtectionLevel.ALLIPProtectionLevel.NONE
ImageClassificationModelNamesImageClassificationModelNames.MOBILENETV2ImageClassificationModelNames.RESNEST101ImageClassificationModelNames.RESNEST50ImageClassificationModelNames.RESNET101ImageClassificationModelNames.RESNET152ImageClassificationModelNames.RESNET18ImageClassificationModelNames.RESNET34ImageClassificationModelNames.RESNET50ImageClassificationModelNames.SERESNEXTImageClassificationModelNames.VITB16R224ImageClassificationModelNames.VITL16R224ImageClassificationModelNames.VITS16R224
ImageInstanceSegmentationModelNamesImageObjectDetectionModelNamesImageObjectDetectionModelNames.FASTERRCNN_RESNET101_FPNImageObjectDetectionModelNames.FASTERRCNN_RESNET152_FPNImageObjectDetectionModelNames.FASTERRCNN_RESNET18_FPNImageObjectDetectionModelNames.FASTERRCNN_RESNET34_FPNImageObjectDetectionModelNames.FASTERRCNN_RESNET50_FPNImageObjectDetectionModelNames.RETINANET_RESNET50_FPNImageObjectDetectionModelNames.YOLOV5
ImportSourceTypeInputOutputModesInputTypesJobTypeListViewTypeListViewType.capitalize()ListViewType.casefold()ListViewType.center()ListViewType.count()ListViewType.encode()ListViewType.endswith()ListViewType.expandtabs()ListViewType.find()ListViewType.format()ListViewType.format_map()ListViewType.index()ListViewType.isalnum()ListViewType.isalpha()ListViewType.isascii()ListViewType.isdecimal()ListViewType.isdigit()ListViewType.isidentifier()ListViewType.islower()ListViewType.isnumeric()ListViewType.isprintable()ListViewType.isspace()ListViewType.istitle()ListViewType.isupper()ListViewType.join()ListViewType.ljust()ListViewType.lower()ListViewType.lstrip()ListViewType.maketrans()ListViewType.partition()ListViewType.removeprefix()ListViewType.removesuffix()ListViewType.replace()ListViewType.rfind()ListViewType.rindex()ListViewType.rjust()ListViewType.rpartition()ListViewType.rsplit()ListViewType.rstrip()ListViewType.split()ListViewType.splitlines()ListViewType.startswith()ListViewType.strip()ListViewType.swapcase()ListViewType.title()ListViewType.translate()ListViewType.upper()ListViewType.zfill()ListViewType.ACTIVE_ONLYListViewType.ALLListViewType.ARCHIVED_ONLY
ManagedServiceIdentityTypeManagedServiceIdentityType.capitalize()ManagedServiceIdentityType.casefold()ManagedServiceIdentityType.center()ManagedServiceIdentityType.count()ManagedServiceIdentityType.encode()ManagedServiceIdentityType.endswith()ManagedServiceIdentityType.expandtabs()ManagedServiceIdentityType.find()ManagedServiceIdentityType.format()ManagedServiceIdentityType.format_map()ManagedServiceIdentityType.index()ManagedServiceIdentityType.isalnum()ManagedServiceIdentityType.isalpha()ManagedServiceIdentityType.isascii()ManagedServiceIdentityType.isdecimal()ManagedServiceIdentityType.isdigit()ManagedServiceIdentityType.isidentifier()ManagedServiceIdentityType.islower()ManagedServiceIdentityType.isnumeric()ManagedServiceIdentityType.isprintable()ManagedServiceIdentityType.isspace()ManagedServiceIdentityType.istitle()ManagedServiceIdentityType.isupper()ManagedServiceIdentityType.join()ManagedServiceIdentityType.ljust()ManagedServiceIdentityType.lower()ManagedServiceIdentityType.lstrip()ManagedServiceIdentityType.maketrans()ManagedServiceIdentityType.partition()ManagedServiceIdentityType.removeprefix()ManagedServiceIdentityType.removesuffix()ManagedServiceIdentityType.replace()ManagedServiceIdentityType.rfind()ManagedServiceIdentityType.rindex()ManagedServiceIdentityType.rjust()ManagedServiceIdentityType.rpartition()ManagedServiceIdentityType.rsplit()ManagedServiceIdentityType.rstrip()ManagedServiceIdentityType.split()ManagedServiceIdentityType.splitlines()ManagedServiceIdentityType.startswith()ManagedServiceIdentityType.strip()ManagedServiceIdentityType.swapcase()ManagedServiceIdentityType.title()ManagedServiceIdentityType.translate()ManagedServiceIdentityType.upper()ManagedServiceIdentityType.zfill()ManagedServiceIdentityType.NONEManagedServiceIdentityType.SYSTEM_ASSIGNEDManagedServiceIdentityType.SYSTEM_ASSIGNED_USER_ASSIGNEDManagedServiceIdentityType.USER_ASSIGNED
ModelTypeMonitorDatasetContextMonitorDatasetContext.capitalize()MonitorDatasetContext.casefold()MonitorDatasetContext.center()MonitorDatasetContext.count()MonitorDatasetContext.encode()MonitorDatasetContext.endswith()MonitorDatasetContext.expandtabs()MonitorDatasetContext.find()MonitorDatasetContext.format()MonitorDatasetContext.format_map()MonitorDatasetContext.index()MonitorDatasetContext.isalnum()MonitorDatasetContext.isalpha()MonitorDatasetContext.isascii()MonitorDatasetContext.isdecimal()MonitorDatasetContext.isdigit()MonitorDatasetContext.isidentifier()MonitorDatasetContext.islower()MonitorDatasetContext.isnumeric()MonitorDatasetContext.isprintable()MonitorDatasetContext.isspace()MonitorDatasetContext.istitle()MonitorDatasetContext.isupper()MonitorDatasetContext.join()MonitorDatasetContext.ljust()MonitorDatasetContext.lower()MonitorDatasetContext.lstrip()MonitorDatasetContext.maketrans()MonitorDatasetContext.partition()MonitorDatasetContext.removeprefix()MonitorDatasetContext.removesuffix()MonitorDatasetContext.replace()MonitorDatasetContext.rfind()MonitorDatasetContext.rindex()MonitorDatasetContext.rjust()MonitorDatasetContext.rpartition()MonitorDatasetContext.rsplit()MonitorDatasetContext.rstrip()MonitorDatasetContext.split()MonitorDatasetContext.splitlines()MonitorDatasetContext.startswith()MonitorDatasetContext.strip()MonitorDatasetContext.swapcase()MonitorDatasetContext.title()MonitorDatasetContext.translate()MonitorDatasetContext.upper()MonitorDatasetContext.zfill()MonitorDatasetContext.GROUND_TRUTH_DATAMonitorDatasetContext.MODEL_INPUTSMonitorDatasetContext.MODEL_OUTPUTSMonitorDatasetContext.TESTMonitorDatasetContext.TRAININGMonitorDatasetContext.VALIDATION
MonitorFeatureTypeMonitorFeatureType.capitalize()MonitorFeatureType.casefold()MonitorFeatureType.center()MonitorFeatureType.count()MonitorFeatureType.encode()MonitorFeatureType.endswith()MonitorFeatureType.expandtabs()MonitorFeatureType.find()MonitorFeatureType.format()MonitorFeatureType.format_map()MonitorFeatureType.index()MonitorFeatureType.isalnum()MonitorFeatureType.isalpha()MonitorFeatureType.isascii()MonitorFeatureType.isdecimal()MonitorFeatureType.isdigit()MonitorFeatureType.isidentifier()MonitorFeatureType.islower()MonitorFeatureType.isnumeric()MonitorFeatureType.isprintable()MonitorFeatureType.isspace()MonitorFeatureType.istitle()MonitorFeatureType.isupper()MonitorFeatureType.join()MonitorFeatureType.ljust()MonitorFeatureType.lower()MonitorFeatureType.lstrip()MonitorFeatureType.maketrans()MonitorFeatureType.partition()MonitorFeatureType.removeprefix()MonitorFeatureType.removesuffix()MonitorFeatureType.replace()MonitorFeatureType.rfind()MonitorFeatureType.rindex()MonitorFeatureType.rjust()MonitorFeatureType.rpartition()MonitorFeatureType.rsplit()MonitorFeatureType.rstrip()MonitorFeatureType.split()MonitorFeatureType.splitlines()MonitorFeatureType.startswith()MonitorFeatureType.strip()MonitorFeatureType.swapcase()MonitorFeatureType.title()MonitorFeatureType.translate()MonitorFeatureType.upper()MonitorFeatureType.zfill()MonitorFeatureType.ALL_FEATURE_TYPESMonitorFeatureType.CATEGORICALMonitorFeatureType.NOT_APPLICABLEMonitorFeatureType.NUMERICAL
MonitorMetricNameMonitorMetricName.capitalize()MonitorMetricName.casefold()MonitorMetricName.center()MonitorMetricName.count()MonitorMetricName.encode()MonitorMetricName.endswith()MonitorMetricName.expandtabs()MonitorMetricName.find()MonitorMetricName.format()MonitorMetricName.format_map()MonitorMetricName.index()MonitorMetricName.isalnum()MonitorMetricName.isalpha()MonitorMetricName.isascii()MonitorMetricName.isdecimal()MonitorMetricName.isdigit()MonitorMetricName.isidentifier()MonitorMetricName.islower()MonitorMetricName.isnumeric()MonitorMetricName.isprintable()MonitorMetricName.isspace()MonitorMetricName.istitle()MonitorMetricName.isupper()MonitorMetricName.join()MonitorMetricName.ljust()MonitorMetricName.lower()MonitorMetricName.lstrip()MonitorMetricName.maketrans()MonitorMetricName.partition()MonitorMetricName.removeprefix()MonitorMetricName.removesuffix()MonitorMetricName.replace()MonitorMetricName.rfind()MonitorMetricName.rindex()MonitorMetricName.rjust()MonitorMetricName.rpartition()MonitorMetricName.rsplit()MonitorMetricName.rstrip()MonitorMetricName.split()MonitorMetricName.splitlines()MonitorMetricName.startswith()MonitorMetricName.strip()MonitorMetricName.swapcase()MonitorMetricName.title()MonitorMetricName.translate()MonitorMetricName.upper()MonitorMetricName.zfill()MonitorMetricName.ACCURACYMonitorMetricName.DATA_TYPE_ERROR_RATEMonitorMetricName.F1_SCOREMonitorMetricName.JENSEN_SHANNON_DISTANCEMonitorMetricName.MAEMonitorMetricName.MSEMonitorMetricName.NORMALIZED_DISCOUNTED_CUMULATIVE_GAINMonitorMetricName.NORMALIZED_WASSERSTEIN_DISTANCEMonitorMetricName.NULL_VALUE_RATEMonitorMetricName.OUT_OF_BOUND_RATEMonitorMetricName.PEARSONS_CHI_SQUARED_TESTMonitorMetricName.POPULATION_STABILITY_INDEXMonitorMetricName.PRECISIONMonitorMetricName.RECALLMonitorMetricName.RMSEMonitorMetricName.TWO_SAMPLE_KOLMOGOROV_SMIRNOV_TEST
MonitorModelTypeMonitorModelType.capitalize()MonitorModelType.casefold()MonitorModelType.center()MonitorModelType.count()MonitorModelType.encode()MonitorModelType.endswith()MonitorModelType.expandtabs()MonitorModelType.find()MonitorModelType.format()MonitorModelType.format_map()MonitorModelType.index()MonitorModelType.isalnum()MonitorModelType.isalpha()MonitorModelType.isascii()MonitorModelType.isdecimal()MonitorModelType.isdigit()MonitorModelType.isidentifier()MonitorModelType.islower()MonitorModelType.isnumeric()MonitorModelType.isprintable()MonitorModelType.isspace()MonitorModelType.istitle()MonitorModelType.isupper()MonitorModelType.join()MonitorModelType.ljust()MonitorModelType.lower()MonitorModelType.lstrip()MonitorModelType.maketrans()MonitorModelType.partition()MonitorModelType.removeprefix()MonitorModelType.removesuffix()MonitorModelType.replace()MonitorModelType.rfind()MonitorModelType.rindex()MonitorModelType.rjust()MonitorModelType.rpartition()MonitorModelType.rsplit()MonitorModelType.rstrip()MonitorModelType.split()MonitorModelType.splitlines()MonitorModelType.startswith()MonitorModelType.strip()MonitorModelType.swapcase()MonitorModelType.title()MonitorModelType.translate()MonitorModelType.upper()MonitorModelType.zfill()MonitorModelType.CLASSIFICATIONMonitorModelType.REGRESSION
MonitorSignalTypeMonitorSignalType.capitalize()MonitorSignalType.casefold()MonitorSignalType.center()MonitorSignalType.count()MonitorSignalType.encode()MonitorSignalType.endswith()MonitorSignalType.expandtabs()MonitorSignalType.find()MonitorSignalType.format()MonitorSignalType.format_map()MonitorSignalType.index()MonitorSignalType.isalnum()MonitorSignalType.isalpha()MonitorSignalType.isascii()MonitorSignalType.isdecimal()MonitorSignalType.isdigit()MonitorSignalType.isidentifier()MonitorSignalType.islower()MonitorSignalType.isnumeric()MonitorSignalType.isprintable()MonitorSignalType.isspace()MonitorSignalType.istitle()MonitorSignalType.isupper()MonitorSignalType.join()MonitorSignalType.ljust()MonitorSignalType.lower()MonitorSignalType.lstrip()MonitorSignalType.maketrans()MonitorSignalType.partition()MonitorSignalType.removeprefix()MonitorSignalType.removesuffix()MonitorSignalType.replace()MonitorSignalType.rfind()MonitorSignalType.rindex()MonitorSignalType.rjust()MonitorSignalType.rpartition()MonitorSignalType.rsplit()MonitorSignalType.rstrip()MonitorSignalType.split()MonitorSignalType.splitlines()MonitorSignalType.startswith()MonitorSignalType.strip()MonitorSignalType.swapcase()MonitorSignalType.title()MonitorSignalType.translate()MonitorSignalType.upper()MonitorSignalType.zfill()MonitorSignalType.CUSTOMMonitorSignalType.DATA_DRIFTMonitorSignalType.DATA_QUALITYMonitorSignalType.FEATURE_ATTRIBUTION_DRIFTMonitorSignalType.GENERATION_SAFETY_QUALITYMonitorSignalType.GENERATION_TOKEN_STATISTICSMonitorSignalType.MODEL_PERFORMANCEMonitorSignalType.PREDICTION_DRIFT
MonitorTargetTasksMonitorTargetTasks.capitalize()MonitorTargetTasks.casefold()MonitorTargetTasks.center()MonitorTargetTasks.count()MonitorTargetTasks.encode()MonitorTargetTasks.endswith()MonitorTargetTasks.expandtabs()MonitorTargetTasks.find()MonitorTargetTasks.format()MonitorTargetTasks.format_map()MonitorTargetTasks.index()MonitorTargetTasks.isalnum()MonitorTargetTasks.isalpha()MonitorTargetTasks.isascii()MonitorTargetTasks.isdecimal()MonitorTargetTasks.isdigit()MonitorTargetTasks.isidentifier()MonitorTargetTasks.islower()MonitorTargetTasks.isnumeric()MonitorTargetTasks.isprintable()MonitorTargetTasks.isspace()MonitorTargetTasks.istitle()MonitorTargetTasks.isupper()MonitorTargetTasks.join()MonitorTargetTasks.ljust()MonitorTargetTasks.lower()MonitorTargetTasks.lstrip()MonitorTargetTasks.maketrans()MonitorTargetTasks.partition()MonitorTargetTasks.removeprefix()MonitorTargetTasks.removesuffix()MonitorTargetTasks.replace()MonitorTargetTasks.rfind()MonitorTargetTasks.rindex()MonitorTargetTasks.rjust()MonitorTargetTasks.rpartition()MonitorTargetTasks.rsplit()MonitorTargetTasks.rstrip()MonitorTargetTasks.split()MonitorTargetTasks.splitlines()MonitorTargetTasks.startswith()MonitorTargetTasks.strip()MonitorTargetTasks.swapcase()MonitorTargetTasks.title()MonitorTargetTasks.translate()MonitorTargetTasks.upper()MonitorTargetTasks.zfill()MonitorTargetTasks.CLASSIFICATIONMonitorTargetTasks.QUESTION_ANSWERINGMonitorTargetTasks.REGRESSION
NlpLearningRateSchedulerNlpLearningRateScheduler.capitalize()NlpLearningRateScheduler.casefold()NlpLearningRateScheduler.center()NlpLearningRateScheduler.count()NlpLearningRateScheduler.encode()NlpLearningRateScheduler.endswith()NlpLearningRateScheduler.expandtabs()NlpLearningRateScheduler.find()NlpLearningRateScheduler.format()NlpLearningRateScheduler.format_map()NlpLearningRateScheduler.index()NlpLearningRateScheduler.isalnum()NlpLearningRateScheduler.isalpha()NlpLearningRateScheduler.isascii()NlpLearningRateScheduler.isdecimal()NlpLearningRateScheduler.isdigit()NlpLearningRateScheduler.isidentifier()NlpLearningRateScheduler.islower()NlpLearningRateScheduler.isnumeric()NlpLearningRateScheduler.isprintable()NlpLearningRateScheduler.isspace()NlpLearningRateScheduler.istitle()NlpLearningRateScheduler.isupper()NlpLearningRateScheduler.join()NlpLearningRateScheduler.ljust()NlpLearningRateScheduler.lower()NlpLearningRateScheduler.lstrip()NlpLearningRateScheduler.maketrans()NlpLearningRateScheduler.partition()NlpLearningRateScheduler.removeprefix()NlpLearningRateScheduler.removesuffix()NlpLearningRateScheduler.replace()NlpLearningRateScheduler.rfind()NlpLearningRateScheduler.rindex()NlpLearningRateScheduler.rjust()NlpLearningRateScheduler.rpartition()NlpLearningRateScheduler.rsplit()NlpLearningRateScheduler.rstrip()NlpLearningRateScheduler.split()NlpLearningRateScheduler.splitlines()NlpLearningRateScheduler.startswith()NlpLearningRateScheduler.strip()NlpLearningRateScheduler.swapcase()NlpLearningRateScheduler.title()NlpLearningRateScheduler.translate()NlpLearningRateScheduler.upper()NlpLearningRateScheduler.zfill()NlpLearningRateScheduler.CONSTANTNlpLearningRateScheduler.CONSTANT_WITH_WARMUPNlpLearningRateScheduler.COSINENlpLearningRateScheduler.COSINE_WITH_RESTARTSNlpLearningRateScheduler.LINEARNlpLearningRateScheduler.NONENlpLearningRateScheduler.POLYNOMIAL
NlpModelsNlpModels.BERT_BASE_CASEDNlpModels.BERT_BASE_GERMAN_CASEDNlpModels.BERT_BASE_MULTILINGUAL_CASEDNlpModels.BERT_BASE_UNCASEDNlpModels.BERT_LARGE_CASEDNlpModels.BERT_LARGE_UNCASEDNlpModels.DISTILBERT_BASE_CASEDNlpModels.DISTILBERT_BASE_UNCASEDNlpModels.DISTILROBERTA_BASENlpModels.ROBERTA_BASENlpModels.ROBERTA_LARGENlpModels.XLM_ROBERTA_BASENlpModels.XLM_ROBERTA_LARGENlpModels.XLNET_BASE_CASEDNlpModels.XLNET_LARGE_CASED
ParallelTaskTypeScopeStorageAccountTypeStorageAccountType.capitalize()StorageAccountType.casefold()StorageAccountType.center()StorageAccountType.count()StorageAccountType.encode()StorageAccountType.endswith()StorageAccountType.expandtabs()StorageAccountType.find()StorageAccountType.format()StorageAccountType.format_map()StorageAccountType.index()StorageAccountType.isalnum()StorageAccountType.isalpha()StorageAccountType.isascii()StorageAccountType.isdecimal()StorageAccountType.isdigit()StorageAccountType.isidentifier()StorageAccountType.islower()StorageAccountType.isnumeric()StorageAccountType.isprintable()StorageAccountType.isspace()StorageAccountType.istitle()StorageAccountType.isupper()StorageAccountType.join()StorageAccountType.ljust()StorageAccountType.lower()StorageAccountType.lstrip()StorageAccountType.maketrans()StorageAccountType.partition()StorageAccountType.removeprefix()StorageAccountType.removesuffix()StorageAccountType.replace()StorageAccountType.rfind()StorageAccountType.rindex()StorageAccountType.rjust()StorageAccountType.rpartition()StorageAccountType.rsplit()StorageAccountType.rstrip()StorageAccountType.split()StorageAccountType.splitlines()StorageAccountType.startswith()StorageAccountType.strip()StorageAccountType.swapcase()StorageAccountType.title()StorageAccountType.translate()StorageAccountType.upper()StorageAccountType.zfill()StorageAccountType.PREMIUM_LRSStorageAccountType.PREMIUM_ZRSStorageAccountType.STANDARD_GRSStorageAccountType.STANDARD_GZRSStorageAccountType.STANDARD_LRSStorageAccountType.STANDARD_RAGRSStorageAccountType.STANDARD_RAGZRSStorageAccountType.STANDARD_ZRS
TabularTrainingModeTabularTrainingMode.capitalize()TabularTrainingMode.casefold()TabularTrainingMode.center()TabularTrainingMode.count()TabularTrainingMode.encode()TabularTrainingMode.endswith()TabularTrainingMode.expandtabs()TabularTrainingMode.find()TabularTrainingMode.format()TabularTrainingMode.format_map()TabularTrainingMode.index()TabularTrainingMode.isalnum()TabularTrainingMode.isalpha()TabularTrainingMode.isascii()TabularTrainingMode.isdecimal()TabularTrainingMode.isdigit()TabularTrainingMode.isidentifier()TabularTrainingMode.islower()TabularTrainingMode.isnumeric()TabularTrainingMode.isprintable()TabularTrainingMode.isspace()TabularTrainingMode.istitle()TabularTrainingMode.isupper()TabularTrainingMode.join()TabularTrainingMode.ljust()TabularTrainingMode.lower()TabularTrainingMode.lstrip()TabularTrainingMode.maketrans()TabularTrainingMode.partition()TabularTrainingMode.removeprefix()TabularTrainingMode.removesuffix()TabularTrainingMode.replace()TabularTrainingMode.rfind()TabularTrainingMode.rindex()TabularTrainingMode.rjust()TabularTrainingMode.rpartition()TabularTrainingMode.rsplit()TabularTrainingMode.rstrip()TabularTrainingMode.split()TabularTrainingMode.splitlines()TabularTrainingMode.startswith()TabularTrainingMode.strip()TabularTrainingMode.swapcase()TabularTrainingMode.title()TabularTrainingMode.translate()TabularTrainingMode.upper()TabularTrainingMode.zfill()TabularTrainingMode.AUTOTabularTrainingMode.DISTRIBUTEDTabularTrainingMode.NON_DISTRIBUTED
TimeZone: AFGHANISTANA_STANDARD_TIME, ALASKAN_STANDARD_TIME, ALEUTIAN_STANDARD_TIME, ALTAI_STANDARD_TIME, ARABIAN_STANDARD_TIME, ARABIC_STANDARD_TIME, ARAB_STANDARD_TIME, ARGENTINA_STANDARD_TIME, ASTRAKHAN_STANDARD_TIME, ATLANTIC_STANDARD_TIME, AUS_CENTRAL_STANDARD_TIME, AUS_CENTRAL_W_STANDARD_TIME, AUS_EASTERN_STANDARD_TIME, AZERBAIJAN_STANDARD_TIME, AZORES_STANDARD_TIME, BAHIA_STANDARD_TIME, BANGLADESH_STANDARD_TIME, BELARUS_STANDARD_TIME, BOUGAINVILLE_STANDARD_TIME, CANADA_CENTRAL_STANDARD_TIME, CAPE_VERDE_STANDARD_TIME, CAUCASUS_STANDARD_TIME, CENTRAL_AMERICA_STANDARD_TIME, CENTRAL_ASIA_STANDARD_TIME, CENTRAL_BRAZILIAN_STANDARD_TIME, CENTRAL_EUROPEAN_STANDARD_TIME, CENTRAL_EUROPE_STANDARD_TIME, CENTRAL_PACIFIC_STANDARD_TIME, CENTRAL_STANDARD_TIME, CENTRAL_STANDARD_TIME_MEXICO, CEN_AUSTRALIA_STANDARD_TIME, CHATHAM_ISLANDS_STANDARD_TIME, CHINA_STANDARD_TIME, CUBA_STANDARD_TIME, DATELINE_STANDARD_TIME, EASTERN_STANDARD_TIME, EASTERN_STANDARD_TIME_MEXICO, EASTER_ISLAND_STANDARD_TIME, EGYPT_STANDARD_TIME, EKATERINBURG_STANDARD_TIME, E_AFRICA_STANDARD_TIME, E_AUSTRALIAN_STANDARD_TIME, E_EUROPE_STANDARD_TIME, E_SOUTH_AMERICAN_STANDARD_TIME, FIJI_STANDARD_TIME, FLE_STANDARD_TIME, GEORGIAN_STANDARD_TIME, GMT_STANDARD_TIME, GREENLAND_STANDARD_TIME, GREENWICH_STANDARD_TIME, GTB_STANDARD_TIME, HAITI_STANDARD_TIME, HAWAIIAN_STANDARD_TIME, INDIA_STANDARD_TIME, IRAN_STANDARD_TIME, ISRAEL_STANDARD_TIME, JORDAN_STANDARD_TIME, KALININGRAD_STANDARD_TIME, KAMCHATKA_STANDARD_TIME, KOREA_STANDARD_TIME, LIBYA_STANDARD_TIME, LINE_ISLANDS_STANDARD_TIME, LORD_HOWE_STANDARD_TIME, MAGADAN_STANDARD_TIME, MARQUESAS_STANDARD_TIME, MAURITIUS_STANDARD_TIME, MIDDLE_EAST_STANDARD_TIME, MID_ATLANTIC_STANDARD_TIME, MONTEVIDEO_STANDARD_TIME, MOROCCO_STANDARD_TIME, MOUNTAIN_STANDARD_TIME, MOUNTAIN_STANDARD_TIME_MEXICO, MYANMAR_STANDARD_TIME, NAMIBIA_STANDARD_TIME, NEPAL_STANDARD_TIME, NEWFOUNDLAND_STANDARD_TIME, NEW_ZEALAND_STANDARD_TIME, NORFOLK_STANDARD_TIME, NORTH_ASIA_EAST_STANDARD_TIME, NORTH_ASIA_STANDARD_TIME, NORTH_KOREA_STANDARD_TIME, N_CENTRAL_ASIA_STANDARD_TIME, PACIFIC_SA_STANDARD_TIME, PACIFIC_STANDARD_TIME, PACIFIC_STANDARD_TIME_MEXICO, PAKISTAN_STANDARD_TIME, PARAGUAY_STANDARD_TIME, ROMANCE_STANDARD_TIME, RUSSIAN_STANDARD_TIME, RUSSIA_TIME_ZONE_10, RUSSIA_TIME_ZONE_11, RUSSIA_TIME_ZONE_3, SAINT_PIERRE_STANDARD_TIME, SAKHALIN_STANDARD_TIME, SAMOA_STANDARD_TIME, SA_EASTERN_STANDARD_TIME, SA_PACIFIC_STANDARD_TIME, SA_WESTERN_STANDARD_TIME, SE_ASIA_STANDARD_TIME, SINGAPORE_STANDARD_TIME, SOUTH_AFRICA_STANDARD_TIME, SRI_LANKA_STANDARD_TIME, SYRIA_STANDARD_TIME, TAIPEI_STANDARD_TIME, TASMANIA_STANDARD_TIME, TOCANTINS_STANDARD_TIME, TOKYO_STANDARD_TIME, TOMSK_STANDARD_TIME, TONGA__STANDARD_TIME, TRANSBAIKAL_STANDARD_TIME, TURKEY_STANDARD_TIME, TURKS_AND_CAICOS_STANDARD_TIME, ULAANBAATAR_STANDARD_TIME, US_EASTERN_STANDARD_TIME, US_MOUNTAIN_STANDARD_TIME, UTC, UTC_02, UTC_08, UTC_09, UTC_11, UTC_12, VENEZUELA_STANDARD_TIME, VLADIVOSTOK_STANDARD_TIME, WEST_ASIA_STANDARD_TIME, WEST_BANK_STANDARD_TIME, WEST_PACIFIC_STANDARD_TIME, W_AUSTRALIA_STANDARD_TIME, W_CENTEAL_AFRICA_STANDARD_TIME, W_EUROPE_STANDARD_TIME, W_MONGOLIA_STANDARD_TIME, YAKUTSK_STANDARD_TIME
WorkspaceKind
- azure.ai.ml.data_transfer package
DataTransferCopy: dump(); base_path, component, creation_context, id, inputs, log_files, name, outputs, status, studio_url, type
DataTransferCopyComponent: dump(); base_path, creation_context, data_copy_mode, display_name, id, inputs, is_deterministic, outputs, task, type, version
DataTransferExport: dump(); base_path, component, creation_context, id, inputs, log_files, name, outputs, sink, status, studio_url, type
DataTransferExportComponent: dump(); base_path, creation_context, display_name, id, inputs, is_deterministic, outputs, task, type, version
DataTransferImport: dump(); base_path, component, creation_context, id, inputs, log_files, name, outputs, status, studio_url, type
DataTransferImportComponent: dump(); base_path, creation_context, display_name, id, inputs, is_deterministic, outputs, task, type, version
Database, FileSystem, copy_data(), export_data(), import_data()
- azure.ai.ml.dsl package
- azure.ai.ml.entities package
APIKeyConnection: dump(); api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
AadCredentialConfiguration, AccessKeyConfiguration, AccountKeyConfiguration, AlertNotification, AmlCompute, AmlComputeNodeInfo, AmlComputeSshSettings, AmlTokenConfiguration, ApiKeyConfiguration, Asset, AssignedUserConfiguration, AutoPauseSettings, AutoScaleSettings, AzureAISearchConfig
AzureAISearchConnection: dump(); api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
AzureAIServicesConnection: dump(); ai_services_resource_id, api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
AzureBlobDatastore
AzureBlobStoreConnection: dump(); account_name, api_base, azure_endpoint, base_path, container_name, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
AzureContentSafetyConnection: dump(); api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
AzureDataLakeGen1Datastore, AzureDataLakeGen2Datastore, AzureFileDatastore, AzureMLBatchInferencingServer, AzureMLOnlineInferencingServer
AzureOpenAIConnection: dump(); api_base, api_key, api_version, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, open_ai_resource_id, tags, target, type, url
AzureOpenAIDeployment: as_dict(); connection_name, id, model_name, model_version, name, system_data, target_url
AzureSpeechServicesConnection: dump(); api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
BaseEnvironment, BaselineDataRange, BatchDeployment, BatchEndpoint, BatchJob, BatchRetrySettings, BuildContext, CapabilityHost
CapabilityHostKind: AGENTS
CategoricalDriftMetrics, CertificateConfiguration, CodeConfiguration
Command: dump(), set_limits(), set_queue_settings(), set_resources(), sweep(); base_path, code, command, component, creation_context, distribution, id, identity, inputs, log_files, name, outputs, parameters, queue_settings, resources, services, status, studio_url, type
CommandComponent: dump(); base_path, creation_context, display_name, distribution, id, inputs, instance_count, is_deterministic, outputs, resources, type, version
CommandJob, CommandJobLimits, Component, Compute, ComputeConfiguration
ComputeInstance: dump(); base_path, created_on, creation_context, id, last_operation, os_image_metadata, provisioning_errors, provisioning_state, services, state, type
ComputeInstanceSshSettings
ComputePowerAction: START, STOP
ComputeRuntime, ComputeSchedules, ComputeStartStopSchedule, ContainerRegistryCredential
CreatedByType: APPLICATION, KEY, MANAGED_IDENTITY, USER
CronTrigger, CustomApplications, CustomInferencingServer
CustomModelFineTuningJob: dump(); base_path, creation_context, hyperparameters, id, inputs, log_files, model, model_provider, outputs, queue_settings, resources, status, studio_url, task, training_data, type, validation_data
CustomMonitoringMetricThreshold, CustomMonitoringSignal, CustomerManagedKey, Data, DataAsset
DataAvailabilityStatus: COMPLETE, INCOMPLETE, NONE, PENDING
DataCollector, DataColumn
DataColumnType: BINARY, BOOLEAN, DATETIME, DOUBLE, FLOAT, INTEGER, LONG, STRING
DataDriftMetricThreshold, DataDriftSignal, DataImport, DataQualityMetricThreshold, DataQualityMetricsCategorical, DataQualityMetricsNumerical, DataQualitySignal, DataSegment, Datastore, DefaultActionType, DefaultScaleSettings, Deployment, DeploymentCollection, DiagnoseRequestProperties, DiagnoseResponseResult, DiagnoseResponseResultValue, DiagnoseResult, DiagnoseWorkspaceParameters, Endpoint, EndpointAadToken, EndpointAuthKeys, EndpointAuthToken, EndpointConnection, EndpointsSettings, Environment, FADProductionData, Feature, FeatureAttributionDriftMetricThreshold, FeatureAttributionDriftSignal, FeatureSet, FeatureSetBackfillMetadata, FeatureSetBackfillRequest, FeatureSetMaterializationMetadata, FeatureSetSpecification, FeatureStore, FeatureStoreEntity, FeatureStoreSettings, FeatureWindow, FixedInputData, FqdnDestination, GenerationSafetyQualityMonitoringMetricThreshold, GenerationSafetyQualitySignal, GenerationTokenStatisticsMonitorMetricThreshold, GenerationTokenStatisticsSignal, GitSource, Hub, IPRule, IdentityConfiguration, ImageMetadata, ImageSettings, ImportDataSchedule, Index, IndexDataSource, IndexModelConfiguration, InputPort, IntellectualProperty, IsolationMode, Job, JobResourceConfiguration, JobResources, JobSchedule, JobService, JupyterLabJobService, KubernetesCompute
KubernetesOnlineDeployment: dump(); base_path, code_path, creation_context, id, provisioning_state, scoring_script, type
KubernetesOnlineEndpoint, LlmData, LocalSource, ManagedIdentityConfiguration, ManagedNetwork, ManagedNetworkProvisionStatus, ManagedOnlineDeployment, ManagedOnlineEndpoint
MarketplacePlan: as_dict(); offer_id, plan_id, publisher_id, term_id
MarketplaceSubscription: as_dict(); id, marketplace_plan, model_id, name, provisioning_state, status, system_data
MaterializationComputeResource, MaterializationSettings, MaterializationStore
MaterializationType: BACKFILL_MATERIALIZATION, RECURRENT_MATERIALIZATION
MicrosoftOneLakeConnection: dump(); api_base, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
Model, ModelBatchDeployment, ModelBatchDeploymentSettings, ModelConfiguration, ModelPackage, ModelPackageInput, ModelPerformanceClassificationThresholds, ModelPerformanceMetricThreshold, ModelPerformanceRegressionThresholds, ModelPerformanceSignal, MonitorDefinition, MonitorFeatureFilter, MonitorInputData, MonitorSchedule, MonitoringTarget, NetworkAcls, NetworkSettings, NoneCredentialConfiguration, NotebookAccessKeys, Notification, NumericalDriftMetrics, OneLakeArtifact, OneLakeConnectionArtifact, OneLakeDatastore, OnlineDeployment, OnlineEndpoint, OnlineRequestSettings, OnlineScaleSettings
OpenAIConnection: dump(); api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
OutboundRule, PackageInputPathId, PackageInputPathUrl, PackageInputPathVersion
Parallel: dump(), set_resources(); base_path, component, creation_context, id, identity, inputs, log_files, name, outputs, resources, retry_settings, status, studio_url, task, type
ParallelComponent: dump(); base_path, code, creation_context, display_name, environment, id, inputs, instance_count, is_deterministic, outputs, resources, retry_settings, task, type, version
ParallelTask, ParameterizedCommand, PatTokenConfiguration
Pipeline: dump(); base_path, component, creation_context, id, inputs, log_files, name, outputs, settings, status, studio_url, type
PipelineComponent, PipelineComponentBatchDeployment, PipelineJob, PipelineJobSettings
PredictionDriftMetricThreshold, PredictionDriftSignal, PrivateEndpoint, PrivateEndpointDestination, ProbeSettings, ProductionData, Project, QueueSettings, RecurrencePattern, RecurrenceTrigger, ReferenceData, Registry, RegistryRegionDetails, RequestLogging, Resource, ResourceConfiguration, ResourceRequirementsSettings, ResourceSettings, RetrySettings, Route, SasTokenConfiguration, Schedule, ScheduleState, ScheduleTriggerResult, ScriptReference
SerpConnection: dump(); api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
ServerlessComputeSettings
ServerlessConnection: dump(); api_base, api_key, azure_endpoint, base_path, creation_context, credentials, endpoint, id, is_shared, metadata, tags, target, type, url
ServerlessEndpoint: as_dict(); auth_mode, description, headers, id, location, model_id, name, properties, provisioning_state, scoring_uri, system_data, tags
ServerlessSparkCompute, ServiceInstance, ServicePrincipalConfiguration, ServiceTagDestination, SetupScripts
Spark: dump(); CODE_ID_RE_PATTERN, base_path, code, component, creation_context, entry, id, identity, inputs, log_files, name, outputs, resources, status, studio_url, type
SparkComponentSparkComponent.dump()SparkComponent.CODE_ID_RE_PATTERNSparkComponent.base_pathSparkComponent.creation_contextSparkComponent.display_nameSparkComponent.entrySparkComponent.environmentSparkComponent.idSparkComponent.inputsSparkComponent.is_deterministicSparkComponent.outputsSparkComponent.typeSparkComponent.version
SparkJobSparkJob.dump()SparkJob.filter_conf_fields()SparkJob.CODE_ID_RE_PATTERNSparkJob.base_pathSparkJob.creation_contextSparkJob.entrySparkJob.environmentSparkJob.idSparkJob.identitySparkJob.inputsSparkJob.log_filesSparkJob.outputsSparkJob.resourcesSparkJob.statusSparkJob.studio_urlSparkJob.type
SparkJobEntrySparkJobEntryTypeSparkResourceConfigurationSshJobServiceStaticInputDataSweepSweep.clear()Sweep.copy()Sweep.dump()Sweep.fromkeys()Sweep.get()Sweep.items()Sweep.keys()Sweep.pop()Sweep.popitem()Sweep.set_limits()Sweep.set_objective()Sweep.set_resources()Sweep.setdefault()Sweep.update()Sweep.values()Sweep.base_pathSweep.creation_contextSweep.early_terminationSweep.idSweep.inputsSweep.limitsSweep.log_filesSweep.nameSweep.outputsSweep.resourcesSweep.sampling_algorithmSweep.search_spaceSweep.statusSweep.studio_urlSweep.trialSweep.type
SynapseSparkComputeSystemCreatedAcrAccountSystemCreatedStorageAccountSystemDataTargetUtilizationScaleSettingsTensorBoardJobServiceTrailingInputDataTritonInferencingServerUnsupportedComputeUsageUsageNameUsageUnitUsageUnit.capitalize()UsageUnit.casefold()UsageUnit.center()UsageUnit.count()UsageUnit.encode()UsageUnit.endswith()UsageUnit.expandtabs()UsageUnit.find()UsageUnit.format()UsageUnit.format_map()UsageUnit.index()UsageUnit.isalnum()UsageUnit.isalpha()UsageUnit.isascii()UsageUnit.isdecimal()UsageUnit.isdigit()UsageUnit.isidentifier()UsageUnit.islower()UsageUnit.isnumeric()UsageUnit.isprintable()UsageUnit.isspace()UsageUnit.istitle()UsageUnit.isupper()UsageUnit.join()UsageUnit.ljust()UsageUnit.lower()UsageUnit.lstrip()UsageUnit.maketrans()UsageUnit.partition()UsageUnit.removeprefix()UsageUnit.removesuffix()UsageUnit.replace()UsageUnit.rfind()UsageUnit.rindex()UsageUnit.rjust()UsageUnit.rpartition()UsageUnit.rsplit()UsageUnit.rstrip()UsageUnit.split()UsageUnit.splitlines()UsageUnit.startswith()UsageUnit.strip()UsageUnit.swapcase()UsageUnit.title()UsageUnit.translate()UsageUnit.upper()UsageUnit.zfill()UsageUnit.COUNT
UserIdentityConfigurationUsernamePasswordConfigurationValidationResultVirtualMachineComputeVirtualMachineCompute.dump()VirtualMachineCompute.base_pathVirtualMachineCompute.created_onVirtualMachineCompute.creation_contextVirtualMachineCompute.idVirtualMachineCompute.provisioning_errorsVirtualMachineCompute.provisioning_stateVirtualMachineCompute.public_key_dataVirtualMachineCompute.type
VirtualMachineSshSettingsVmSizeVolumeSettingsVsCodeJobServiceWorkspaceWorkspaceConnectionWorkspaceConnection.dump()WorkspaceConnection.api_baseWorkspaceConnection.azure_endpointWorkspaceConnection.base_pathWorkspaceConnection.creation_contextWorkspaceConnection.credentialsWorkspaceConnection.endpointWorkspaceConnection.idWorkspaceConnection.is_sharedWorkspaceConnection.metadataWorkspaceConnection.tagsWorkspaceConnection.targetWorkspaceConnection.typeWorkspaceConnection.url
WorkspaceKeysWorkspaceModelReference
- azure.ai.ml.finetuning package
FineTuningTaskTypeFineTuningTaskType.CHAT_COMPLETIONFineTuningTaskType.IMAGE_CLASSIFICATIONFineTuningTaskType.IMAGE_INSTANCE_SEGMENTATIONFineTuningTaskType.IMAGE_OBJECT_DETECTIONFineTuningTaskType.QUESTION_ANSWERINGFineTuningTaskType.TEXT_CLASSIFICATIONFineTuningTaskType.TEXT_COMPLETIONFineTuningTaskType.TEXT_SUMMARIZATIONFineTuningTaskType.TEXT_TRANSLATIONFineTuningTaskType.TOKEN_CLASSIFICATIONFineTuningTaskType.VIDEO_MULTI_OBJECT_TRACKING
create_finetuning_job()
- azure.ai.ml.identity package
- azure.ai.ml.model_customization package
- azure.ai.ml.operations package
AzureOpenAIDeploymentOperationsBatchDeploymentOperationsBatchEndpointOperationsCapabilityHostsOperationsComponentOperationsComputeOperationsComputeOperations.begin_attach()ComputeOperations.begin_create_or_update()ComputeOperations.begin_delete()ComputeOperations.begin_restart()ComputeOperations.begin_start()ComputeOperations.begin_stop()ComputeOperations.begin_update()ComputeOperations.enable_sso()ComputeOperations.get()ComputeOperations.list()ComputeOperations.list_nodes()ComputeOperations.list_sizes()ComputeOperations.list_usage()
DataOperationsDatastoreOperationsEnvironmentOperationsEvaluatorOperationsFeatureSetOperationsFeatureSetOperations.archive()FeatureSetOperations.begin_backfill()FeatureSetOperations.begin_create_or_update()FeatureSetOperations.get()FeatureSetOperations.get_feature()FeatureSetOperations.list()FeatureSetOperations.list_features()FeatureSetOperations.list_materialization_operations()FeatureSetOperations.restore()
FeatureStoreEntityOperationsFeatureStoreOperationsIndexOperationsJobOperationsMarketplaceSubscriptionOperationsModelOperationsOnlineDeploymentOperationsOnlineEndpointOperationsRegistryOperationsScheduleOperationsServerlessEndpointOperationsWorkspaceConnectionsOperationsWorkspaceOperationsWorkspaceOperations.begin_create()WorkspaceOperations.begin_delete()WorkspaceOperations.begin_diagnose()WorkspaceOperations.begin_provision_network()WorkspaceOperations.begin_sync_keys()WorkspaceOperations.begin_update()WorkspaceOperations.get()WorkspaceOperations.get_keys()WorkspaceOperations.list()
WorkspaceOutboundRuleOperations
- azure.ai.ml.parallel package
- azure.ai.ml.sweep package
BanditPolicyBayesianSamplingAlgorithmChoiceGridSamplingAlgorithmLogNormalLogUniformMedianStoppingPolicyNormalObjectiveQLogNormalQLogUniformQNormalQUniformRandintRandomSamplingAlgorithmSamplingAlgorithmSweepJobSweepJob.dump()SweepJob.set_limits()SweepJob.set_objective()SweepJob.set_resources()SweepJob.base_pathSweepJob.creation_contextSweepJob.early_terminationSweepJob.idSweepJob.inputsSweepJob.limitsSweepJob.log_filesSweepJob.outputsSweepJob.resourcesSweepJob.sampling_algorithmSweepJob.statusSweepJob.studio_urlSweepJob.type
SweepJobLimitsTruncationSelectionPolicyUniform
Submodules¶
azure.ai.ml.exceptions module¶
Contains the exception module for the Azure Machine Learning SDK v2.
This includes the enums and classes used for exceptions.
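Every exception documented below carries the same four pieces of state: a user-facing message, a scrubbed no_personal_data_message for telemetry, a target naming the element that failed, and an error_category. As an illustration of that pattern only (a stand-alone sketch, not the SDK's actual implementation), the shape looks roughly like this:

```python
class MlStyleError(Exception):
    """Illustrative stand-in for the shape shared by azure.ai.ml.exceptions classes."""

    def __init__(self, message, no_personal_data_message,
                 target="Unknown", error_category="Unknown"):
        super().__init__(message)
        self._no_personal_data_message = no_personal_data_message
        self._target = target
        self._error_category = error_category

    @property
    def no_personal_data_message(self):
        # Message with personal data (paths, names) removed; safe for telemetry.
        return self._no_personal_data_message

    @property
    def target(self):
        # Name of the element that caused the exception to be thrown.
        return self._target

    @property
    def error_category(self):
        # Error category; defaults to "Unknown".
        return self._error_category


try:
    raise MlStyleError(
        "Asset '/home/alice/data.csv' could not be loaded",  # user sees this
        "Asset could not be loaded",                         # telemetry sees this
        target="Asset",
        error_category="UserError",
    )
except MlStyleError as exc:
    print(exc.no_personal_data_message)
```

Catching the real exceptions follows the same split: a handler can log exc.no_personal_data_message while showing str(exc) to the user.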
- exception azure.ai.ml.exceptions.AssetException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
Class for all exceptions related to Assets.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.AssetPathException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
Class for the exception raised when an attempt is made to update the path of an existing asset. Asset paths are immutable.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.CannotSetAttributeError(object_name)[source]¶
Exception raised when a user tries to set attributes of inputs/outputs.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.CloudArtifactsNotSupportedError(endpoint_name: str, invalid_artifact: str, deployment_name: str | None = None, error_category='UserError')[source]¶
Exception raised when remote cloud artifacts are used with local endpoints.
Local endpoints only support local artifacts.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.ComponentException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
Class for all exceptions related to Components.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.DeploymentException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
Class for all exceptions related to Deployments.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.DockerEngineNotAvailableError(error_category='Unknown')[source]¶
Exception raised when the local Docker Engine is unavailable for a local operation.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.EmptyDirectoryError(message: str, no_personal_data_message: str, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown')[source]¶
Exception raised when an empty directory is provided as input for an I/O operation.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.InvalidLocalEndpointError(message: str, no_personal_data_message: str, error_category='UserError')[source]¶
Exception raised when local endpoint is invalid.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.InvalidVSCodeRequestError(error_category='UserError', msg=None)[source]¶
Exception raised when VS Code debug is invoked with a remote endpoint.
VS Code debug is only supported for local endpoints.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.JobException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
Class for all exceptions related to Jobs.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.JobParsingError(error_category, no_personal_data_message, message, *args, **kwargs)[source]¶
Exception raised when the job data returned by MFE cannot be parsed.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.KeywordError(message, no_personal_data_message=None)[source]¶
Base class for all keyword errors.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.LocalDeploymentGPUNotAvailable(error_category='UserError', msg=None)[source]¶
Exception raised when local_enable_gpu is set and no NVIDIA GPU is available.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.LocalEndpointImageBuildError(error: str | Exception, error_category='Unknown')[source]¶
Exception raised when local endpoint’s Docker image build is unsuccessful.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.LocalEndpointInFailedStateError(endpoint_name, deployment_name=None, error_category='Unknown')[source]¶
Exception raised when local endpoint is in Failed state.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.LocalEndpointNotFoundError(endpoint_name: str, deployment_name: str | None = None, error_category='UserError')[source]¶
Exception raised if local endpoint cannot be found.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.MissingPositionalArgsError(func_name, missing_args)[source]¶
Exception raised when a positional parameter is missing in dynamic functions.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.MlException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
The base class for all exceptions raised in AzureML SDK code base. If there is a need to define a custom exception type, that custom exception type should extend from this class.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
error (Exception) – The original exception if any.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
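As the deprecation notes throughout this module say, raise_with_traceback is superseded by Python 3 exception chaining. A stand-alone sketch of the recommended raise … from … replacement (the function and file name here are hypothetical):

```python
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as original:
        # `from original` preserves the original exception and its traceback
        # on __cause__, which is what raise_with_traceback did manually.
        raise ValueError(f"could not load config: {path}") from original


try:
    load_config("/nonexistent/config.yml")
except ValueError as exc:
    print(type(exc.__cause__).__name__)
```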
- exception azure.ai.ml.exceptions.ModelException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
Class for all exceptions related to Models.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.MultipleLocalDeploymentsFoundError(endpoint_name: str, error_category='Unknown')[source]¶
Exception raised when no deployment name is specified for local endpoint even though multiple deployments exist.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.MultipleValueError(func_name, keyword)[source]¶
Exception raised when multiple values are given for a keyword parameter in dynamic functions.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.ParamValueNotExistsError(func_name, keywords)[source]¶
Exception raised when items in non_pipeline_inputs are not found among the keyword parameters of dynamic functions.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.PipelineChildJobError(job_id: str, command: str = 'parse', prompt_studio_ui: bool = False)[source]¶
Exception raised when an operation is not supported on a pipeline child job.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- ERROR_MESSAGE_TEMPLATE = 'az ml job {command} is not supported on pipeline child job, {prompt_message}.'¶
- PROMPT_PARENT_MESSAGE = 'please use this command on pipeline parent job'¶
- PROMPT_STUDIO_UI_MESSAGE = 'please go to studio UI to do related actions{url}'¶
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
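The PipelineChildJobError message templates above are plain str.format templates; to illustrate how they compose (the command value here is hypothetical):

```python
# Templates as documented on PipelineChildJobError.
ERROR_MESSAGE_TEMPLATE = "az ml job {command} is not supported on pipeline child job, {prompt_message}."
PROMPT_PARENT_MESSAGE = "please use this command on pipeline parent job"

msg = ERROR_MESSAGE_TEMPLATE.format(command="download", prompt_message=PROMPT_PARENT_MESSAGE)
print(msg)
```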
- exception azure.ai.ml.exceptions.RequiredLocalArtifactsNotFoundError(endpoint_name: str, required_artifact: str, required_artifact_type: str, deployment_name: str | None = None, error_category='UserError')[source]¶
Exception raised when a required local artifact is not provided for a local endpoint.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.ScheduleException(message: str, no_personal_data_message: str, *args, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'Unknown', **kwargs)[source]¶
Class for all exceptions related to Job Schedules.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to Unknown.
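The message / no_personal_data_message split described above can be illustrated with a hypothetical stand-in class (the real class is azure.ai.ml.exceptions.ScheduleException and also carries target and error_category, not shown here):

```python
# Hypothetical stand-in for azure.ai.ml.exceptions.ScheduleException,
# showing only the message / no_personal_data_message split.
class ScheduleExceptionSketch(Exception):
    def __init__(self, message: str, no_personal_data_message: str):
        super().__init__(message)
        self._no_personal_data_message = no_personal_data_message

    @property
    def no_personal_data_message(self) -> str:
        # Telemetry-safe variant of the user-facing message.
        return self._no_personal_data_message

try:
    raise ScheduleExceptionSketch(
        message="Schedule 'nightly-train' has an invalid cron expression",
        no_personal_data_message="Schedule has an invalid cron expression",
    )
except ScheduleExceptionSketch as exc:
    safe_message = exc.no_personal_data_message  # sent to telemetry logs
```

The schedule name ('nightly-train' here, a made-up example) appears only in the user-facing message, never in the telemetry copy.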
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.TooManyPositionalArgsError(func_name, min_number, max_number, given_number)[source]¶
Exception raised when too many positional arguments are provided in dynamic functions.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.UnexpectedAttributeError(keyword, keywords=None)[source]¶
Exception raised when an unexpected keyword is accessed as an attribute, e.g. inputs.invalid_key.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- name¶
attribute name
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- obj¶
object
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.UnexpectedKeywordError(func_name, keyword, keywords=None)[source]¶
Exception raised when an unexpected keyword parameter is provided in dynamic functions.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.UnsupportedOperationError(operation_name)[source]¶
Exception raised when the specified operation is not supported.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.UnsupportedParameterKindError(func_name, parameter_kind=None)[source]¶
Exception raised when a user tries to set attributes of inputs/outputs.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.UserErrorException(message, no_personal_data_message=None, error_category='UserError', target: ErrorTarget = 'Pipeline')[source]¶
Exception raised when invalid or unsupported inputs are provided.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.VSCodeCommandNotFound(output=None, error_category='UserError')[source]¶
Exception raised when a VS Code instance cannot be instantiated.
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- exception azure.ai.ml.exceptions.ValidationException(message: str, no_personal_data_message: str, *args, error_type: ValidationErrorType = ValidationErrorType.GENERIC, target: ErrorTarget = 'Unknown', error_category: ErrorCategory = 'UserError', **kwargs)[source]¶
Class for all exceptions raised as part of client-side schema validation.
- Parameters:
message (str) – A message describing the error. This is the error message the user will see.
no_personal_data_message (str) – The error message without any personal data. This will be pushed to telemetry logs.
error_type (ValidationErrorType) – The error type, chosen from one of the values of ValidationErrorType enum class.
target (ErrorTarget) – The name of the element that caused the exception to be thrown.
error_category (ErrorCategory) – The error category, defaults to UserError.
error (Exception) – The original exception if any.
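A hypothetical stand-in sketching how a ValidationException pairs a user-facing message with an error_type drawn from the ValidationErrorType values documented below (the real classes live in azure.ai.ml.exceptions):

```python
from enum import Enum

# Stand-ins for azure.ai.ml.exceptions.ValidationErrorType and
# ValidationException; only the pieces needed for the illustration.
class ValidationErrorTypeSketch(str, Enum):
    MISSING_FIELD = 'MISSING FIELD'
    GENERIC = 'GENERIC'

class ValidationExceptionSketch(Exception):
    def __init__(self, message, no_personal_data_message,
                 error_type=ValidationErrorTypeSketch.GENERIC):
        super().__init__(message)
        self.no_personal_data_message = no_personal_data_message
        self.error_type = error_type

# 'compute' and 'train-step' are made-up names for illustration only.
exc = ValidationExceptionSketch(
    message="Field 'compute' is required for job 'train-step'",
    no_personal_data_message="A required field is missing",
    error_type=ValidationErrorTypeSketch.MISSING_FIELD,
)
```

Choosing the most specific error_type lets downstream formatting (raise_error.py in the real SDK) produce a detailed message for the user.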
- add_note()¶
Exception.add_note(note) – add a note to the exception
- raise_with_traceback() None¶
Raise the exception with the existing traceback.
Deprecated since version 1.22.0: This method is deprecated as we don’t support Python 2 anymore. Use raise/from instead.
- with_traceback()¶
Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
- args¶
- property error_category¶
Return the error category.
- Returns:
The error category.
- Return type:
- property error_type¶
Return the error type.
- Returns:
The error type.
- Return type:
- exc_value: BaseException | None¶
- inner_exception: BaseException | None¶
- property no_personal_data_message¶
Return the error message with no personal data.
- Returns:
No personal data error message.
- Return type:
- property target¶
Return the error target.
- Returns:
The error target.
- Return type:
- class azure.ai.ml.exceptions.ErrorCategory[source]¶
- SYSTEM_ERROR = 'SystemError'¶
- UNKNOWN = 'Unknown'¶
- USER_ERROR = 'UserError'¶
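The category constants above are plain strings; a stdlib string-enum sketch shows one common way such constants behave (the actual ErrorCategory implementation may differ):

```python
from enum import Enum

# Stdlib string-enum sketch of the ErrorCategory constants; the real
# azure.ai.ml.exceptions.ErrorCategory may be implemented differently.
class ErrorCategorySketch(str, Enum):
    SYSTEM_ERROR = 'SystemError'
    UNKNOWN = 'Unknown'
    USER_ERROR = 'UserError'

# Because the enum subclasses str, members compare equal to plain strings,
# which is convenient when the value flows into logs or telemetry fields.
category = ErrorCategorySketch.USER_ERROR
```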
- class azure.ai.ml.exceptions.ErrorTarget[source]¶
- ARM_DEPLOYMENT = 'ArmDeployment'¶
- ARM_RESOURCE = 'ArmResource'¶
- ARTIFACT = 'Artifact'¶
- ASSET = 'Asset'¶
- AUTOML = 'AutoML'¶
- BATCH_DEPLOYMENT = 'BatchDeployment'¶
- BATCH_ENDPOINT = 'BatchEndpoint'¶
- BLOB_DATASTORE = 'BlobDatastore'¶
- CAPABILITY_HOST = 'CapabilityHost'¶
- CODE = 'Code'¶
- COMMAND_JOB = 'CommandJob'¶
- COMPONENT = 'Component'¶
- COMPUTE = 'Compute'¶
- DATA = 'Data'¶
- DATASTORE = 'Datastore'¶
- DATA_TRANSFER_JOB = 'DataTransferJob'¶
- DEPLOYMENT = 'Deployment'¶
- ENDPOINT = 'Endpoint'¶
- ENVIRONMENT = 'Environment'¶
- FEATURE_SET = 'FeatureSet'¶
- FEATURE_STORE_ENTITY = 'FeatureStoreEntity'¶
- FILE_DATASTORE = 'FileDatastore'¶
- FINETUNING = 'FineTuning'¶
- GEN1_DATASTORE = 'Gen1Datastore'¶
- GEN2_DATASTORE = 'Gen2Datastore'¶
- GENERAL = 'General'¶
- IDENTITY = 'Identity'¶
- INDEX = 'Index'¶
- JOB = 'Job'¶
- LOCAL_ENDPOINT = 'LocalEndpoint'¶
- LOCAL_JOB = 'LocalJob'¶
- MODEL = 'Model'¶
- MODEL_MONITORING = 'ModelMonitoring'¶
- ONLINE_DEPLOYMENT = 'OnlineDeployment'¶
- ONLINE_ENDPOINT = 'OnlineEndpoint'¶
- PIPELINE = 'Pipeline'¶
- REGISTRY = 'Registry'¶
- SCHEDULE = 'Schedule'¶
- SERVERLESS_ENDPOINT = 'ServerlessEndpoint'¶
- SPARK_JOB = 'SparkJob'¶
- SWEEP_JOB = 'SweepJob'¶
- UNKNOWN = 'Unknown'¶
- WORKSPACE = 'Workspace'¶
- class azure.ai.ml.exceptions.ValidationErrorType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Error types to be specified when using ValidationException class. Types are then used in raise_error.py to format a detailed error message for users.
When using ValidationException, specify the type that best describes the nature of the error being captured. If no type fits, add a new enum here and update raise_error.py to handle it.
Types of validation errors:
INVALID_VALUE -> One or more schema fields are invalid (e.g. incorrect type or format)
UNKNOWN_FIELD -> At least one unrecognized schema parameter is specified
MISSING_FIELD -> At least one required schema parameter is missing
FILE_OR_FOLDER_NOT_FOUND -> One or more file or folder paths do not exist
CANNOT_SERIALIZE -> Same as “Cannot dump”. One or more fields cannot be serialized by marshmallow.
CANNOT_PARSE -> YAML file cannot be parsed
RESOURCE_NOT_FOUND -> Resource could not be found
GENERIC -> Undefined placeholder. Avoid using.
- CANNOT_PARSE = 'CANNOT PARSE'¶
- CANNOT_SERIALIZE = 'CANNOT DUMP'¶
- FILE_OR_FOLDER_NOT_FOUND = 'FILE OR FOLDER NOT FOUND'¶
- GENERIC = 'GENERIC'¶
- INVALID_VALUE = 'INVALID VALUE'¶
- MISSING_FIELD = 'MISSING FIELD'¶
- RESOURCE_NOT_FOUND = 'RESOURCE NOT FOUND'¶
- UNKNOWN_FIELD = 'UNKNOWN FIELD'¶
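A hypothetical lookup from each ValidationErrorType value to a user-facing hint, mirroring the descriptions above (the SDK's actual raise_error.py formatting is not reproduced here):

```python
# Hypothetical mapping of ValidationErrorType values to remediation hints;
# keys match the enum values listed above.
HINTS = {
    'INVALID VALUE': 'One or more schema fields are invalid.',
    'UNKNOWN FIELD': 'At least one unrecognized schema parameter is specified.',
    'MISSING FIELD': 'At least one required schema parameter is missing.',
    'FILE OR FOLDER NOT FOUND': 'One or more file or folder paths do not exist.',
    'CANNOT DUMP': 'One or more fields cannot be serialized.',
    'CANNOT PARSE': 'The YAML file cannot be parsed.',
    'RESOURCE NOT FOUND': 'The resource could not be found.',
}
print(HINTS['MISSING FIELD'])
```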