azure.ai.ml.sweep package¶
- class azure.ai.ml.sweep.BanditPolicy(*, delay_evaluation: int = 0, evaluation_interval: int = 0, slack_amount: float = 0, slack_factor: float = 0)[source]¶
Defines an early termination policy based on slack criteria and a frequency and delay interval for evaluation.
- Keyword Arguments:
delay_evaluation (int) – Number of intervals by which to delay the first evaluation. Defaults to 0.
evaluation_interval (int) – Interval (number of runs) between policy evaluations. Defaults to 0.
slack_amount (float) – Absolute distance allowed from the best performing run. Defaults to 0.
slack_factor (float) – Ratio of the allowed distance from the best performing run. Defaults to 0.
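As an illustration of the slack criteria, the helper below sketches in plain Python (not SDK code) how a run is compared against the best run so far; the function name and the maximization-only scope are assumptions of this sketch.

```python
def bandit_should_stop(run_metric: float, best_metric: float,
                       slack_factor: float = 0.0, slack_amount: float = 0.0) -> bool:
    """Illustrative slack check for a maximized primary metric.

    A run falls outside the allowed slack when its metric drops below
    best_metric / (1 + slack_factor), or below best_metric - slack_amount
    when an absolute slack is used instead.
    """
    if slack_factor:
        threshold = best_metric / (1 + slack_factor)
    else:
        threshold = best_metric - slack_amount
    return run_metric < threshold
```

For instance, with slack_factor=0.15, a run reporting 0.60 while the best run reports 0.80 would be stopped, since 0.80 / 1.15 ≈ 0.696.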
Example:
Configuring BanditPolicy early termination of a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
from azure.ai.ml.sweep import Uniform

job_for_sweep = job(
    kernel=Uniform(min_value=0.0005, max_value=0.005),
    penalty=Uniform(min_value=0.9, max_value=0.99),
)

from azure.ai.ml.sweep import BanditPolicy

sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="best_val_acc",
    goal="Maximize",
    max_total_trials=8,
    max_concurrent_trials=4,
    early_termination_policy=BanditPolicy(
        slack_factor=0.15, evaluation_interval=1, delay_evaluation=10
    ),
)
```
- class azure.ai.ml.sweep.BayesianSamplingAlgorithm[source]¶
Bayesian Sampling Algorithm.
Example:
Assigning a Bayesian sampling algorithm for a SweepJob

```python
from azure.ai.ml.entities import CommandJob
from azure.ai.ml.sweep import (
    BayesianSamplingAlgorithm,
    Choice,
    Objective,
    SweepJob,
    SweepJobLimits,
)

command_job = CommandJob(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

sweep = SweepJob(
    sampling_algorithm=BayesianSamplingAlgorithm(),
    trial=command_job,
    search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
    inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},
    compute="top_level",
    limits=SweepJobLimits(trial_timeout=600),
    objective=Objective(goal="maximize", primary_metric="accuracy"),
)
```
- class azure.ai.ml.sweep.Choice(values: List[float | str | dict] | None = None, **kwargs: Any)[source]¶
Choice distribution configuration.
Example:
Using Choice distribution to set values for a hyperparameter sweep

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import Choice, LogUniform

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    kernel=LogUniform(min_value=-6, max_value=-1),
    penalty=Choice([0.9, 0.18, 0.36, 0.72]),
)
```
- class azure.ai.ml.sweep.GridSamplingAlgorithm[source]¶
Grid Sampling Algorithm.
Example:
Assigning a grid sampling algorithm for a SweepJob

```python
from azure.ai.ml.entities import CommandJob
from azure.ai.ml.sweep import Choice, GridSamplingAlgorithm, SweepJob, SweepJobLimits

command_job = CommandJob(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

sweep = SweepJob(
    sampling_algorithm=GridSamplingAlgorithm(),
    trial=command_job,
    search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
    inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},
    compute="top_level",
    limits=SweepJobLimits(trial_timeout=600),
)
```
- class azure.ai.ml.sweep.LogNormal(mu: float | None = None, sigma: float | None = None, **kwargs: Any)[source]¶
LogNormal distribution configuration.
- Parameters:
mu (float) – Mean of the underlying normal distribution.
sigma (float) – Standard deviation of the underlying normal distribution.
Example:
Configuring LogNormal distributions for a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import LogNormal, QLogNormal

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    kernel=LogNormal(mu=0.0, sigma=1.0),
    penalty=QLogNormal(mu=5.0, sigma=2.0),
)
```
- class azure.ai.ml.sweep.LogUniform(min_value: float | None = None, max_value: float | None = None, **kwargs: Any)[source]¶
LogUniform distribution configuration.
- Parameters:
min_value (float) – Minimum value of the distribution.
max_value (float) – Maximum value of the distribution.
Example:
Configuring a LogUniform distribution for a hyperparameter sweep job learning rate

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import Choice, LogUniform

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    kernel=LogUniform(min_value=-6, max_value=-1),
    penalty=Choice([0.9, 0.18, 0.36, 0.72]),
)
```
- class azure.ai.ml.sweep.MedianStoppingPolicy(*, delay_evaluation: int = 0, evaluation_interval: int = 1)[source]¶
Defines an early termination policy based on a running average of the primary metric of all runs.
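The running-average comparison can be sketched in plain Python (an illustrative helper, not part of the SDK): a run becomes a candidate for termination when its running average of the primary metric falls below the median of the running averages across all runs, assuming a maximized metric.

```python
import statistics

def median_should_stop(run_running_avg: float, all_running_avgs: list[float]) -> bool:
    """Illustrative median-stopping check for a maximized primary metric."""
    return run_running_avg < statistics.median(all_running_avgs)
```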
- Keyword Arguments:
delay_evaluation (int) – Number of intervals by which to delay the first evaluation. Defaults to 0.
evaluation_interval (int) – Interval (number of runs) between policy evaluations. Defaults to 1.
Example:
Configuring an early termination policy for a hyperparameter sweep job using MedianStoppingPolicy

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
from azure.ai.ml.sweep import MedianStoppingPolicy, Uniform

job_for_sweep = job(
    kernel=Uniform(min_value=0.0005, max_value=0.005),
    penalty=Uniform(min_value=0.9, max_value=0.99),
)

sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="best_val_acc",
    goal="Maximize",
    max_total_trials=8,
    max_concurrent_trials=4,
    early_termination_policy=MedianStoppingPolicy(delay_evaluation=5, evaluation_interval=2),
)
```
- class azure.ai.ml.sweep.Normal(mu: float | None = None, sigma: float | None = None, **kwargs: Any)[source]¶
Normal distribution configuration.
- Parameters:
mu (float) – Mean of the distribution.
sigma (float) – Standard deviation of the distribution.
Example:
Configuring Normal distributions for a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import Normal, Randint

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    penalty=Randint(upper=5),
    kernel=Normal(mu=2.0, sigma=1.0),
)
```
- class azure.ai.ml.sweep.Objective(goal: str | None, primary_metric: str | None = None)[source]¶
Optimization objective.
- Parameters:
goal (str) – The metric goal; accepted values are “minimize” and “maximize”.
primary_metric (str) – The name of the metric to optimize.
Example:
Assigning an objective to a SweepJob.

```python
from azure.ai.ml.entities import CommandJob
from azure.ai.ml.sweep import (
    BayesianSamplingAlgorithm,
    Choice,
    Objective,
    SweepJob,
    SweepJobLimits,
)

command_job = CommandJob(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

sweep = SweepJob(
    sampling_algorithm=BayesianSamplingAlgorithm(),
    trial=command_job,
    search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
    inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},
    compute="top_level",
    limits=SweepJobLimits(trial_timeout=600),
    objective=Objective(goal="maximize", primary_metric="accuracy"),
)
```
- class azure.ai.ml.sweep.QLogNormal(mu: float | None = None, sigma: float | None = None, q: int | None = None, **kwargs: Any)[source]¶
QLogNormal distribution configuration.
- Parameters:
mu (float) – Mean of the underlying normal distribution.
sigma (float) – Standard deviation of the underlying normal distribution.
q (int) – Quantization factor; sampled values are rounded to multiples of q.
Example:
Configuring QLogNormal distributions for a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import LogNormal, QLogNormal

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    kernel=LogNormal(mu=0.0, sigma=1.0),
    penalty=QLogNormal(mu=5.0, sigma=2.0),
)
```
- class azure.ai.ml.sweep.QLogUniform(min_value: float | None = None, max_value: float | None = None, q: int | None = None, **kwargs: Any)[source]¶
QLogUniform distribution configuration.
- Parameters:
min_value (float) – Minimum value of the distribution.
max_value (float) – Maximum value of the distribution.
q (int) – Quantization factor; sampled values are rounded to multiples of q.
Example:
Configuring QLogUniform distributions for a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import QLogUniform, QNormal

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    penalty=QNormal(mu=2.0, sigma=1.0, q=1),
    kernel=QLogUniform(min_value=1.0, max_value=5.0),
)
```
- class azure.ai.ml.sweep.QNormal(mu: float | None = None, sigma: float | None = None, q: int | None = None, **kwargs: Any)[source]¶
QNormal distribution configuration.
- Parameters:
mu (float) – Mean of the distribution.
sigma (float) – Standard deviation of the distribution.
q (int) – Quantization factor; sampled values are rounded to multiples of q.
Example:
Configuring QNormal distributions for a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import QLogUniform, QNormal

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    penalty=QNormal(mu=2.0, sigma=1.0, q=1),
    kernel=QLogUniform(min_value=1.0, max_value=5.0),
)
```
- class azure.ai.ml.sweep.QUniform(min_value: int | float | None = None, max_value: int | float | None = None, q: int | None = None, **kwargs: Any)[source]¶
QUniform distribution configuration.
- Parameters:
min_value (Union[int, float]) – Minimum value of the distribution.
max_value (Union[int, float]) – Maximum value of the distribution.
q (int) – Quantization factor; sampled values are rounded to multiples of q.
Example:
Configuring QUniform distributions for a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
from azure.ai.ml.sweep import QUniform, TruncationSelectionPolicy, Uniform

job_for_sweep = job(
    kernel=Uniform(min_value=0.0005, max_value=0.005),
    penalty=QUniform(min_value=0.05, max_value=0.75, q=1),
)

sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="best_val_acc",
    goal="Maximize",
    max_total_trials=8,
    max_concurrent_trials=4,
    early_termination_policy=TruncationSelectionPolicy(delay_evaluation=5, evaluation_interval=2),
)
```
- class azure.ai.ml.sweep.Randint(upper: int | None = None, **kwargs: Any)[source]¶
Randint distribution configuration.
- Parameters:
upper (int) – Upper bound of the distribution.
Example:
Configuring Randint distributions for a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

from azure.ai.ml.sweep import Normal, Randint

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
job_for_sweep = job(
    penalty=Randint(upper=5),
    kernel=Normal(mu=2.0, sigma=1.0),
)
```
- class azure.ai.ml.sweep.RandomSamplingAlgorithm(*, rule: str | None = None, seed: int | None = None, logbase: float | str | None = None)[source]¶
Random Sampling Algorithm.
- Keyword Arguments:
rule (Optional[str]) – The specific type of random algorithm. Accepted values are “random” and “sobol”.
seed (Optional[int]) – The seed for random number generation.
logbase (Optional[Union[float, str]]) – A positive number, or “e” in string form, to use as the base for log-based random sampling.
Example:
Assigning a random sampling algorithm for a SweepJob

```python
from azure.ai.ml.entities import CommandJob
from azure.ai.ml.sweep import Choice, RandomSamplingAlgorithm, SweepJob, SweepJobLimits

command_job = CommandJob(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

sweep = SweepJob(
    sampling_algorithm=RandomSamplingAlgorithm(seed=999, rule="sobol", logbase="e"),
    trial=command_job,
    search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
    inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},
    compute="top_level",
    limits=SweepJobLimits(trial_timeout=600),
)
```
- class azure.ai.ml.sweep.SamplingAlgorithm[source]¶
Base class for sampling algorithms.
This class should not be instantiated directly. Instead, use one of its subclasses.
- class azure.ai.ml.sweep.SweepJob(*, name: str | None = None, description: str | None = None, tags: Dict | None = None, display_name: str | None = None, experiment_name: str | None = None, identity: ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict | None = None, compute: str | None = None, limits: SweepJobLimits | None = None, sampling_algorithm: str | SamplingAlgorithm | None = None, search_space: Dict[str, Choice | LogNormal | LogUniform | Normal | QLogNormal | QLogUniform | QNormal | QUniform | Randint | Uniform] | None = None, objective: Objective | None = None, trial: CommandJob | CommandComponent | None = None, early_termination: EarlyTerminationPolicy | BanditPolicy | MedianStoppingPolicy | TruncationSelectionPolicy | None = None, queue_settings: QueueSettings | None = None, resources: dict | JobResourceConfiguration | None = None, **kwargs: Any)[source]¶
Sweep job for hyperparameter tuning.
Note
For sweep jobs, inputs, outputs, and parameters are accessible as environment variables using the prefix AZUREML_SWEEP_. For example, if you have a parameter named “learning_rate”, you can access it as AZUREML_SWEEP_learning_rate.
- Keyword Arguments:
name (str) – Name of the job.
display_name (str) – Display name of the job.
description (str) – Description of the job.
tags (dict[str, str]) – Tag dictionary. Tags can be added, removed, and updated.
properties (dict[str, str]) – The asset property dictionary.
experiment_name (str) – Name of the experiment the job will be created under. If None is provided, job will be created under experiment ‘Default’.
identity (Union[ManagedIdentityConfiguration, AmlTokenConfiguration, UserIdentityConfiguration]) – Identity that the training job will use while running on compute.
inputs (dict) – Inputs to the command.
outputs (dict[str, Output]) – Mapping of output data bindings used in the job.
sampling_algorithm (str) – The hyperparameter sampling algorithm to use over the search_space. Defaults to “random”.
search_space (Dict) – Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression.
objective (Objective) – The metric to optimize for.
compute (str) – The compute target the job runs on.
trial (Union[CommandJob, CommandComponent]) – The job configuration for each trial. Each trial will be provided with a different combination of hyperparameter values that the system samples from the search_space.
early_termination (Union[BanditPolicy, MedianStoppingPolicy, TruncationSelectionPolicy]) – The early termination policy to use. A trial job is canceled when the criteria of the specified policy are met. If omitted, no early termination policy will be applied.
limits (SweepJobLimits) – Limits for the sweep job.
queue_settings (QueueSettings) – Queue settings for the job.
resources (Optional[Union[dict, JobResourceConfiguration]]) – Compute resource configuration for the job.
kwargs (dict) – A dictionary of additional configuration parameters.
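Inside the trial's training script, the AZUREML_SWEEP_ environment variables noted above can be read with the standard library. The parameter name "learning_rate" and the fallback default are illustrative, not part of the SDK.

```python
import os

# Sweep hyperparameters are exposed to the trial as environment variables
# with the AZUREML_SWEEP_ prefix; fall back to a default outside a sweep.
learning_rate = float(os.environ.get("AZUREML_SWEEP_learning_rate", "0.01"))
```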
Example:
Creating a SweepJob

```python
from azure.ai.ml.entities import CommandJob
from azure.ai.ml.sweep import (
    BayesianSamplingAlgorithm,
    Choice,
    Objective,
    SweepJob,
    SweepJobLimits,
)

command_job = CommandJob(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

sweep = SweepJob(
    sampling_algorithm=BayesianSamplingAlgorithm(),
    trial=command_job,
    search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
    inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},
    compute="top_level",
    limits=SweepJobLimits(trial_timeout=600),
    objective=Objective(goal="maximize", primary_metric="accuracy"),
)
```
- dump(dest: str | PathLike | IO, **kwargs: Any) None¶
Dumps the job content into a file in YAML format.
- Parameters:
dest (Union[PathLike, str, IO[AnyStr]]) – The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
- Raises:
FileExistsError – Raised if dest is a file path and the file already exists.
IOError – Raised if dest is an open file and the file is not writable.
- set_limits(*, max_concurrent_trials: int | None = None, max_total_trials: int | None = None, timeout: int | None = None, trial_timeout: int | None = None) None¶
Set limits for Sweep node. Leave parameters as None if you don’t want to update corresponding values.
- set_objective(*, goal: str | None = None, primary_metric: str | None = None) None¶
Set the sweep objective. Leave parameters as None if you don’t want to update the corresponding values.
- set_resources(*, instance_type: str | List[str] | None = None, instance_count: int | None = None, locations: List[str] | None = None, properties: Dict | None = None, docker_args: str | None = None, shm_size: str | None = None) None¶
Set resources for Sweep.
- Keyword Arguments:
instance_type (Optional[Union[str, List[str]]]) – The instance type to use for the job.
instance_count (Optional[int]) – The number of instances to use for the job.
locations (Optional[List[str]]) – The locations to use for the job.
properties (Optional[Dict]) – The properties for the job.
docker_args (Optional[str]) – The docker arguments for the job.
shm_size (Optional[str]) – The shared memory size for the job.
- property base_path: str¶
The base path of the resource.
- Returns:
The base path of the resource.
- Return type:
str
- property creation_context: SystemData | None¶
The creation context of the resource.
- Returns:
The creation metadata for the resource.
- Return type:
Optional[SystemData]
- property early_termination: str | EarlyTerminationPolicy | None¶
Early termination policy for sweep job.
- Returns:
Early termination policy for sweep job.
- Return type:
Optional[Union[str, EarlyTerminationPolicy]]
- property id: str | None¶
The resource ID.
- Returns:
The global ID of the resource, an Azure Resource Manager (ARM) ID.
- Return type:
Optional[str]
- property limits: SweepJobLimits | None¶
Limits for sweep job.
- Returns:
Limits for sweep job.
- Return type:
Optional[SweepJobLimits]
- property resources: dict | JobResourceConfiguration | None¶
Resources for sweep job.
- Returns:
Resources for sweep job.
- Return type:
Optional[Union[dict, JobResourceConfiguration]]
- property sampling_algorithm: str | SamplingAlgorithm | None¶
Sampling algorithm for sweep job.
- Returns:
Sampling algorithm for sweep job.
- Return type:
Optional[Union[str, SamplingAlgorithm]]
- property status: str | None¶
The status of the job.
Common values returned include “Running”, “Completed”, and “Failed”. All possible values are:
NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
Provisioning - On-demand compute is being created for a given job submission.
Preparing - The run environment is being prepared and is in one of two stages:
Docker image build
conda environment setup
Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
Running - The job has started to run on the compute target.
Finalizing - User code execution has completed, and the run is in post-processing stages.
CancelRequested - Cancellation has been requested for the job.
Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
Failed - The run failed. Usually the Error property on a run will provide details as to why.
Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
- Returns:
Status of the job.
- Return type:
Optional[str]
- class azure.ai.ml.sweep.SweepJobLimits(*, max_concurrent_trials: int | None = None, max_total_trials: int | None = None, timeout: int | None = None, trial_timeout: int | str | None = None)[source]¶
Limits for Sweep Jobs.
- Keyword Arguments:
max_concurrent_trials (Optional[int]) – The maximum number of concurrent trials for the Sweep Job.
max_total_trials (Optional[int]) – The maximum number of total trials for the Sweep Job.
timeout (Optional[int]) – The maximum run duration, in seconds, after which the job will be cancelled.
trial_timeout (Optional[Union[int, str]]) – The timeout value, in seconds, for each Sweep Job trial.
Example:
Assigning limits to a SweepJob

```python
from azure.ai.ml.entities import CommandJob
from azure.ai.ml.sweep import (
    BayesianSamplingAlgorithm,
    Choice,
    Objective,
    SweepJob,
    SweepJobLimits,
)

command_job = CommandJob(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

sweep = SweepJob(
    sampling_algorithm=BayesianSamplingAlgorithm(),
    trial=command_job,
    search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
    inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},
    compute="top_level",
    limits=SweepJobLimits(trial_timeout=600),
    objective=Objective(goal="maximize", primary_metric="accuracy"),
)
```
- class azure.ai.ml.sweep.TruncationSelectionPolicy(*, delay_evaluation: int = 0, evaluation_interval: int = 0, truncation_percentage: int = 0)[source]¶
Defines an early termination policy that cancels a given percentage of runs at each evaluation interval.
- Keyword Arguments:
delay_evaluation (int) – Number of intervals by which to delay the first evaluation. Defaults to 0.
evaluation_interval (int) – Interval (number of runs) between policy evaluations. Defaults to 0.
truncation_percentage (int) – The percentage of runs to cancel at each evaluation interval. Defaults to 0.
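The truncation step can be sketched in plain Python (an illustrative helper, not SDK code): at each evaluation interval, the lowest-performing truncation_percentage of runs is selected for cancellation, assuming a maximized primary metric.

```python
def runs_to_cancel(metrics: dict[str, float], truncation_percentage: int) -> list[str]:
    """Illustrative truncation selection for a maximized primary metric."""
    n_cancel = int(len(metrics) * truncation_percentage / 100)
    # rank runs worst-first by their reported metric
    ranked = sorted(metrics, key=metrics.get)
    return ranked[:n_cancel]
```

For example, with four runs and truncation_percentage=25, the single worst-performing run would be selected.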
Example:
Configuring an early termination policy for a hyperparameter sweep job using TruncationSelectionPolicy

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
from azure.ai.ml.sweep import QUniform, TruncationSelectionPolicy, Uniform

job_for_sweep = job(
    kernel=Uniform(min_value=0.0005, max_value=0.005),
    penalty=QUniform(min_value=0.05, max_value=0.75, q=1),
)

sweep_job = job_for_sweep.sweep(
    sampling_algorithm="random",
    primary_metric="best_val_acc",
    goal="Maximize",
    max_total_trials=8,
    max_concurrent_trials=4,
    early_termination_policy=TruncationSelectionPolicy(delay_evaluation=5, evaluation_interval=2),
)
```
- class azure.ai.ml.sweep.Uniform(min_value: float | None = None, max_value: float | None = None, **kwargs: Any)[source]¶
Uniform distribution configuration.
- Parameters:
min_value (float) – Minimum value of the distribution.
max_value (float) – Maximum value of the distribution.
Example:
Configuring Uniform distributions for learning rates and momentum during a hyperparameter sweep on a Command job.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

# we can reuse an existing Command Job as a function that we can apply inputs to
# for the sweep configurations
from azure.ai.ml.sweep import Uniform

job_for_sweep = job(
    kernel=Uniform(min_value=0.0005, max_value=0.005),
    penalty=Uniform(min_value=0.9, max_value=0.99),
)
```