azure.ai.ml.dsl package

azure.ai.ml.dsl.pipeline(func: Callable[[P], T] | None = None, *, name: str | None = None, version: str | None = None, display_name: str | None = None, description: str | None = None, experiment_name: str | None = None, tags: Dict[str, str] | str | None = None, **kwargs: Any) → Callable[[Callable[[P], T]], Callable[[P], PipelineJob]] | Callable[[P], PipelineJob]

Build a pipeline that contains all component nodes defined in the decorated function.

Parameters:

func (types.FunctionType) – The user pipeline function to be decorated.

Keyword Arguments:
  • name (str) – The name of the pipeline component, defaults to the function name.

  • version (str) – The version of the pipeline component, defaults to “1”.

  • display_name (str) – The display name of the pipeline component, defaults to the function name.

  • description (str) – The description of the built pipeline.

  • experiment_name (str) – The name of the experiment the job will be created under. If None, the experiment name is set to the name of the current directory.

  • tags (dict[str, str]) – The tags of the pipeline component.

Returns:

Either:
  • A decorator, if func is None
  • The decorated func, otherwise

Return type:

Union[Callable[[Callable], Callable[…, ~azure.ai.ml.entities.PipelineJob]], Callable[P, ~azure.ai.ml.entities.PipelineJob]]
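The dual return type reflects the standard decorator-with-optional-arguments pattern: called with keyword arguments, pipeline returns a decorator; applied directly to a function, it returns the decorated function. A minimal, self-contained sketch of that pattern (not the azure-ai-ml implementation; the dict it returns merely stands in for a PipelineJob):

```python
from typing import Any, Callable, Optional


def pipeline_like(func: Optional[Callable] = None, *, name: Optional[str] = None):
    """Sketch of a decorator usable both bare and with keyword arguments."""

    def decorator(f: Callable) -> Callable:
        def wrapper(*args: Any, **kwargs: Any) -> dict:
            # A real implementation would build a PipelineJob here; this
            # sketch just records the resolved name and the call result.
            return {"name": name or f.__name__, "result": f(*args, **kwargs)}

        return wrapper

    # func is None when called as @pipeline_like(...): hand back the decorator.
    if func is None:
        return decorator
    # func is the function itself when used as bare @pipeline_like.
    return decorator(func)


@pipeline_like
def bare(x):
    return x + 1


@pipeline_like(name="custom")
def with_args(x):
    return x * 2


print(bare(1))       # {'name': 'bare', 'result': 2}
print(with_args(3))  # {'name': 'custom', 'result': 6}
```

Since name defaults to None, the wrapper falls back to the function's own name, mirroring how the real decorator defaults name to the pipeline function name.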

Example:

The following shows how to create a pipeline using this decorator.
from azure.ai.ml import Input, load_component
from azure.ai.ml.dsl import pipeline

component_func = load_component(
    source="./sdk/ml/azure-ai-ml/tests/test_configs/components/helloworld_component.yml"
)

# Illustrative file input consumed by both component calls below
uri_file_input = Input(type="uri_file", path="<path-or-url-to-input-file>")

# Define a pipeline with decorator
@pipeline(name="sample_pipeline", description="pipeline description")
def sample_pipeline_func(pipeline_input1, pipeline_input2):
    # component1 and component2 will be added into the current pipeline
    component1 = component_func(component_in_number=pipeline_input1, component_in_path=uri_file_input)
    component2 = component_func(component_in_number=pipeline_input2, component_in_path=uri_file_input)
    # A decorated pipeline function needs to return outputs.
    # In this case, the pipeline has two outputs: component1's output1 and component2's output1,
    # and let's rename them to 'pipeline_output1' and 'pipeline_output2'
    return {
        "pipeline_output1": component1.outputs.component_out_path,
        "pipeline_output2": component2.outputs.component_out_path,
    }

# This call returns a pipeline job with nodes=[component1, component2]
pipeline_job = sample_pipeline_func(
    pipeline_input1=1.0,
    pipeline_input2=2.0,
)

# ml_client is an authenticated azure.ai.ml.MLClient instance
ml_client.jobs.create_or_update(pipeline_job, experiment_name="pipeline_samples", compute="cpu-cluster")