Parameterized pipelines are in the beta phase of development and may not be available on your enrollment. Functionality may change during active development. Contact Palantir Support to request access.
Parameterized pipelines enable you to run the same transform logic multiple times with different parameter values, creating separate deployments that can be managed independently. Each deployment maintains its own parameter configuration and produces outputs that are aggregated into union datasets, allowing you to process and analyze data across all deployments. This feature is particularly useful when you need to execute parameterized transformations at scale from user-facing applications.
Before using parameterized pipelines, ensure the following requirements are met:

- A `transformsVersion` of 10.24.0 or later. Update the `transformsVersion` in your pipeline configuration if needed.

To use parameterized pipelines, you must first declare parameters in your Python transforms alongside other inputs. Parameters define the configurable values that distinguish each deployment.
```python
@transform.spark.using(
    output=Output('/path/to/output'),
    town=Input('/path/to/input_towns'),
    power_link=Input('/path/to/power_link'),
    risk_factor=IntegerParam(5),  # integer parameter with a default value of 5
)
def process_data(ctx, output, town, power_link, risk_factor):
    ...
    # Read the parameter's configured value inside the transform body.
    riskiness = risk_factor.value
    ...
```
Foundry supports several parameter types for use in transforms. Learn more about working with parameters in Python transforms.
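Other parameter types follow the same pattern as `IntegerParam` above. The sketch below is illustrative only: `StringParam` is a hypothetical class name used to show the shape of a multi-parameter transform, and imports are omitted because the exact import path depends on your transforms version; check the Python transforms documentation for the actual type names.

```python
# Hypothetical sketch: StringParam is an assumed class name; IntegerParam is
# the only parameter type confirmed in this document.
@transform.spark.using(
    output=Output('/path/to/output'),
    town=Input('/path/to/input_towns'),
    risk_factor=IntegerParam(5),   # integer parameter, default 5
    region=StringParam('north'),   # hypothetical string parameter
)
def process_data(ctx, output, town, risk_factor, region):
    # Every parameter exposes its configured value via .value.
    riskiness = risk_factor.value
    region_name = region.value
    ...
```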
After defining parameters in your transforms, configure deployment settings for your pipeline through Data Lineage:

Each output creates a single union view that automatically aggregates the output data of every deployment build, allowing you to query and analyze data across all parameter configurations in a single dataset. Be sure your transform adds a column containing the deployment key parameter to your output dataset; this makes it clear which rows in the union output correspond to which deployment builds, as sketched below.
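A minimal sketch of that recommendation, assuming the deployment key is supplied as a transform parameter and that inputs and outputs expose the standard `dataframe()` and `write_dataframe()` methods of Python transforms (the parameter and column names here are illustrative):

```python
from pyspark.sql import functions as F

# transform, Input, Output, and IntegerParam as in the example above;
# import paths are omitted because they depend on your transforms version.
@transform.spark.using(
    output=Output('/path/to/output'),
    town=Input('/path/to/input_towns'),
    risk_factor=IntegerParam(5),
)
def process_data(ctx, output, town, risk_factor):
    df = town.dataframe()
    # Tag each row with the parameter value that identifies this deployment,
    # so rows in the union view can be traced back to the build that
    # produced them. Here risk_factor stands in for the deployment key.
    df = df.withColumn('deployment_key', F.lit(risk_factor.value))
    output.write_dataframe(df)
```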
After configuring the deployments in Data Lineage, navigate to the Build Schedules application to create and manage individual deployments.

To build a deployment and generate its output data:
Once the build succeeds, the union datasets are updated to include data from this deployment. Each row in the union dataset includes the deployment key value, allowing you to filter and analyze data by deployment, as in the sketch below.
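For example, a downstream transform can filter the union to a single deployment. A minimal PySpark sketch, assuming the deployment key is stored in a column named `deployment_key` (the column name, paths, and key value are illustrative):

```python
from pyspark.sql import functions as F

# transform, Input, and Output as in the examples above.
@transform.spark.using(
    output=Output('/path/to/filtered_output'),  # illustrative path
    union_view=Input('/path/to/union_view'),    # the aggregated union dataset
)
def filter_by_deployment(ctx, output, union_view):
    df = union_view.dataframe()
    # Keep only the rows produced by one deployment's builds.
    output.write_dataframe(df.filter(F.col('deployment_key') == 'north-region'))
```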
To modify the parameter values for an existing deployment:
The next time the deployment builds, it will use the updated parameter values. Previous build outputs remain in the union dataset with the old parameter values until the deployment is rebuilt.
To remove a deployment and its data from union datasets:
When a deployment is deleted, its corresponding rows are removed from all union datasets. This operation cannot be undone.
Parameterized pipelines are in active development. The following limitations currently apply: