We generally do not recommend using export tasks to write data back to external sources. However, export tasks may be available and supported for some source types based on your Foundry enrollment.
The following documentation is intended for users of export tasks who have not yet transitioned to our recommended export workflow.
Export tasks allow you to export data from Foundry to various external data sources. The configuration for an export task consists of two parts, both of which are described below.
An export task is triggered when a job is started on the task output dataset.
To configure an export task, define the input source (inputSource), the input dataset (inputDataset), and the output dataset (outputDataset). The input source must be named inputSource and the output dataset must be named outputDataset; failure to use these specific names may result in task errors.
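For illustration, the sketch below pairs the required names with concrete resources. The inputs/outputs structure and the example paths are assumptions made for this sketch only and do not reflect the exact task schema, which varies by source type.

```yaml
# Hypothetical sketch only; the surrounding structure is assumed, not the exact schema.
inputs:
  inputSource: my-connected-source            # the source alias must be exactly inputSource
  inputDataset: /Analytics/exports/to-export  # the Foundry dataset being exported
outputs:
  outputDataset: /Analytics/exports/audit     # the output alias must be exactly outputDataset
```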

Data Connection export tasks support writing to a wide range of common enterprise systems.
All export types support the incrementalType parameter, which controls how data is exported over time. For JDBC exports, this parameter is specified per dataset rather than globally.
When set to snapshot (the default value), the export task exports all files visible in the current view of the input dataset. This means every export will include the complete dataset, regardless of what was previously exported.
When set to incremental, the export behavior changes to optimize for efficiency. The first export behaves like a snapshot, exporting all available data. On subsequent exports, only new transactions added since the last export will be included, provided the initial exported transaction is still present in the dataset. If the initial transaction is no longer available (for example, due to a dataset rebuild), the system automatically falls back to a full snapshot export.
Example for file-based exports:
```yaml
incrementalType: incremental
```
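For JDBC exports, where incrementalType is specified per dataset rather than globally, the configuration might resemble the following sketch. The datasets list and its field names are assumptions made for illustration, not the exact JDBC export schema.

```yaml
# Hypothetical per-dataset configuration for a JDBC export; field names are assumed.
datasets:
  - inputDataset: /Analytics/exports/orders  # placeholder dataset path
    table: ORDERS_EXPORT                     # placeholder target table name
    incrementalType: incremental             # set per dataset, not globally
```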
File-based exports support the rewritePaths configuration option for customizing file names and paths during export. This field accepts a map of regular expressions and substitution templates:
```yaml
rewritePaths:
  "^spark/(.*)": "$1" # Removes the spark/ prefix
```
The substitution templates support several dynamic replacement patterns:
- $1, $2, and so on, to reference matched portions of the original path.
- Date and time patterns such as ${dt:yyyy-MM-dd}, which use Java's DateTimeFormatter syntax to insert the current date and time.
- ${transaction} and ${dataset} placeholders.
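As an illustrative sketch, these patterns can be combined within a single substitution template. The path layout below is an example only; the placeholder syntax follows the patterns described above.

```yaml
rewritePaths:
  # Capture the file name, drop the spark/ prefix, and place the file under a
  # dated folder tagged with the dataset and transaction placeholders.
  "^spark/(.*)\\.csv$": "exports/${dt:yyyy-MM-dd}/${dataset}-${transaction}-$1.csv"
```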