Announcements

REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.

Share your thoughts about these announcements in our Developer Community Forum ↗.


Palantir MCP enables AI IDEs and agents to design, build, edit, and review in the Palantir platform [Beta]

Date published: 2025-07-17

Palantir Model Context Protocol (MCP) is now available in beta across all enrollments as of the week of July 14. Palantir MCP enables AI IDEs and AI agents to autonomously design, build, edit, and review end-to-end applications within the Palantir platform. An implementation of Model Context Protocol ↗, Palantir MCP supports everything from data integration to ontology configuration and application development, all performed within the platform.

Key capabilities of Palantir MCP

Vibe code production applications: Enables developers to use AI to produce production-grade applications on top of the ontology while following Palantir's security best practices.

Data integration: Powers Python transforms generation by enabling AI IDEs to gather context from Compass, read dataset schemas, and execute SQL commands, entirely locally.

Ontology configuration: Allows developers to configure their ontology locally without leaving the IDE.

Application development: Integrates with your OSDK to enable the development of TypeScript applications on top of your ontology.

Start using Palantir MCP

To get started, follow the installation steps and read the user guide for examples and best practices. We strongly encourage all local developers to install and regularly update Palantir MCP to take advantage of the latest changes and tool releases.


Updated language models now available in TypeScript functions repositories

Date published: 2025-07-17

Updated language models are now available in TypeScript functions repositories. These updates provide better consistency between model APIs, making it easier to interchange underlying models. Model capabilities have also been enhanced, with improved support for vision and streaming.

We highly recommend updating your functions repositories with the new models to ensure you stay up to date with the latest AIP features. Review the updated documentation for language models in functions to learn how to update your repository.

Viewing model capabilities when importing updated language models.

Share your feedback

Share your feedback about functions by contacting our Palantir Support teams, or let us know in our Developer Community ↗ using the functions tag ↗.


Protect the main branch of your resources and define approval policies for your projects

Date published: 2025-07-09

You can now protect the main branch of your Workshop modules and define custom approval policies. While this only applies to Workshop for now, all types of resources will eventually be supported, with support for ontology and Pipeline Builder resources coming next.

To safeguard critical workflows and maintain development best practices, you can protect the main branch of your resources. This means that any change to a protected resource must be made on a branch and will require approval to take effect.

Approval Flow in a protected Workshop application.

Once a resource is protected, any change to that resource will have to be made on a branch and go through an approval process. The approval policy is set at the project level, and defines whose approval is required in order to merge changes to protected resources.

Project with default approval policy.

Project with custom approval policy.

Approval policies have three customizable parameters:

  • Eligible reviewers: Define the users or groups that are allowed to review and approve changes to the main branch of a protected resource.
  • Number of approvals required: Define the minimum number of approvals needed before a change can be merged. Options include any eligible reviewer, all eligible reviewers, or a custom number of eligible reviewers.
  • Additional requirements: Control whether reviewers can approve changes to files they have contributed to in the proposed branch. A contributor is defined as any user who has made a change to that resource on the branch.

Note that branch protection currently only applies to Workshop resources, but support for protecting ontology resources is coming soon.


Combine multiple object sets and manual test cases in AIP evaluation suites

Date published: 2025-07-08

AIP Evals now supports combining multiple object sets and manual test cases within a single evaluation suite. The test case creation experience has been simplified, allowing you to add, delete, and duplicate object sets as needed. This flexibility enables you to leverage object sets while also adding specific manual test cases for comprehensive function testing.

You can now combine multiple object sets and manual test cases in an evaluation suite.

Learn more about adding test cases in AIP Evals.

We want to hear from you

As we continue to build upon AIP Evals, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.


Annotate and tag text in Workshop with the Markdown widget

Date published: 2025-07-03

The Markdown widget in Workshop now supports text tagging with the new Annotation feature. With this feature, builders can seamlessly display, create, and interact with annotation objects on text directly in the Markdown widget.

An example of a Markdown widget with a configured "create annotation" action.

Key highlights of this feature include:

  • Visual tags: Display annotation objects as highlighted or underlined text with configurable colors.
  • On-click interactions: Users can interact with existing annotation objects in the widget by configuring actions and events.
  • User tagging: Enable the creation of new annotation objects on specific portions of text.

An example of a Markdown widget with configured annotation interactions.

To learn more about configuring Annotations, refer to the Markdown widget documentation.

Your feedback matters

We want to hear about your experience with Workshop and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the workshop tag ↗.


Limit batch size of incremental inputs to save time and compute costs

Date published: 2025-07-01

When running an incremental transform, you may encounter the following situations:

  • An output is built as a SNAPSHOT because the entire input needs to be read from the beginning (for example, the semantic version of the incremental transform was increased).
  • An output is built incrementally, but one or more inputs to the transform receive numerous transactions that collectively contain a lot of unprocessed data.

Typically, when an output dataset is built incrementally, all unprocessed transactions of each input dataset are processed in a single job. That job can take days to finish, often with no visible incremental progress, and if it fails partway through, all progress is lost and the output must be rebuilt. This often results in undesirable costs and errors, and it does not scale to pipelines that must frequently process large amounts of data.

Limiting the maximum number of transactions processed per job solves this problem: a long backlog is worked through as a series of smaller jobs, each of which commits its progress before the next begins. For example, if an input has nine unprocessed transactions and a transaction limit of three, the backlog is processed across three consecutive jobs rather than one monolithic build.

An animation of incremental transform builds. On the left, the transform without transaction limits is constantly working on one job without noticeable progress. On the right, the transform has set a transaction limit of 3 for the input and is progressing through jobs consistently.

Add transaction limits to inputs

If a transform and its inputs satisfy all requirements, you can configure each incremental input using the transaction_limit setting. Each input can be configured with a different limit. The example below configures an incremental transform to use the following:

  • Two incremental inputs, each with a different transaction limit
  • An incremental input that does not use a transaction limit
  • A snapshot input
```python
from transforms.api import transform, Input, Output, incremental


@incremental(
    v2_semantics=True,
    strict_append=True,
    snapshot_inputs=["snapshot_input"]
)
@transform(
    # Incremental input configured to read a maximum of 3 transactions
    input_1=Input("/examples/input_1", transaction_limit=3),
    # Incremental input configured to read a maximum of 2 transactions
    input_2=Input("/examples/input_2", transaction_limit=2),
    # Incremental input without a transaction limit
    input_3=Input("/examples/input_3"),
    # Snapshot input whose entire view is read each time
    snapshot_input=Input("/examples/input_4"),
    output=Output("/examples/output")
)
def compute(input_1, input_2, input_3, snapshot_input, output):
    ...
```
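For illustration, the sketch below shows what the compute body might do under this configuration. The unionByName combination, and the assumption that the three incremental inputs share a schema, are hypothetical; only dataframe() and write_dataframe() are standard transforms API calls.

```python
def compute(input_1, input_2, input_3, snapshot_input, output):
    # Each incremental input exposes only its unprocessed rows in this job,
    # bounded by the transaction_limit configured above.
    df_1 = input_1.dataframe()
    df_2 = input_2.dataframe()
    df_3 = input_3.dataframe()

    # The snapshot input's entire view is read in every job
    # (available for joins or filters; unused in this sketch).
    reference = snapshot_input.dataframe()

    # Hypothetical combination step: assumes all three inputs share a schema.
    combined = df_1.unionByName(df_2).unionByName(df_3)

    # Because strict_append=True, each job commits its batch to the output
    # as an APPEND transaction, so progress made by earlier jobs is
    # preserved even if a later job fails.
    output.write_dataframe(combined)
```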

Next steps and additional resources

After configuring your incremental transform with transaction limits, you can continue to configure and monitor your builds with the following features and tools:

  • Create a build schedule: Configure a schedule in Data Lineage to build at a regular interval, and enable an option that ensures your data is never stale.

Ensure your data is always up-to-date by configuring a build schedule.

Requirements and limitations

To use transaction limits in an incremental transform, ensure you have access to the necessary tools and services and that the transforms and datasets meet the requirements below.

The transform must meet the following conditions:

  • The incremental decorator is used, and the v2_semantics argument is set to True.
  • It is configured to use Python transforms version 3.25.0 or higher. Configure a job with module pinning to use a specific version of Python transforms.
  • It cannot be a lightweight transform.

Input datasets must meet the following conditions to be configured with a transaction limit:

Your feedback matters

We want to hear about your experiences when configuring incremental transforms with transaction limits, and we welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.


Roll back pipelines to a previous state in Data Lineage [Beta]

Date published: 2025-07-01

When building your pipeline, you may need to roll back a dataset and all of its downstream dependents to an earlier version. There can be many reasons for this, including the following:

  • You identified a mistake in the logic required to build a dataset and need to revert it.
  • Incorrect data was pushed into your pipeline from an upstream source.
  • An outage occurred, and you want to quickly navigate back to an earlier state of your pipeline.

The pipeline rollback feature allows you to revert to a previous transaction of an upstream dataset. When you perform a rollback, the data provenance of that transaction is used to identify the downstream datasets and the corresponding transactions they should revert to, producing a final pipeline rollback state. Without this feature, the process would require several steps to properly roll back each affected dataset; pipeline rollback reduces it to the few steps described below, and lets you preview the final pipeline state before confirming and proceeding. Pipeline rollback also ensures that the incrementality of your pipeline is preserved.

As you set up your rollback, you can choose to exclude any downstream datasets; these datasets will remain unchanged as the pipeline is rolled back to the selected transaction.

This feature is currently in the beta stage of development, and functionality may change before it is generally available.

Execute a pipeline rollback

  1. Navigate to a Data Lineage graph containing the upstream dataset you would like to roll back.
  2. Select the dataset in the graph. Then, from the branch selector at the top of the graph, select the branch on which you would like to perform the rollback.
  3. Select View node properties in the panel on the right.

The right editor panel in Data Lineage, with the option to View node properties.

  4. Select Actions, then Rollback.

  5. Under Selected transaction, choose the transaction to which you would like to roll back.

An example of a selected transaction.

After you choose the transaction, downstream datasets are automatically identified, and the states they will revert to if the rollback proceeds are displayed.

Resource types that cannot be rolled back, including streaming datasets, media sets, and restricted views, are displayed in the unsupported resources section, as are transactional datasets on which you do not have Edit access.

  6. Select the timestamp under each dataset to navigate to the History page of the input, where the corresponding transaction will be highlighted.

A list of datasets with timestamps of the builds.

  7. Select any datasets to exclude from the rollback by selecting the exclusion icon to the right of the dataset name. Once excluded, the dataset will appear in the Datasets excluded from rollback section.

A list of datasets selected for rollback that you can exclude.

  8. To add an excluded dataset back to the rollback, select + to the right of the dataset name.

A dataset excluded from rollback that you can choose to add back.

  9. After finalizing the state of your desired rollback, select Rollback. A confirmation dialog will appear.

A confirmation dialog confirming the rollback of five dataset transactions and incremental state resets of two datasets.

  10. Enter the branch name as confirmation, then select Confirm rollback to proceed.

A confirmation of seven successful dataset rollbacks.

  11. Once the rollback is complete, navigate to the History tab of the datasets and notice that the rolled back transaction is now crossed out, as shown below:

An example of a dataset that was rolled back, with the rolled back transaction crossed out.

Additional resources and support

To learn more about pipeline rollbacks, review our public documentation. We also invite you to share your feedback and any questions you have with Palantir Support or our Developer Community ↗.