Announcements

REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly in your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.

Share your thoughts about these announcements in our Developer Community Forum ↗.


Allow users to switch platform version from the Account menu [Beta]

Date published: 2025-04-17

Platform administrators can now enable users to switch platform versions using the Account menu located in the workspace navigation sidebar. Once enabled by an administrator, users can open the platform switcher to select from three available platform versions:

  • Stable: The current stable release
  • Beta: The future stable release
  • Prior: The previous stable release

Note that changing the platform version only affects features in the user interface; saved changes in the platform will persist regardless of whether the current version renders them.

The platform version switcher is located in Account > Platform version dropdown menu.

To set up platform version switching, platform administrators can navigate to the Platform experience page in Control Panel. Here, administrators can also configure groups of users to view the Beta version by default.

Platform version switching is in the beta phase of development and is disabled for everyone by default. Administrators can opt in to the feature as it is released.

The Platform version configuration tab in Control Panel, located on the Platform experience page.

For more information on this feature, review the documentation on configuring the platform experience.

Share your feedback

We want to hear what you think about our updates to the platform. Send your feedback to our Palantir Support teams, or share in our Developer Community ↗.


Explore automation insights in Workflow Builder

Date published: 2025-04-17

We are excited to introduce features that improve Automate insights in Workflow Builder. These enhancements make it much easier to debug what triggered your automation, which exact properties were used, and what dependencies your automations have.

You can now explore the following automation details:

  • Property usages and dependencies: Review the Selection details sidebar to view property usages and dependencies. The Condition ontology dependencies section provides a detailed breakdown of the specific object properties the automation condition relies on. Hover over the number displayed on the right to view the exact property.

Review automation property usages and dependencies in Workflow Builder.

  • Action and function triggers: Toggle on the purple lightning bolt icon located at the top left of the graph to discover which actions and functions trigger the automation.

Toggle on the purple lightning bolt icon to review automation action and function triggers.

For automations that activate when a property reaches a specific value, Workflow Builder identifies and links the actions or functions that modify the property to that value.

A function that triggers an alert automation is linked by a dotted line in the Workflow Builder graph.

Learn more about how Workflow Builder can help with automation insights and debugging.

Your feedback matters

Your insights are crucial in helping us understand how we can improve Workflow Builder. Share your feedback through Palantir Support channels and our Developer Community ↗ using the workflow-builder tag ↗.


Foundry Branching [Beta] now supports transforms code repositories

Date published: 2025-04-17

Starting the week of April 16, Foundry Branching [Beta] supports transforms code repositories across all enrollments. Foundry Branching provides a unified experience to make changes across multiple applications on a single branch, test those changes end-to-end without disrupting production workflows, and merge the changes with a single click. To enable this feature on your enrollment and participate in beta testing, contact Palantir Support. We recommend trying Foundry Branching with a restricted set of users first before broadening usage.

With the new support for transforms code repositories, Foundry Branching adds to its existing support for Pipeline Builder, the Ontology, and Workshop. Through Foundry Branching, you can now modify your data pipeline in Code Repositories, edit Ontology definitions, and build on those changes in your Workshop modules from one branch.

Note that support for TypeScript function repositories is currently under development.

Modifying a code repository on a branch.

When merging a branch that contains code repository changes, the Foundry Branching merge dialog will show all datasets that are about to be built. If your proposal's datasets are reliant on other datasets that were not modified on your branch, an option to build all the necessary datasets during the merge process will appear.

The merge proposal dialog provides two options:

  • Build all affected resources: All resources affected by changes on your branch will be built, so that data from upstream changes flows downstream as required.
  • Build modified resources only: Only resources directly changed on this branch will be built. You may need to build resources manually if they depend on upstream changes to this branch.

Building all affected resources in the merge dialog.

Building modified and affected datasets in the merge process.

For more information, review the Foundry Branching documentation.

Your feedback matters

We want to hear about your experiences with Foundry Branching and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the foundry-branching tag ↗.


Announcing Bring-Your-Own-Model in AIP

Date published: 2025-04-10

Bring-your-own-model (BYOM), also known as "registered models" in the Palantir platform, is a capability that provides first-class support for customers who want to connect their own LLMs or accounts to use in AIP with all Palantir developer products. These products include AIP Logic, Pipeline Builder, Agent Studio, Workshop, and more.

Once you have registered your LLM, you can select it from the model dropdown menu in AIP Logic.

When to use

Based on LLM support and viability, we generally recommend using Palantir-provided models from model providers (for example: OpenAI, Azure OpenAI, AWS Bedrock, xAI, GCP Vertex), or open-source models self-hosted by Palantir (such as Llama models).

However, you may prefer to bring your own models to AIP. We recommend using these registered models only when you cannot use Palantir-provided models for legal and compliance reasons, or when you have your own fine-tuned or otherwise unique LLM that you would like to leverage in AIP.

Learn more

To get started with registering your own model, review the following documentation:

Let us know how we are doing

As we continue to develop on registered models, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.


View LLM token and request usage with the new AIP usage views tool

Date published: 2025-04-10

Introducing the new AIP usage views tool in Resource Management, which provides visibility into LLM token and request usage of all Palantir-provided models across all projects and resources in your enrollment. With this new tool, administrators gain full visibility for managing LLM capacity and handling rate limits.

You can access this tool in Resource Management by navigating to the AIP usage and limits page, then selecting the View usage tab.

Unlock comprehensive insights with the new AIP usage views tool, created to enhance your understanding of LLM capacity and rate limits and help you identify opportunities for optimization across all your projects and resources.

This tool is primarily built to help with capacity management and rate limits. A few key highlights include the following:

  • Track token and request usage per minute, given that LLM capacity is managed at the token per minute (TPM) and request per minute (RPM) level.
  • Drill down to a single model at a time, as capacity is managed for each model separately.
  • View the enrollment usage overview and zoom in to project-level usage, given that LLM capacity has both an enrollment-level limit and a project-level limit for each project, as explained above.
  • View the rate limits threshold; the toggle in the upper right visualizes when project or enrollment limits are hit by displaying a dashed line. The limits vary by model and by project. Two rate limit lines are displayed: the enrollment/project limit, and the “batch limit”, which is capped at 80% of the total capacity for the specific project and for the entire enrollment (for example, an enrollment limit of 1,000,000 TPM yields a batch limit line at 800,000 TPM). Read more about prioritizing interactive requests below.
  • Filter down to a certain time range to view two weeks of data, down to the minute. Users can drill down to a specific time range either by using the date range filter on the left sidebar, or by using a drag-and-drop time range filter over the chart itself. When the time range is shorter than six hours, the chart segments usage by project (at the enrollment level) or by resource (at the project level).
  • View a usage overview in a table. Below the chart, the table includes the aggregate of tokens and requests per project (or per resource when filtered to a single project). The table is affected by all filters (time range, model, and project filter if applied).

Learn about AIP usage views and how to take action based on them, and explore additional tools for LLM capacity and cost management.

Share your feedback

We want to hear what you think about these updates. Send your feedback to our Palantir Support teams, or share in our Developer Community ↗ using the language-model-service tag ↗.


Introducing the split transform in Pipeline Builder

Date published: 2025-04-09

The new split transform feature in Pipeline Builder allows you to partition your input data into two outputs based on a custom condition. For example, you could use the split transform to divide a dataset of customer orders into subsets for further analysis.

The new split transform feature is accessible from the right-click menu by selecting Split.

What is the split transform?

The split transform evaluates each row of your input data against a specified condition and directs each row into one of two distinct outputs, based on whether the condition evaluates to True or False.

Rows where the condition is true will be sent to the first output (the True output), and rows where the condition is false will be sent to the second output (the False output).

This enables efficient categorization of your data and facilitates further processing tailored to each category. Additionally, this feature enhances the clarity of your pipeline, making it easier to understand at a glance.

Example use case

Imagine you have a dataset of customer orders and wish to categorize them into high-value and low-value orders. By defining the condition order_value > 1000, the split transform will direct orders exceeding $1000 to the True output, while all other orders will be channeled to the False output.

This notional example splits orders between the true and false output channels depending on whether the order value is over 1000.

This allows for targeted analysis and processing of high-value orders.
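
For readers who prefer to reason in code, the behavior is conceptually the same as applying a filter and its negation. The PySpark sketch below is purely illustrative; the dataset, column names, and values are hypothetical, and this is not the Pipeline Builder implementation.

```python
# Conceptual sketch of the split transform's behavior; names and values are
# hypothetical and this is not Pipeline Builder's implementation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [("A-1", 1500.0), ("A-2", 250.0), ("A-3", 4000.0)],
    ["order_id", "order_value"],
)

condition = F.col("order_value") > 1000

high_value_orders = orders.filter(condition)   # rows routed to the True output
low_value_orders = orders.filter(~condition)   # rows routed to the False output
```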

Learn more about the split transform and experience streamlined, condition-based data partitioning today.

Tell us what you think

As we continue to develop Pipeline Builder, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ and use the pipeline builder tag.


Introducing source-based Python transforms and functions in Code Repositories, and transforms in VS Code Workspaces [GA]

Date published: 2025-04-09

External Python transforms can now be created as source-based external transforms, supporting all of the features of egress-based external transforms and more. Source-based external connections are now also supported for functions.

Key advantages of source-based external transforms include support for the following:

  • An improved developer experience when working with external connections
  • Connecting through agent proxies to systems that are not accessible from the Internet
  • Rotating or updating credentials without requiring code changes
  • Sharing connection configuration across multiple repositories
  • Improved and simplified governance workflows
  • Simplified governance for egress, exportable markings, and credentials

Configure a source

Configure a source to allow code-based connections by enabling both exports to the source and code imports. Enabling exports provides the ability to egress to the source, while enabling code imports allows access to properties of the source, including secret values.

Navigate to Connection settings > Export Configuration, and toggle on Enable exports to this source. Then, navigate to the Code import configuration page and toggle on Allow this source to be imported into code repositories.

Toggle Enable exports to this source within the source connection settings.

Toggle Allow this source to be imported into code repositories within the source connection settings.

External Python transforms

Source-based external transforms are the recommended way to create external transforms. Note that the egress-based approach will soon move to the legacy phase of development.

Start from the code repository or VS Code Workspace sidebar, and select the External systems tab. Follow the provided prompts to install the transforms-external-systems library, add the source to the repository, and view the example usage provided by the source.
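
As a rough illustration of the resulting pattern, the sketch below shows what a source-based external transform might look like once the library is installed and a source has been imported. The source RID, output path, secret name, and accessor methods shown here are assumptions for illustration only; rely on the example usage generated for your specific source.

```python
# Illustrative sketch only: the source RID, output path, secret name, and
# accessor method are placeholders; follow the example generated by your source.
from transforms.api import transform, Output
from transforms.external.systems import external_systems, Source


@external_systems(
    my_source=Source("ri.magritte..source.example")  # hypothetical source RID
)
@transform(output=Output("/Project/examples/external_output"))  # hypothetical path
def compute(my_source, output):
    # Connection details and credentials are resolved from the source at runtime,
    # so they can be rotated or updated without code changes.
    token = my_source.get_secret("additionalSecretToken")  # hypothetical secret name
    # ... call the external system with the credential, then write results, e.g.:
    # output.write_dataframe(df)
```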

Use VS Code Workspaces to get the most up-to-date development experience for source-based external transforms.

Select the External systems tab, import a source, and use the example provided by the source.

External functions

Start from the code repository sidebar and select the Resource imports tab. Add the source to the repository, and view the example usage provided by the source.

Create an external TypeScript function using the @ExternalSystems decorator.

Learn more

If you would like to learn more about the topics above, consider reviewing the following resources:

Your feedback matters

Your insights are crucial in helping us understand how we can improve data connections. Share your feedback through Palantir Support channels and our Developer Community ↗ using the data-connection tag ↗.


Introducing new PDF Viewer widget capabilities and configurations

Date published: 2025-04-03

We are excited to share that various new configuration options have been added to the PDF Viewer widget, enabling new capabilities such as inline actions on existing annotations, automatic scrolling to annotation objects, events on new selections, and more:

Inline Edit Annotation action on hover of an existing text annotation.

  • Inline actions on existing annotations: Builders may now configure inline actions on existing annotations. Actions may be configured to show up within the tooltip popover on hover of an annotation. The hovered object may be referenced and passed in as an action input parameter using the hovered object variable.

  • Automatic scrolling to annotations: Builders may now set an object set variable containing a single annotation object from the annotation object set; the widget will automatically scroll to that annotation.

  • Events on new selections: Builders may now additionally configure events to be triggered on new text and/or area selections. Previously, only actions could be configured to trigger on new selections.

  • Active page number: A numeric variable may now be used to capture the page number a user is currently viewing and/or to change the current page displayed by the widget.

  • New output variables for user selections: Two new output variables have been added to the PDF Viewer widget, allowing builders to capture and use a user’s selections within the PDF. Output user selected coordinates captures the coordinates of a user’s selection on the PDF as a string output variable. Output user selected page number captures the page number on which the user made a selection as a numeric output variable.

Review PDF Viewer widget documentation to learn more about the new configuration options.

We want to hear from you

As we continue to develop Workshop, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ and use the workshop tag.


Redefining application variables in AIP agents

Date published: 2025-04-01

To better integrate the AIP Interactive Widget with other widgets in your Workshop modules, we have significantly improved the application variable system for AIP agents. Application variables within AIP agents can be used as deterministic outputs from tools and ontology context. The AIP Interactive Widget also offers additional variables that can be used to create new sessions from external widgets and automatically send messages using Workshop events.

Access mode configuration replaced with new application variable update tool

The configuration for access mode previously allowed you to determine whether the agent could update a variable or if the value was "read-only." Internally, this involved using a tool to update application state, which required the agent to specify the variable UUID for updates. However, this method was potentially unreliable, as the agent sometimes failed to apply updates before returning the final response. To address these issues, we replaced this access mode with the Update application variable tool, featuring enhanced prompting. When creating a variable, you will now need to manually add the variable with this tool to align with the read/write access configuration.

Existing agents do not need to be updated, as we performed a migration in the backend.

This Update application variable tool enhances transparency by clearly revealing the underlying processes and allows users greater flexibility in configuring the system. Additionally, it enables the language model to specify updates by variable name rather than by ID, resulting in improved performance. Consequently, the variable ID is no longer included in the prompt.

The update application variable option in the Add tool dropdown menu.

Introducing value visibility for variables

The LLM does not need to know about every variable; for example, a variable you use as input to function RAG, ontology context RAG, and so on may have no purpose in the compiled prompt. Before this update, each variable was automatically included in the prompt (comprising the name, current value, and description). You can now choose to remove a variable's visibility from the compiled system prompt. We recommend making a variable visible only when necessary, as reducing the amount of context provided to the LLM can decrease confusion and improve accuracy.

New option to hide the variable value from an agent.

Deterministic updates for variables

The Update application variable tool introduces an additional step in the thought process that is not always necessary. We observed that users often anticipate variable updates following the ontology context RAG, functions, and the object query tool. To accommodate this, we are introducing the capability to configure a variable as a "deterministic" output for each of these scenarios. When using the tool, ensure that the variable type matches the output type of the respective tool or context.

In most cases, we strongly recommend prioritizing deterministic updates whenever feasible, rather than relying on the Update application variable tool. For the Object query tool, you can designate an output object set variable for each configured object type. With the Call function tool, you can map functions that have either string or object set outputs to a corresponding variable of the same type. Regarding ontology RAG, you can select an output object type variable that will update with the K most relevant objects after each response.

Select an object set output variable to update with the K most relevant objects after each response.

Deterministic input for object query tool

The Object query tool can also be provided with an initial variable rather than having the agent specify the starting object set. This can be done by mapping the input variable for each object type in the tool configuration. This is useful if you want the Object query tool to start from a pre-filtered object set without any additional prompting.

Where desired, provide an initial variable instead of having the agent specify the starting object set.

Ontology context citation object set variable

Selecting an ontology context citation in the agent response will link out to object views in a new tab. To keep users within the same module, we added a citation variable to the ontology context configuration. When a citation is selected, this variable will update with a static object set containing just the citation object. This is useful for showing a preview of the object in another widget alongside the interactive widget, and more.

Select a citation variable in the ontology context configuration to update with a static object set containing just the citation object.

AIP Interactive Widget configuration updates

The AIP Interactive Widget has an updated configuration panel to improve integration within your Workshop module. The new textbox variable refers to the text of the user textbox. As the user enters text, the variable automatically updates to match the value in the textbox. If the variable is changed from outside the AIP Interactive Widget, a new message with the current value of the variable is sent.

Additionally, to connect the active session to a string Workshop variable, we introduced an active session identifier variable that always stays up to date with the current sessionRid. If you change this variable from outside the AIP Interactive Widget, the widget will either automatically create a new session or switch to an existing session if the new value contains that session's sessionRid.

We also added a Boolean toggle to hide session history by default. If this toggle is true, regardless of the width of the widget, session history will automatically be collapsed when the module loads.

Share your feedback

We want to hear what you think about these updates. Send your feedback to our Palantir Support teams, or share in our Developer Community ↗ using the aip-agents tag ↗.