REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
use_sidecar: Run models in dedicated containers within Python transforms

Date published: 2025-06-12
Starting from `palantir_models` version 0.1673.0, the `ModelInput` class exposes a `use_sidecar` parameter in Python transforms. When `use_sidecar` is set to `True`, the model runs in a separate container provisioned on top of the machines running the Spark transform itself, ensuring easy, portable, and reliable production usage of models across the platform. This feature prevents dependency conflicts that can occur when importing models built in different repositories or code workspaces into Python transforms for inference. It also guarantees that your models operate with the exact dependencies with which they were built, protecting users from unexpected behavior or runtime failures.
Note that `use_sidecar` is not supported in lightweight transforms, and previewing transforms with a sidecar `ModelInput` is also not supported.
Using `use_sidecar` provides the following benefits:

- `use_sidecar` ensures your model runs in a controlled environment with its original dependencies, preventing conflicts with your transform's libraries.
- `use_sidecar` automatically manages the loading of the correct model adapter code. This removes the need to manually update dependencies in your transform's repository and run checks if the adapter code or dependencies changed with a new model version, and allows you to import multiple models into the same repository without worrying about clashes.
- Resources for the sidecar container can be configured with the `sidecar_resources` parameter.

To load the model in a container, simply set `use_sidecar=True`. No other code changes are necessary.
```python
from transforms.api import Input, Output, transform, TransformInput, TransformOutput
from palantir_models.transforms import ModelInput, ModelAdapter


@transform(
    out=Output('path/to/output'),
    model_input=ModelInput(
        "path/to/my/model",
        use_sidecar=True,
        sidecar_resources={
            "cpus": 2.0,
            "memory_gb": 4.0,
            "gpus": 1
        }
    ),
    data_in=Input("path/to/input")
)
def my_transform(out: TransformOutput, model_input: ModelAdapter, data_in: TransformInput) -> None:
    inference_outputs = model_input.transform(data_in)
    out.write_pandas(inference_outputs.output_df)
```
To learn more, review the `ModelInput` class reference documentation.
Date published: 2025-06-10
You can now create custom roles to grant granular permissions for enrollment-level workflows from the Enrollment permissions page in Control Panel.
Configure your enrollment custom roles through the Enrollment permissions configuration page in Control Panel.
Custom roles are useful in situations when users or groups require permissions for particular workflows that do not match existing default roles. For example, by creating a custom IT group, you can allow permissions for that group to add or modify domains without granting permissions to change ingress or egress settings.
Select the individual workflows to grant to members of a new custom role.
To get started creating custom roles, navigate to Enrollment Permissions settings in Control Panel, or learn more in our public documentation.
For additional assistance with custom roles, contact Palantir Support or visit our Community Forum ↗.
Date published: 2025-06-05
Machinery is an application for modeling real-world events, such as healthcare procedures, insurance assessments, and government operations, as processes that can be explored in real time through custom AI-powered applications tailored to your needs. As of the first week of June, Machinery is now generally available across all enrollments.
Use Machinery to mine or implement a process, identify unwanted behaviors, and make measured progress towards achieving desired outcomes. Additionally, facilitate human intervention to reduce inefficiencies and improve your process performance over time.
Implement a process from scratch, review, and optimize with Machinery.
Common workflows for Machinery include:
Implementing a process in the Palantir platform involves many individual resources, such as object types, actions, and automations. Machinery now provides a comprehensive view for all these components and lets you define an ordered flow of automations and manual actions. Its unique state-centric perspective allows you to make incremental progress towards desired outcomes while handling and resolving the edge cases of your organization. Value types and submission criteria can then provide an additional layer of conformance guarantees.
Automation nodes can be built into a Machinery graph.
Machinery includes a custom auto-layout algorithm that produces visually appealing graphs without the need for manual manipulation. Users can also disable this feature and freely move elements, allowing for customized adjustments tailored to individual preferences.
Create subprocesses and parallel processes to benefit from greater flexibility and control over workflows. Subprocesses allow you to create nested processes within your main Machinery process, providing a structured way to manage complex tasks and enabling seamless integration into the larger workflow. This modular approach also supports parallel processes, allowing multiple processes to run concurrently, thereby enhancing efficiency and reducing bottlenecks.
You can now build multiple linked processes in Machinery, allowing you to model processes acting across your whole organization.
The focus view feature further elevates user experience by allowing users to zoom in on specific subprocesses, providing a detailed view that simplifies navigation and management of intricate workflows. With these capabilities, users can manage complexity across the organization for a higher level of process automation, ultimately leading to improved productivity and streamlined operations.
After configuring the log ontology, users can enter mining mode, which presents a distinct graph highlighting both existing states and potential edits in sepia color. Users can exclude certain states from mining, effectively reducing noise and focusing on relevant data. Use the transition frequency slider to filter out less important nodes, ensuring that only the most critical transitions are highlighted while benefiting from the clean auto-layout graph.
Mining an entire process in Machinery mining mode.
Machinery mining mode, with a transition filter to filter out nodes that have less than 71.5% objects passing through.
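The transition frequency filter described above can be illustrated with a small sketch. This is a hypothetical model for illustration only, not Machinery's actual implementation: each transition is kept only if the fraction of objects passing through it meets the slider threshold.

```python
from collections import Counter


def transition_frequencies(traces):
    """Fraction of traces in which each state-to-state transition occurs.

    `traces` is a list of state sequences, one per object moving
    through the process.
    """
    counts = Counter()
    for trace in traces:
        # Count each distinct transition at most once per trace.
        for transition in set(zip(trace, trace[1:])):
            counts[transition] += 1
    return {t: c / len(traces) for t, c in counts.items()}


def filter_transitions(freqs, threshold):
    """Keep only transitions whose frequency meets the slider threshold."""
    return {t: f for t, f in freqs.items() if f >= threshold}


# Three example objects moving through a toy review process.
traces = [
    ["received", "reviewed", "approved"],
    ["received", "reviewed", "rejected"],
    ["received", "reviewed", "approved"],
]
freqs = transition_frequencies(traces)
# At a 71.5% threshold, only the transition every object takes survives.
common = filter_transitions(freqs, 0.715)
```

With these traces, `received → reviewed` occurs in every trace (frequency 1.0), so it is the only transition that survives the 71.5% filter; the two outcome transitions occur in a minority of traces and are hidden.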
To operationalize your process in the Palantir platform, you can quickly bootstrap a Workshop module ("Machinery Express application") with a single click to initiate your application development. The express application serves as a dynamic playground where you can conduct analysis and intervene with your agentic workflows in real-time. Then, jump back into Machinery to refine, update, or optimize your processes with actions, automations, and AIP logic functions.
Whether you choose to use Machinery Express as a ready-to-use analysis tool or as a foundation to build your own applications, it facilitates a fluid interaction between process exploration and refinement. Additionally, the Machinery Express application enables you to immediately share this process with your operational users, providing them with the necessary context to effectively engage with the system in real time.
Generate a Machinery Express application to help get you started on application development in Workshop.
If you prefer to build a new application from scratch or add Machinery to an existing Workshop module, you may add a Machinery Process Overview widget in Workshop.
For more information on Machinery, review the documentation.
We want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ and use the machinery ↗ tag.
Date published: 2025-06-05
As part of Palantir’s partnership with Databricks, enhanced connectivity options offer a range of capabilities on top of data, compute, and models for a more seamless integration of the Palantir and Databricks platforms. In particular, virtual tables, compute pushdown, and external models are now generally available.
| Capability | Status |
|---|---|
| Exploration | 🟢 Generally available |
| Bulk import | 🟢 Generally available |
| Incremental | 🟢 Generally available |
| Virtual tables | 🟢 Generally available |
| Compute pushdown | 🟢 Generally available |
| External models | 🟢 Generally available |
For detailed guides, see Palantir’s updated Databricks documentation.
Palantir now offers enhanced virtual table capabilities on top of data in Databricks, including:
See the Databricks virtual tables documentation for more details on registering and using virtual tables from Databricks.
The ability to push down compute to Databricks is now available. When using virtual tables as inputs and outputs to a pipeline that are registered to the same Databricks source, it is possible to fully federate the compute to Databricks. This capability leverages Databricks Connect ↗ and is currently available in Python transforms.
See the Databricks compute pushdown documentation for syntax details and a quickstart example.
Databricks models registered in Unity Catalog can be integrated into the Palantir platform via externally-hosted models and external transforms. This allows Databricks models to be leveraged operationally by Palantir users, pipelines, and workflow applications.
For more details, see the Databricks external models documentation.
Share your feedback about Palantir’s Databricks integration by contacting our Palantir Support teams, or let us know in our Developer Community ↗ using the databricks tag ↗.
Date published: 2025-06-05
Virtual table outputs are now supported in Pipeline Builder and Code Repositories. A virtual table acts as a pointer to a table in a source system outside the Palantir platform, and allows you to use that data in-platform without ingesting it. Virtual tables were previously only available as inputs to Foundry data transformations, meaning any output datasets would be stored in Foundry. Now you can orchestrate entire pipelines with logic authored in Foundry, and data stored externally.
You can add virtual table outputs as you would any other Pipeline Builder output. Select a node and choose the new virtual table output type.
A virtual table output in Pipeline Builder.
When configuring your output in Code Repositories, select the new virtual table type. You will then be prompted to configure your output source.
Configuring a virtual table output in Code Repositories file templates.
Note that query compute may be split between Foundry and the source system for:
Table support is improving across the Palantir platform. Upcoming work includes:
Share your feedback about virtual table outputs by contacting our Palantir Support teams, or let us know in our Developer Community ↗ using the virtual-tables tag ↗.
Date published: 2025-06-05
Virtual tables can now be created in bulk for tabular source types, such as Databricks, BigQuery, and Snowflake. Select Create virtual table, and you will now be able to create one or more virtual tables at once for supported sources.
Creating a new virtual table.
To bulk register virtual tables in Data Connection, select external tables in the left panel, and choose where to save your new virtual tables in Foundry in the right panel.
An example of virtual table bulk registration in Data Connection.
Learn more about bulk registering virtual tables.
Share your feedback about virtual table bulk registration by contacting Palantir Support, or let us know in our Developer Community ↗ using the virtual-tables tag ↗.
Date published: 2025-06-03
New transform input types are now available to simplify the creation of transforms for vision LLM-based extraction workflows. These transform input types abstract common logic and enable users to select their desired level of customization. Writing transforms to convert PDF content into Markdown is now more efficient, while maintaining flexibility for users who want to customize their workflows.
Vision LLMs can extract information from complex documents with mixed content, such as tables, figures, and charts, with high accuracy. To implement these vision LLM-based workflows, custom logic needs to be written in transforms that are applied to media sets containing PDF documents. Previously, multiple complex steps had to be implemented, such as image conversion and encoding. We now provide transform input types that simplify and expedite this process.
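As an illustration of the boilerplate these inputs remove, consider just the encoding step: before a rendered page image can be sent to a vision LLM, it must be base64-encoded into the request payload. A minimal sketch of that one step follows; the payload shape shown is hypothetical, not the exact request format, and real workflows previously also had to render and resize the page image.

```python
import base64


def encode_page_image(image_bytes: bytes) -> str:
    """Base64-encode a rendered PDF page image for a vision LLM request."""
    return base64.b64encode(image_bytes).decode("ascii")


# Hypothetical request fragment, for illustration only. The bytes here
# are a placeholder standing in for a rendered page image.
image_part = {
    "type": "image",
    "media_type": "image/png",
    "data": encode_page_image(b"\x89PNG\r\n\x1a\n"),
}
```

The new extractor inputs handle this step, along with page splitting, image conversion, and resizing, internally.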
The following transform input types are now available:
- `VisionLLMDocumentsExtractorInput`: Processes PDF media sets by taking each media item and splitting it into individual pages. These pages are converted into images and sent to the vision LLM. This option is recommended for cases where custom image processing is not necessary, and a solution that handles every step of the process is preferred.
- `VisionLLMDocumentPageExtractorInput`: Processes individual pages of a PDF document. This option is recommended in cases where users want more flexibility and control over the extraction process. For example, users can apply custom image processing, or handle splitting PDF pages with custom logic.

These new transform inputs abstract common document extraction logic, including image conversion, resizing, encoding, and a default prompt that is carefully tuned for document extraction. In addition to a simplified interface, users have the option to provide a custom prompt, and can customize image processing as needed with the `VisionLLMDocumentPageExtractorInput` type.
Below is a sample implementation, demonstrating a significantly shorter and simplified Python transform:
```python
from transforms.api import Output, transform
from transforms.mediasets import MediaSetInput
from palantir_models.transforms import VisionLLMDocumentsExtractorInput


@transform(
    output=Output("ri.foundry.main.dataset.abc"),
    input=MediaSetInput("ri.mio.main.media-set.abc"),
    extractor=VisionLLMDocumentsExtractorInput(
        "ri.language-model-service.language-model.anthropic-claude-3-7-sonnet")
)
def compute(ctx, input, output, extractor):
    extracted_data = extractor.create_extraction(input, with_ocr=False)
    output.write_dataframe(
        extracted_data,
        column_typeclasses={
            "mediaReference": [{"kind": "reference", "name": "media_reference"}]
        },
    )
```
A Python transform implementation using the new `VisionLLMDocumentsExtractorInput`.
Vision LLM-based document extraction and parsing has become one of the most prevalent workflows in Foundry, and creating transforms for these workflows is now more efficient than ever before. To learn more, review the vision LLM-based extraction documentation.
We want to hear about your experiences with transforms, and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the transforms-python ↗ and language-model-service ↗ tags.
Date published: 2025-06-03
The Platform SDK Resources page is now available in Developer Console, allowing developers to manage application access to Foundry resources. Developers can now configure application access to projects, define client-allowed operations, and control access to designated API namespaces. The Platform SDK Resources page offers comprehensive security and compliance settings to help ensure that integrations align with organizational policies, while facilitating secure and seamless interactions with Foundry resources.
The new Platform SDK Resources page, displaying the Project access and Client-allowed operations sections.
In the example shown above, the Project access section allows developers to choose the projects that an application can interact with, while the Client-allowed operations section allows developers to choose the methods that can be used to interact with the selected projects.
As of Spring 2025, new Developer Console applications enforce API-level security for scoped applications, ensuring that every endpoint called by these applications is explicitly added to the client-allowed operations in the Platform SDK resources page.
With this new level of security, access is only granted to API namespaces, ensuring that application administrators can control the actions that applications take in their organization. Prior to these changes, granting an API namespace scope provided access to the namespace's endpoints as well as any dependent endpoints in other namespaces. These new, more secure API scopes are isolated, providing access only to the endpoints shown in the Client-allowed operations section.
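The difference between legacy and isolated scopes can be sketched with a toy model. The namespace names, endpoint names, and dependency data below are hypothetical and purely illustrative; they are not Foundry's actual namespaces.

```python
# Hypothetical namespace dependency graph, for illustration only.
NAMESPACE_DEPENDENCIES = {
    "ontologies": {"datasets"},
    "datasets": set(),
}


def legacy_endpoints(granted, endpoints_by_namespace):
    """Legacy behavior: a namespace scope also exposed dependent namespaces."""
    reachable = set(granted)
    frontier = list(granted)
    while frontier:
        namespace = frontier.pop()
        for dep in NAMESPACE_DEPENDENCIES.get(namespace, set()):
            if dep not in reachable:
                reachable.add(dep)
                frontier.append(dep)
    return {e for ns in reachable for e in endpoints_by_namespace.get(ns, set())}


def isolated_endpoints(allowed_operations):
    """New behavior: only explicitly allow-listed operations are callable."""
    return set(allowed_operations)


endpoints = {
    "ontologies": {"ontologies.read"},
    "datasets": {"datasets.read", "datasets.write"},
}
# Granting one namespace under the legacy model transitively exposes
# dependent namespaces; the isolated model grants only what is listed.
legacy = legacy_endpoints({"ontologies"}, endpoints)
isolated = isolated_endpoints({"ontologies.read"})
```

In this sketch, the legacy grant reaches the dependent `datasets` endpoints as well, while the isolated grant is limited to the single allow-listed operation, mirroring the behavior described above.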
This new level of security applies to all new Developer Console applications; to benefit from these new security features, migrate your application by following our step-by-step guide.
The migration callout for legacy Developer Console applications.
To get started with the Platform SDK Resources page, navigate to the SDK section of your Developer Console application and select Resources > Platform SDK Resources.
Navigating to the Platform SDK resources page in Developer Console.
On the Platform SDK Resources page, you can manage the resources and operations that your application client has access to. To add additional resources to the client, select Add Project and choose the project you want to add to the client's scope. After saving, your application will have access to the resources in the project.
To modify the operations that can be performed by the client, navigate to the Client-allowed operations section and use the toggles to define the operations that your application has access to. Changes made in this section apply to all existing SDK versions without the need to generate a new SDK.
Using the Platform SDK Resources page to modify the resources and operations available to the application client.
We are currently working on documentation for our TypeScript, Python, and Java platform SDKs, in addition to enhancing the application client creation flow to improve the developer experience.
We are always happy to engage with you in our Developer Community ↗. Share your thoughts and questions about the OSDK and Developer Console with Palantir Support channels or on our Developer Community using the ontology-sdk ↗ tag.
For more information, refer to the platform SDK resources and API security and migration documentation.