Announcements

REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.

Share your thoughts about these announcements in our Developer Community Forum ↗.


New Media Set Transformation API in Python transforms

Date published: 2025-10-16

A new media set transformation API is now available in Python transforms across all enrollments. This API enables users to perform both media and tabular transformations on media sets, with the ability to output both media sets and datasets. Previously, users needed to construct complex requests to interact with media set transformations. Now, the API provides comprehensive methods for all supported transformations across different media set schema types.

With this new API, users no longer need to write custom logic for tasks such as iterating over pages in document media sets or implementing parallel processing. Transformations can be applied to entire media sets or individual media items. Additionally, the API supports chaining transformations for media-to-media workflows. For example, you can slice a document media set and then convert the resulting pages to images in a single line.

Code example using the new API.
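
For illustration, here is a minimal sketch of what such a transform could look like. The @transform decorator and the MediaSetInput/MediaSetOutput classes come from the Python transforms API, while slice_pages and pages_to_images are hypothetical placeholders for the new transformation methods; consult the API reference for the actual methods available for your media schema.

    from transforms.api import transform
    from transforms.mediasets import MediaSetInput, MediaSetOutput


    @transform(
        documents=MediaSetInput("/Project/folder/input_documents"),
        pages=MediaSetOutput("/Project/folder/page_images"),
    )
    def compute(documents, pages):
        # Hypothetical chained media-to-media transformation: slice the
        # document media set into pages, then convert each page to an
        # image, writing the results to the output media set.
        documents.slice_pages().pages_to_images(output=pages)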

Check out the complete API reference and list of available transformations for each media schema, with examples.

Your feedback matters

We want to hear about your experience and welcome your feedback as we develop the media set experience in Python transforms. Share your thoughts with Palantir Support channels or on our Developer Community using the media-sets tag.


Remove inherited organization markings from inputs in Pipeline Builder

Date published: 2025-10-16

In Pipeline Builder, you can now remove inherited organizations from outputs, in addition to markings. Note that this removal only applies to current organizations; future organization changes will not be automatically removed, and data access continues to rely on project-level organizations.

Remove inherited organizations

Previously, you could only remove inherited markings from outputs. Now, with the right permissions, you can also remove inherited organizations at an input level directly in Pipeline Builder.

Use the Remove all inputs option or remove inputs one by one to remove inherited organizations from a set of inputs.

To do this, first protect your main branch and create a branch off of it. Then, navigate to Pipeline outputs on the right side of your screen and select Edit on the output.

Select Edit on the output on which you would like to remove inherited markings and organizations.

On your output, select Configure Markings, and then navigate to the Organizations tab. On this tab, you can remove inherited organizations using the Remove all inputs option, or remove them at an input level. This gives you greater flexibility and control over access requirements for your outputs, aligning with how you manage markings.

Example of an organization marking removal

To fully remove an organization marking, you must remove all inputs containing that organization. For example, to remove the Testers organization in the screenshot below, you would need to remove both the first and second inputs (assuming none of the other inputs have the Testers organization).

Remove an organization marking by deleting all inputs containing it. In this example, this means both inputs with the Testers organization.

Learn more about removing organizations and markings in Pipeline Builder.

Your feedback matters

We want to hear about your experience with Pipeline Builder and welcome your feedback. Share your thoughts with Palantir Support channels, or on our Developer Community ↗ using the pipeline-builder tag ↗.


Ontology resources now support branch protection and project-level policies

Date published: 2025-10-16

The Ontology now supports fine-grained governance through main branch protection and project-level policies when using Foundry Branching. This capability is available for resources that have been migrated to project permissions, extending the same change control processes previously available only for Workshop modules.

What’s new

  • Resource protection: Protect Ontology resources individually or in bulk from your file system. Protected resources require changes to be made through branches, ensuring greater oversight.
  • Customizable approval policies: Define granular approval policies that apply to protected resources in a given project, specifying which users or groups must approve proposed changes before deployment.

Why it matters

This enhancement is part of an ongoing commitment to empower and expand the builder community while maintaining tight controls over change management. By extending these change control processes to Ontology resources, project and resource owners gain more flexibility, security, and confidence when collaborating with others through Foundry Branching.

To read more about this feature, review the documentation on protecting resources.

For more information, you may also review the previous Workshop announcement from when this feature was first released.


Claude 4.5 Sonnet now available in AIP

Date published: 2025-10-14

Claude 4.5 Sonnet is now available from Vertex, Bedrock, and Anthropic Direct for US and EU enrollments.

Model overview

Claude 4.5 Sonnet is a high-performance model that is currently regarded as Anthropic’s best model for complex agents and coding. Comparisons between Claude 4.5 Sonnet and other models in the Anthropic family can be found in the Anthropic documentation ↗.

  • Context Window: 200,000 tokens
  • Modalities: Text and image input | Text output
  • Capabilities: Extended thinking, Function calling

Getting started

To use these models:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.


No-code model training and deployment with Model Studio

Date published: 2025-10-14

Model Studio, a new workspace that allows users to train and deploy machine learning models, will be available in beta the week of October 13. Model Studio transforms the complex task of building production-grade models into a streamlined no-code process that makes advanced machine learning more accessible. Whether you are a data scientist looking to accelerate your workflow, or a business user eager to unlock insights from your data, Model Studio provides essential tools and a user-friendly interface that simplifies the journey from data to model.

The Model Studio home page, displaying recent training runs and run details.

What is Model Studio?

Model Studio is a no-code model development tool that allows you to train models for tasks such as forecasting, classification, and regression. With Model Studio, you can maximize model performance for your use cases by training models on custom data, while retaining control over the training process through optional parameter configuration.

Building useful, production-ready models traditionally requires deep technical expertise and significant time investment, but Model Studio changes that by providing the following features:

  • A streamlined point-and-click interface for configuring model training jobs; no coding required.
  • Built-in production-grade model trainers tailored for common use cases such as time series forecasting, regression, and classification.
  • Smart defaults and guided workflows that empower you to get started quickly, even if you are new to machine learning.
  • In-depth experiment tracking with integrated performance metrics that allow you to monitor and refine your models with confidence.
  • Full data lineage and secure access controls built on top of the Palantir platform, ensuring transparency and security at every step.

Who should use Model Studio?

Model Studio is perfect for technical and non-technical users alike. Business users who want to leverage machine learning without coding and data scientists who want to accelerate prototyping and model deployment can both benefit from Model Studio's tools and simplified process. Additionally, organizations can benefit from Model Studio by lowering the barrier to AI adoption and empowering more teams to build and use models.

Getting started

To get started with Model Studio, navigate to the Model Studio application and create your own model studio. From there, you can take the following steps to get started with model training:

  • Select the best model trainer for your use case (time series forecasting, classification, or regression).
  • Choose your input datasets.
  • Configure your model using intuitive options, or stick with the recommended defaults.

After configuring your model, you can launch a training run and review model performance in real time with clear metrics and experiment tracking.

What's next on the development roadmap?

As Model Studio continues to evolve, we are committed to enhancing the user experience. To do so, we will introduce features such as enhanced experiment logging for deeper training performance insights and an expanded set of supported modeling tasks.

Tell us what you think

As we continue to develop Model Studio, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.

Learn more about Model Studio.


Optimize AI performance and cost with experiments in AIP Evals

Date published: 2025-10-09

Experiments are now available in AIP Evals, enabling users to test function parameters such as prompts and models to identify the values that deliver the highest quality outputs and the best balance between performance and cost. Previously, systematic testing of parameter values in AIP Evals was a time-consuming manual process that required individual runs for each parameter value. With experiments, users can automate testing and optimize AI solutions more efficiently.

What are experiments?

Experiments in AIP Evals allow you to launch a collection of parameterized evaluation runs to help optimize the performance and cost of your tested functions. You can define multiple parameter values at once, which AIP Evals will test in all possible combinations using grid search in separate evaluation suite runs. Afterwards, you can analyze experiment results to identify the parameter values with the best performance.
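
Under the hood, grid search simply takes the Cartesian product of the value lists you provide. Below is a minimal Python illustration of the combinatorics (this is not the AIP Evals API; the parameter names and values are hypothetical):

    from itertools import product

    # Hypothetical experiment parameters: each combination becomes one
    # evaluation suite run, so 3 models x 2 prompts = 6 runs.
    parameters = {
        "model": ["model-small", "model-medium", "model-flagship"],
        "prompt": ["baseline", "few-shot"],
    }

    names = list(parameters)
    runs = [dict(zip(names, values)) for values in product(*parameters.values())]

    for run in runs:
        print(run)  # e.g. {'model': 'model-small', 'prompt': 'baseline'}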

A step-by-step representation of the experiments process.

Leverage experiments

Experiments have already been used to discover significant optimization opportunities, and they work with AIP Logic functions, agents published as functions, and functions on objects.

Some example use cases include the following:

  • Testing whether lightweight LLMs can deliver high consistency with prior production outputs at a fraction of the cost of flagship models.
  • Identifying common tasks that can be implemented using regular models before defaulting to premium options.
  • Improving prompt engineering by efficiently testing how changes, such as adding context or few-shot examples, affect performance.

Getting started

To get started with experiments, refer to the documentation on preparing your function and setting up your experiment. You can parameterize parts of your function, define the experiment parameters you want to test with different values, and specify the value options you want to explore in the experiment.

Defining experiment parameters in the Run configuration dialog.

Once the evaluation runs are complete, you can analyze the results in the Runs table, where you can group by parameter values to easily compare aggregate metrics and determine which option performed best. You can select up to four runs to compare and drill down into test case results and logs.

The Runs table in AIP Evals, filtered down to an experiment with evaluation runs grouped by model.

By automating parameter testing and surfacing the best-performing configurations, experiments can help you refine your AI workflows and deliver higher-quality results. Explore this feature to streamline your evaluation process and unlock new opportunities to optimize AI-driven initiatives.

Learn more about experiments in AIP Evals.

Your feedback matters

As we continue to develop new AIP Evals features and improvements, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the aip-evals tag ↗.


Extract and analyze document context with AIP Document Intelligence

Date published: 2025-10-08

AIP Document Intelligence is now available in beta, enabling you to extract and analyze content from document media sets in Foundry. Because this is a beta product, functionality and appearance may change as we continue active development.

The AIP Document Intelligence application, displaying the configuration page for document extraction.

Why this matters

Document extraction is foundational to enterprise AI workflows. The quality of AI solutions depends heavily on extracting and preparing domain-specific data for LLMs. Customers consistently highlight document extraction as essential yet time-consuming: complex strategies leveraging VLMs (vision language models), OCR (optical character recognition), and layout extraction often require hours of developer time and workarounds for product limitations.

Key capabilities

AIP Document Intelligence streamlines this process. Users can now:

  • Choose between traditional extraction methods (raw text, OCR, layout-aware OCR) and generative AI approaches
  • Combine preprocessing techniques with VLMs for complex documents, giving models additional context for better accuracy
  • Quickly execute state-of-the-art extraction strategies on sample enterprise documents
  • View evaluations of quality, speed, and token cost across different approaches
  • Deploy the optimal extraction strategy to a Python transform with a single click to process entire media sets

This beta release supports text and table extraction into Markdown format. Future releases will expand to entity extraction and complex engineering diagrams.

An example of an output preview against a raw PDF in AIP Document Intelligence.

Getting started

To enable AIP Document Intelligence on your enrollment, navigate to Application access in Control Panel.

We want to hear from you

With this beta release, we are eager to hear about your experience and feedback using extraction methods with AIP Document Intelligence. Share your feedback with Palantir Support channels or in our Developer Community ↗ using the aip-document-intelligence tag ↗.


Debug view for AIP Evals now available in AIP Logic and Agent Studio sidebars

Date published: 2025-10-07

AIP Evals now provides an integrated debug view directly within the Results dialog, accessible from the AIP Logic and Agent Studio sidebars. This new view allows you to access debugging information without opening the separate metrics dashboard, making it easier to analyze evaluation results. With the debug view, you can:

  • Navigate between test case results and debug information in a single view

  • Use the native Logic debugger for tested functions and evaluation functions

  • Preview syntax-highlighted code for TypeScript and Python functions

  • Review evaluator inputs and pass/fail reasoning

Debug view for a test case.

For more information, review the documentation.

What's next

The debug view will be integrated into the native AIP Evals application in a future release. As we continue to refine the design and user experience, you may notice incremental UI improvements over time.

Share your feedback

Let us know what you think about this feature by sharing your thoughts with Palantir Support channels, or on our Developer Community ↗ using the aip-evals tag ↗.


Grok 4 Fast Reasoning, Grok 4 Fast Non-Reasoning, and Grok Code Fast 1 (xAI) are now available in AIP

Date published: 2025-10-07

Three new Grok models from xAI are now available in AIP: Grok 4 Fast Reasoning, Grok 4 Fast Non-Reasoning, and Grok Code Fast 1. These models are available for enrollments with xAI enabled in the US and other supported regions. Enrollment administrators can enable these models for their teams.

Model overview

Grok 4 Fast Reasoning is best suited for complex reasoning tasks, advanced analysis, and decision-making. It delivers high-level performance for complex decision support, research, and operational planning at a fraction of the cost of Grok 4.

Grok 4 Fast Non-Reasoning is optimized for lightweight tasks such as summarization, extraction, and routine classification.

Grok Code Fast 1 is designed for high-speed code generation and debugging, making it ideal for software development, automation, and technical problem-solving.

Comparisons between these Grok models and guidance on their optimal use cases can be found in xAI’s documentation ↗. As with all new models, use-case-specific evaluations are the best way to benchmark performance on your task.

Getting started

To use these models:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.


Spreadsheets are now supported as a media set schema type

Date published: 2025-10-02

Spreadsheet media sets are now generally available, allowing you to upload, preview, and process spreadsheet (XLSX) files directly within Foundry's media sets. This enables powerful LLM-driven workflows with tabular data that was previously difficult to handle.

Organizations frequently need to archive and process data from poorly defined sources, such as manufacturing quotes, progress reports, and status updates that arrive in spreadsheet format. Until now, media sets did not support previews for spreadsheets, and tools for converting spreadsheets to datasets were not suitable for these workflows.

What are spreadsheet media sets?

Spreadsheet media sets allow you to work with tabular data that was designed for human consumption and is difficult to process using traditional programming methods. The primary supported format is XLSX (Excel).

Spreadsheet media sets are ideal for processing unstructured spreadsheets in scenarios such as:

  • Files with significant formatting differences between versions
  • Spreadsheets where the structure is not known ahead of time (including email attachments, ad-hoc reports, and files from third-party vendors)
  • Storing and displaying source data alongside processed datasets
  • Supporting LLM-driven extraction and analysis workflows

Spreadsheet media sets are also an excellent way to maintain your original source of truth for referencing from downstream transformations or ingestions.

Key capabilities

  • Upload and preview: Upload XLSX files to media sets and view interactive previews that render spreadsheet content directly in Foundry. The preview provides a familiar tabular view of your data without requiring file downloads.

A preview of spreadsheet content uploaded to a media set.

  • Text extraction for LLM processing: Extract spreadsheet content as JSON for use in LLM-powered workflows, as illustrated in the sketch after this list. This enables intelligent processing of tabular data that might have inconsistent formatting or meaningful layout structure, such as merged cells.
  • Workshop integration: Spreadsheet media sets are fully integrated with Workshop, allowing you to preview spreadsheets directly in your workflow, view and create annotations, and scroll through content seamlessly.
  • Pipeline Builder support: Use Pipeline Builder expressions to extract and transform spreadsheet data within your pipelines, making it easy to incorporate spreadsheet processing into your workflows.
  • Python transforms in Code Workspaces: Perform advanced transformations in Code Workspaces using the transforms-media package.
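
As a rough illustration of the kind of JSON extraction this enables, the sketch below flattens an XLSX file into JSON using the open-source openpyxl library. This is not the Foundry extraction API, and quotes.xlsx is a hypothetical file; the sketch only shows the shape of output that is convenient to hand to an LLM.

    import json

    from openpyxl import load_workbook

    # Serialize every sheet's rows as lists of cell values; default=str
    # handles dates and other non-JSON-serializable cell types.
    workbook = load_workbook("quotes.xlsx", read_only=True)
    extracted = {
        sheet.title: [list(row) for row in sheet.iter_rows(values_only=True)]
        for sheet in workbook.worksheets
    }

    print(json.dumps(extracted, default=str, indent=2))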

What's next?

In upcoming releases, we plan to enhance spreadsheet media sets with additional Workshop annotation features, enhanced formatting extraction, more options for text extraction, and improved support for edge cases and embedded data.

Your feedback matters

We want to hear about your experience with spreadsheet media sets and welcome your feedback. Share your thoughts with Palantir Support channels, or on our Developer Community ↗ using the media-sets tag ↗.