REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2025-10-16
A new media set transformation API is now available in Python transforms across all enrollments. This API enables users to perform both media and tabular transformations on media sets, with the ability to output both media sets and datasets. Previously, users needed to construct complex requests to interact with media set transformations. Now, the API provides comprehensive methods for all supported transformations across different media set schema types.
With this new API, users no longer need to write custom logic for tasks such as iterating over pages in document media sets or implementing parallel processing. Transformations can be applied to entire media sets or individual media items. Additionally, the API supports chaining transformations for media-to-media workflows. For example, you can slice a document media set and then convert the resulting pages to images in a single line.
Code example using the new API.
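As a rough sketch of what a chained media-to-media workflow could look like under the new API: the media set paths and the methods slice_to_pages, convert_to_images, and write_to below are illustrative assumptions, not the confirmed API surface.

```python
# Illustrative sketch only: the transformation methods below
# (slice_to_pages, convert_to_images, write_to) are hypothetical
# stand-ins for the new media set transformation API.
from transforms.api import transform
from transforms.mediasets import MediaSetInput, MediaSetOutput


@transform(
    documents=MediaSetInput("/Project/documents"),       # hypothetical path
    page_images=MediaSetOutput("/Project/page_images"),  # hypothetical path
)
def compute(documents, page_images):
    # Chain media-to-media transformations: slice each document into
    # pages, then convert the resulting pages to images. Iteration over
    # pages and parallel processing are handled by the API.
    documents.slice_to_pages().convert_to_images().write_to(page_images)
```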
We want to hear about your experience and welcome your feedback as we develop the media set experience in Python transforms. Share your thoughts with Palantir Support channels or on our Developer Community using the media-sets tag.
Date published: 2025-10-16
In Pipeline Builder, you can now remove inherited organizations from outputs, in addition to markings. Previously, you could only remove inherited markings from outputs; now, with the right permissions, you can also remove inherited organizations at the input level directly in Pipeline Builder. Note that this removal only applies to current organizations: future organization changes will not be automatically removed, and data access continues to rely on project-level organizations.
Use the Remove all inputs option or remove inputs one by one to remove inherited organizations from a set of inputs.
To do this, first protect your main branch, and make a branch off of that protected branch. Then, navigate to Pipeline outputs on the right side of your screen and select Edit on the output.
Select Edit on the output on which you would like to remove inherited markings and organizations.
After going to your output, select Configure Markings, and then navigate to the Organizations tab. On this tab, you can remove inherited organizations by using the Remove all inputs option, or you can remove them on an input level. This gives you greater flexibility and control over access requirements for your outputs, aligning with how you manage markings.
To fully remove an organization marking, you must remove all inputs containing that organization. For example, to remove the Testers organization in the screenshot below, you would need to remove both the first and second inputs (assuming no other inputs have the Testers organization).
Remove an organization marking by deleting all inputs containing it. In this example, this means both inputs with the Testers organization.
Learn more about removing organizations and markings in Pipeline Builder.
We want to hear about your experience with Pipeline Builder and welcome your feedback. Share your thoughts with Palantir Support channels, or on our Developer Community ↗ using the pipeline-builder tag ↗.
Date published: 2025-10-16
The Ontology now supports fine-grained governance through main branch protection and project-level policies when using Foundry Branching. This capability is available for resources that have been migrated to project permissions, extending the same change control processes previously available only for Workshop modules.
This enhancement is part of an ongoing commitment to empower and expand the builder community, while still maintaining tight controls over change management. By extending these change control processes to ontology resources, project and resource owners benefit from more flexibility, security, and confidence when collaborating with others when using Foundry Branching.
To read more about this feature, review the documentation on protecting resources.
You may also review the previous Workshop announcement, published when this feature was first released, for more information.
Date published: 2025-10-14
Claude 4.5 Sonnet is now available from Vertex, Bedrock, and Anthropic Direct for US and EU enrollments.
Claude 4.5 Sonnet is a high-performance model that is currently regarded as Anthropic’s best model for complex agents and coding capabilities. Comparisons between Sonnet 4.5 and other models in the Anthropic family can be found in the Anthropic documentation ↗.
To use these models:
Confirm your enrollment administrator has enabled the Anthropic model family.
Review Token costs and pricing.
See the complete list of all the models available in AIP.
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2025-10-14
Model Studio, a new workspace that allows users to train and deploy machine learning models, will be available in beta the week of October 13. Model Studio transforms the complex task of building production-grade models into a streamlined no-code process that makes advanced machine learning more accessible. Whether you are a data scientist looking to accelerate your workflow, or a business user eager to unlock insights from your data, Model Studio provides essential tools and a user-friendly interface that simplifies the journey from data to model.
The Model Studio home page, displaying recent training runs and run details.
Model Studio is a no-code model development tool that allows you to train models in tasks such as forecasting, classification, and regression. With Model Studio, you can maximize model performance for your use cases by training models with custom data while retaining customization and control over the training process with optional parameter configuration.
Building useful, production-ready models traditionally requires deep technical expertise and significant time investment; Model Studio changes that with a streamlined, no-code training process.
Model Studio is perfect for technical and non-technical users alike. Business users who want to leverage machine learning without coding and data scientists who want to accelerate prototyping and model deployment can both benefit from Model Studio's tools and simplified process. Additionally, organizations can benefit from Model Studio by lowering the barrier to AI adoption and empowering more teams to build and use models.
To get started with Model Studio, navigate to the Model Studio application, create your own model studio, and configure your model.
After configuring your model, you can launch a training run and review model performance in real time with clear metrics and experiment tracking.
As Model Studio continues to evolve, we are committed to enhancing the user experience. To do so, we will introduce features such as enhanced experiment logging for deeper training performance insights, and an expanded set of supported modeling tasks.
As we continue to develop Model Studio, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.
Learn more about Model Studio.
Date published: 2025-10-09
Experiments are now available in AIP Evals, enabling users to test function parameters such as prompts and models to identify the values that deliver the highest quality outputs and the best balance between performance and cost. Previously, systematic testing of parameter values in AIP Evals was a time-consuming manual process that required individual runs for each parameter value. With experiments, users can automate testing and optimize AI solutions more efficiently.
Experiments in AIP Evals allow you to launch a collection of parameterized evaluation runs to help optimize the performance and cost of your tested functions. You can define multiple parameter values at once, which AIP Evals will test in all possible combinations using grid search in separate evaluation suite runs. Afterwards, you can analyze experiment results to identify the parameter values with the best performance.
A step-by-step representation of the experiments process.
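Conceptually, the grid search enumerates every combination of the parameter values you define and launches one evaluation suite run per combination. The following sketch illustrates that idea in plain Python; it is a conceptual model with made-up parameter values, not AIP Evals code:

```python
from itertools import product

# Hypothetical experiment grid: two candidate models and two prompt styles.
parameters = {
    "model": ["model-a", "model-b"],
    "prompt_style": ["terse", "detailed"],
}

# Grid search: each combination of parameter values becomes its own
# evaluation suite run, so aggregate metrics can be compared per combination.
names = list(parameters)
for values in product(*(parameters[name] for name in names)):
    run_config = dict(zip(names, values))
    print(f"Launching evaluation run with {run_config}")
```

With two values for each of two parameters, this launches 2 × 2 = 4 runs; adding a third model would make it 3 × 2 = 6.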
Experiments have been used to discover significant optimization opportunities, and can be used with AIP Logic functions, agents published as functions, and functions on objects.
Example use cases include testing different prompts, comparing models, and balancing output quality against cost.
To get started with experiments, refer to the documentation on preparing your function and setting up your experiment. You can parameterize parts of your function, define the experiment parameters you want to test with different values, and specify the value options you want to explore in the experiment.
Defining experiment parameters in the Run configuration dialog.
When the evaluation runs have been completed, you can analyze the results in the Runs table, where you can group by parameter values to easily compare aggregate metrics and determine which option performed best. You can select up to four runs to compare, and drill down into test case results and logs.
The Runs table in AIP Evals, filtered down to an experiment with evaluation runs grouped by model.
Through automating parameter testing and surfacing the best-performing configurations, experiments can help you refine your AI workflows and deliver higher quality results. Explore this feature to streamline your evaluation process and unlock new opportunities to optimize AI-driven initiatives.
Learn more about experiments in AIP Evals.
As we continue to develop new AIP Evals features and improvements, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the aip-evals tag ↗.
Date published: 2025-10-08
AIP Document Intelligence is now available in beta, enabling you to extract and analyze content from document media sets in Foundry. As a beta product, functionality and appearance may change as we continue active development.
The AIP Document Intelligence application, displaying the configuration page for document extraction.
Document extraction is foundational to enterprise AI workflows. The quality of AI solutions depends heavily on extracting and preparing domain-specific data for LLMs. Our most critical customers consistently highlight document extraction as essential yet time-consuming; complex strategies leveraging VLMs (Vision Language Models), OCR (Optical Character Recognition), and layout extraction often require hours of developer time and workarounds for product limitations.
AIP Document Intelligence streamlines this process, letting users configure and run extractions against document media sets directly in Foundry.
This beta release supports text and table extraction into Markdown format. Future releases will expand to entity extraction and complex engineering diagrams.
An example of an output preview against a raw PDF in AIP Document Intelligence.
To enable AIP Document Intelligence on your enrollment, navigate to Application access in Control Panel.
With this beta release, we are eager to hear about your experience and feedback using extraction methods with AIP Document Intelligence. Share your feedback with our Support channels or in our Developer Community ↗ using the aip-document-intelligence tag ↗.
Date published: 2025-10-07
AIP Evals now provides an integrated debug view directly within the Results dialog accessible from the AIP Logic and Agent Studio sidebars. This new view allows you to access debugging information without opening the separate metrics dashboard, making it easier to analyze evaluation results. The debug view allows you to:
Navigate between test case results and debug information in a single view
Use the native Logic debugger for tested functions and evaluation functions
Preview syntax-highlighted code for TypeScript and Python functions
Review evaluator inputs and pass/fail reasoning
Debug view for a test case.
For more information, review the documentation.
The debug view will be integrated into the native AIP Evals application in a future release. As we continue to refine the design and user experience, you may notice incremental UI improvements over time.
Let us know what you think about this feature by sharing your thoughts with Palantir Support channels, or on our Developer Community ↗ using the aip-evals tag ↗.
Date published: 2025-10-07
Three new Grok models from xAI are now available in AIP: Grok 4 Fast Reasoning, Grok 4 Fast Non-Reasoning, and Grok Code Fast 1. These models are available for enrollments with xAI enabled in the US and other supported regions. Enrollment administrators can enable these models for their teams.
Grok 4 Fast Reasoning is best suited for complex reasoning tasks, advanced analysis, and decision-making. It delivers high-level performance for complex decision support, research, and operational planning at a fraction of the cost of Grok 4.
Grok 4 Fast Non-Reasoning is optimized for lightweight tasks such as summarization, extraction, and routine classification.
Grok Code Fast 1 is designed for high-speed code generation and debugging, making it ideal for software development, automation, and technical problem-solving.
Comparisons between these Grok models and guidance on their optimal use cases can be found in xAI’s documentation ↗. As with all new models, use-case-specific evaluations are the best way to benchmark performance on your task.
To use these models:
Confirm your enrollment administrator has enabled the xAI model family.
Review Token costs and pricing.
See the complete list of all the models available in AIP.
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2025-10-02
Spreadsheet media sets are now generally available, allowing you to upload, preview, and process spreadsheet (XLSX) files directly within Foundry's media sets and enabling powerful LLM-driven workflows with tabular data that was previously difficult to handle.
Organizations frequently need to archive and process data from various poorly defined sources like manufacturing quotes, progress reports, and status updates that come in spreadsheet format. Until now, media sets did not support previews for spreadsheets, and tools for converting spreadsheets to datasets were not suitable for these workflows.
Spreadsheet media sets allow you to work with tabular data designed for human consumption that is difficult to automate using traditional programming methods. The primary format supported is XLSX (Excel) files.
Spreadsheet media sets are ideal for processing unstructured spreadsheets such as manufacturing quotes, progress reports, and status updates.
Spreadsheet media sets are also an excellent way to maintain your original source of truth for referencing from downstream transformations or ingestions.
A preview of spreadsheet content uploaded to a media set.
Spreadsheet media sets can also be processed programmatically in Python transforms using the transforms-media package.
In upcoming releases, we plan to enhance spreadsheet media sets with additional Workshop annotation features, enhanced formatting extraction, more options for text extraction, and improved support for edge cases and embedded data.
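As a rough sketch of what that programmatic access could look like, the following assumes hypothetical helper methods on the media set input; list_media_items and extract_text are illustrative stand-ins, not the documented transforms-media API:

```python
# Illustrative sketch only: list_media_items and extract_text are
# hypothetical stand-ins, not the documented transforms-media API.
from transforms.api import transform, Output
from transforms.mediasets import MediaSetInput


@transform(
    quotes=MediaSetInput("/Project/manufacturing_quotes"),  # hypothetical path
    extracted=Output("/Project/extracted_quotes"),          # hypothetical path
)
def compute(ctx, quotes, extracted):
    # Extract text from each uploaded XLSX file and collect one dataset
    # row per media item for downstream LLM-driven workflows.
    rows = [
        (item_id, quotes.extract_text(item_id))   # hypothetical method
        for item_id in quotes.list_media_items()  # hypothetical method
    ]
    df = ctx.spark_session.createDataFrame(rows, ["media_item_id", "text"])
    extracted.write_dataframe(df)
```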
We want to hear about your experience with spreadsheet media sets and welcome your feedback. Share your thoughts with Palantir Support channels, or on our Developer Community ↗ using the media-sets tag ↗.