REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2026-03-19
GPT-5.4 mini ↗ and GPT-5.4 nano ↗ are now available directly from OpenAI for non-georestricted enrollments.
GPT-5.4 mini improves on GPT-5 mini for coding, reasoning, tool use, computer use, and multimodal tasks, while running twice as fast with performance approaching GPT-5.4. GPT-5.4 nano is the smallest and most affordable GPT-5.4 variant, ideal for classification, data extraction, ranking, and lightweight coding tasks. Learn more about these models in OpenAI's announcement ↗.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-03-19
You can now configure Pipeline Builder to require manual confirmation before running previews, preventing unintended runs and saving compute resources. To enable this setting, navigate to Control Panel > Pipeline Builder, and toggle the Enable preview confirmation by default option.

Pipeline Builder's Enable preview confirmation by default setting in Control Panel.
With this new option, enrollment administrators can choose whether previews run automatically or require manual confirmation for all users in an enrollment. User preference settings in Pipeline Builder remain available, allowing individuals to override the enrollment default with their own preview behavior configuration.

The Automatic preview behavior option in the Pipeline Builder User preferences menu.
This update gives organizations greater control and consistency in how data previews are managed, making it easier to enforce best practices and optimize compute usage across enrollments.
Learn more about pipeline preview in Pipeline Builder.
Date published: 2026-03-19
You can now configure health checks for virtual tables, as well as managed and virtual Iceberg tables, to enable monitoring and alerting for common issues such as:
This functionality extends to virtual tables sourced from Databricks, Snowflake, and BigQuery.
To configure a new health check, select Add checks from your table's Health tab before choosing a check to configure.

Choose health checks to configure after selecting Add checks in your table's Health tab.
After you configure one or multiple health checks on a virtual or Iceberg table, Foundry displays each in the same Health tab, where you can view its status, timing, monitoring view, and history.

Use the Checks panel of the Health tab to view health checks you configure for a virtual or Iceberg table.
Learn more about when to use virtual or Iceberg tables instead of datasets in Foundry.
Date published: 2026-03-19
As teams chain together automations, logic functions, and actions in Foundry, understanding how those systems behave becomes difficult. Autopilot, now available in beta, provides the visibility needed to understand how automations connect, trace objects through your workflow, and debug failures in one place. As a beta product, functionality and appearance may change during active development.

The Kanban board view of the Autopilot workbench.

The dependency graph view in Autopilot.
Open Autopilot from the application portal or select Open in Autopilot from the Actions menu on any automation overview page.
Once your workbench is configured, explore the flow graph to visualize your automation system and select any object to trace its path through your workflow.

Organize and define states from the sidebar.
For detailed guidance, review our documentation.
The following improvements are in active development:
Autopilot is being shaped by teams building real-world automation workflows. To share feedback or tell us about your use case, contact our Palantir Support channels or join the conversation in our Developer Community using the aip-autopilot ↗ tag.
Date published: 2026-03-18
Ontology admins can now promote an object type to mark it as a core, critical resource. Promoted object types will be annotated with a purple "verified" checkmark and will appear higher in search results in applications across the platform, including Object Explorer, Gaia, AIP Logic, Slate, and Workshop. Favorited object types will also appear more prominently in search results. See the status documentation for more details.

Example of promotion in Ontology Manager and multiple examples of increased prominence across the platform.
Date published: 2026-03-17
AIP Document Intelligence now supports chunking and embedding extracted text across all enrollments. Alongside document extraction powered by vision language models and OCR, you can now process documents end-to-end directly within the platform. Chunking is a critical step in document-centric workflows — it determines the granularity of text passed to models in RAG systems, directly impacting retrieval accuracy and downstream generation quality.
The new chunking strategy is optimized for Markdown and handles complex structures such as bullet points and tables, improving on the existing raw-text chunking available in Pipeline Builder and AIP Logic. Access this capability through AIP Document Intelligence's text extraction workflow, or deploy it via Python transforms with an option to generate embeddings to support RAG-based workflows.
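The general idea of structure-aware chunking can be sketched in plain Python. The following is an illustrative example of the technique, not the platform's implementation: it splits a Markdown document at headings so each chunk keeps its section context, then packs paragraphs into chunks up to a size budget.

```python
import re

def chunk_markdown(text: str, max_chars: int = 500) -> list[str]:
    """Split Markdown into chunks, keeping heading context with each chunk.

    Illustrative sketch only; production strategies also handle tables,
    nested lists, and token-based (rather than character-based) budgets.
    """
    chunks = []
    # Split into sections at ATX headings, keeping the heading line attached
    # to the text that follows it (lookahead split, so nothing is consumed).
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    for section in sections:
        if not section.strip():
            continue
        current = ""
        # Pack whole paragraphs (blank-line separated) into each chunk.
        for para in section.split("\n\n"):
            if current and len(current) + len(para) + 2 > max_chars:
                chunks.append(current.strip())
                current = ""
            current += para + "\n\n"
        if current.strip():
            chunks.append(current.strip())
    return chunks
```

Each resulting chunk can then be passed to an embedding model of your choice to support retrieval in a RAG system.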
For more details, review the documentation on Deploy extraction strategies to Python transforms.
We want to hear about your experiences using AIP Document Intelligence. Share your thoughts through Palantir Support channels or on our Developer Community ↗ using the aip-document-intelligence tag ↗.
Date published: 2026-03-17
You can now ensure incremental execution in your pipelines with the Require incremental execution setting in Pipeline Builder. With this setting enabled, jobs configured to run incrementally will automatically fail if they cannot do so. This helps prevent accidentally snapshotted inputs, forced snapshots from output schema changes, and other unintended snapshot scenarios.

The Require incremental execution setting in the Build settings menu.
To configure enforced incremental execution, open Build settings by selecting the configuration icon to the right of the Deploy button in your pipeline. Scroll down to Advanced configuration, and set Require incremental execution to True. The default value for this setting is No value.
Note the following considerations when enabling this setting for your pipelines:
This feature was previously only available in PySpark and lightweight incremental transforms by setting require_incremental=True in the @incremental decorator, and has now been made available in Pipeline Builder to bridge the gap between low-code and pro-code workflows.
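For reference, the pro-code pattern mentioned above looks roughly like the following sketch. The dataset paths are hypothetical, and the snippet runs only inside a Foundry transforms repository, so treat it as an illustration rather than a drop-in transform:

```python
from transforms.api import transform, incremental, Input, Output

# require_incremental=True makes the job fail rather than silently
# fall back to a full snapshot recomputation.
@incremental(require_incremental=True)
@transform(
    out=Output("/Project/clean/events"),    # hypothetical output path
    source=Input("/Project/raw/events"),    # hypothetical input path
)
def compute(source, out):
    # On an incremental run, read only rows added since the last build.
    out.write_dataframe(source.dataframe("added"))
```

The new Pipeline Builder setting provides the same guarantee without writing any code.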
As we continue to add features to Pipeline Builder, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the pipeline-builder tag ↗.
Date published: 2026-03-17
Branch creators can now assign roles to users to control who can manage and merge their Foundry branches. Previously, only the branch creator could manage and merge a branch. With role-based security, branch creators can delegate ownership to other users and groups, removing bottlenecks while maintaining control over who can access and manage the branch. Branch visibility and permissions are governed by two mechanisms working together:
Branches support an owner role with full management permissions, including editing metadata, managing roles, and merging proposals. Branch owners can grant the owner role to other users and groups. Space administrators automatically hold the same permissions as branch owners. Branch owners and Space administrators can manage roles and organizations from the Security tab on the branch page.

Branch security settings page showing the roles and organizations for a Foundry Branch.
For full details, review the Branch security documentation.
We want to hear about your experiences with Foundry Branching in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the foundry-branching ↗ tag.
Date published: 2026-03-12
AI FDE, the AI forward deployed engineer, is now generally available for enrollments with AIP enabled. AI FDE allows you to operate Foundry with natural language, using conversations to unlock the power of the Palantir platform. AI FDE makes platform interactions more intuitive and accessible for all users, regardless of technical expertise, while maintaining complete control and visibility into tool use and data access.
With AI FDE, you can perform data transformations, manage code repositories, build and maintain your ontology, and more. AI FDE can accelerate your efforts with the following features:
To use AI FDE, ensure that AIP is enabled on your enrollment. For the best experience, Foundry Branching should also be enabled to support ontology edits. Once enabled, you can begin interacting with AI FDE by providing natural language requests.
AI FDE uses modes and skills to accomplish tasks and provide an easy way to manage the agent's context. Modes are the broad task at hand, such as data integration or ontology editing, while skills are granular capabilities that can be used across different modes. To get started, describe your task in the input field and allow the agent to pick a mode based on your task, or select a mode manually. For some modes, you can configure additional settings, such as function language or whether to use Python transforms instead of Pipeline Builder.

The AI FDE Modes menu, which allows users to select a mode with additional configuration for certain modes.
Modes limit the documentation and tools available to the agent to only those relevant for the current task. You can open the Skills menu to see the skills currently available to the agent, and expand the agent's context by sharing resources or documentation. If needed for your task, additional tools can be enabled using the tool icon below the input field.

The AI FDE prompt input field. The open Skills menu displays the skills that are available to an agent in a given session.
After configuring context and tools manually or by selecting a mode, you can use AI FDE to help you perform a variety of powerful actions in Foundry, including the following:
Unlock natural-language commands with AI FDE, and transform how you work in Foundry while maintaining security and complete visibility into every action.
As we continue to develop AI FDE, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the ai-fde tag ↗.
Date published: 2026-03-12
GPT-5.4 is now available directly from OpenAI and Azure for non-georestricted enrollments.
GPT-5.4 ↗ is OpenAI's most capable and efficient frontier model. It combines the industry-leading coding capabilities of GPT-5.3-Codex with major improvements in knowledge work, native computer use, and tool calling. GPT-5.4 is also OpenAI's most token-efficient reasoning model yet, using significantly fewer tokens than GPT-5.2 to solve problems, which translates to lower cost and faster responses.
To use this model:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-03-12
Gemini 3.1 Flash-Lite is now available directly from Google VertexAI for non-georestricted enrollments.
Gemini 3.1 Flash-Lite ↗ is Google's fastest and most cost-efficient Gemini 3 series model, built for high-volume developer workloads at scale. Gemini 3.1 Flash-Lite has adjustable thinking levels, giving builders control over how much the model reasons for a given task, which is useful for managing cost and latency across high-frequency workloads.
To use this model:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-03-05
Pilot is an AI-powered application builder that lets you create full-stack applications on top of your ontology using natural language prompts. Pilot will be available in beta for enrollments with AIP enabled starting the week of March 9. To use Pilot, describe the application you want to build, and Pilot will generate the ontology, design, and front-end code in an isolated workspace with no manual data wiring or UI coding required.
With Pilot, building an ontology-backed application starts with a single prompt. Rather than separately defining object types, writing action types, designing a UI, and wiring OSDK hooks, Pilot handles the development lifecycle from description to deployable application, allowing you to focus on what you want to build rather than how to build it.

The Pilot landing page, where you can describe the application you want to build.
When you describe your application, Pilot will spin up an isolated container and break up the work into structured tasks. First, the Ontology builder agent creates the data model for your application, including object types, action types, and relationships. You can review the generated ontology in the Ontology tab and refine it through conversational follow-ups in the chat panel.

Pilot generates object types, action types, and relationships based on your application description.
Next, the Designer agent reads the ontology and your requirements to produce a detailed design specification covering color palette, typography, layout, interaction patterns, and forms. This specification ensures that the generated frontend is polished and production-ready from the start.
The App builder agent implements the user interface using the ontology and design specification. It builds a React application with real-time data loading using OSDK hooks, functional forms, status management, and filtering, all wired directly to your ontology actions. When generation is complete, a live preview of your application will be displayed in the Pilot workspace, giving you an immediate view of the result.
You can continue to iterate on any aspect of the application by chatting with Pilot. For example, you can ask Pilot to add new fields to the ontology, change the layout, or introduce additional functionality. Pilot tracks each change as a structured task, making it straightforward to follow the evolution of your application.

An application generated by Pilot, with live preview and iterative chat refinement.
Pilot can generate realistic seed data within the container to let you test your application without exposing real datasets. Because seed data lives in the container's local datastore, you can safely iterate on your application without impacting production data. If any import issues arise, Pilot surfaces and resolves them automatically.
When your application is ready, Pilot will provide a guided deployment workflow that walks you through promoting ontology changes using Foundry Branching, configuring a Developer Console application, running CI checks, and tagging a release. The result is a production-hosted application served at a custom subdomain, with OSDK-powered ontology operations and no manual API wiring required.
We want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.
Date published: 2026-03-05
You can now use LLMs to generate richer, more flexible datasets in manually entered tables in Pipeline Builder. Describe the data you want, reference other columns in your prompt for dynamic generation, and preview up to 10 rows of LLM-generated data before applying changes to your full table. You can also lock and unlock columns to control which data gets regenerated and which stays the same. These two new features are now available on all enrollments.

An example of notional, LLM-generated student feedback in a manually entered table, with a column prompt that references the score column to produce dynamic, context-aware feedback data.
For manually entered tables in Pipeline Builder, you can now use LLMs to generate richer, more flexible datasets. In a column prompt, reference other columns by typing /[name of column] to produce dynamic, context-aware values. You can also lock and unlock columns, giving you more control over which data will be regenerated and which should remain unchanged.

The score and class columns are locked, ensuring their current values remain unchanged when other columns get regenerated.
Learn more about generating notional data using LLMs.
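The column-reference and column-locking mechanics can be illustrated with a small sketch in plain Python. This is a hypothetical illustration, not Pipeline Builder's implementation: a per-column prompt template references other columns by name, locked columns pass through unchanged, and a stand-in function plays the role of the LLM call.

```python
import re

def render_prompt(template: str, row: dict) -> str:
    """Replace /[column] references in a prompt with that row's values."""
    return re.sub(r"/\[([^\]]+)\]", lambda m: str(row[m.group(1)]), template)

def regenerate(rows, prompts, locked, generate):
    """Rebuild unlocked columns row by row; locked columns pass through.

    `generate` stands in for an LLM call and receives the rendered prompt.
    """
    out = []
    for row in rows:
        new_row = dict(row)
        for column, template in prompts.items():
            if column not in locked:
                new_row[column] = generate(render_prompt(template, row))
        out.append(new_row)
    return out
```

For example, a feedback column prompted with "Write feedback for a score of /[score]." would receive a different rendered prompt per row, while a locked score column keeps its current values.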
We want to hear about your experiences using Pipeline Builder and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the pipeline-builder tag ↗.
Date published: 2026-03-05
Workshop now includes a built-in Metrics tab in the editor sidebar, giving module builders direct visibility into how their applications are being used. Usage metrics track two categories of data—action submissions and layout views—so builders can understand which parts of their module are most active and identify trends over time. All metrics are aggregate counts and are not attributable to any specific user.

The Metrics tab in the editor sidebar showing action submission counts over the selected time period.
The Metrics panel displays the total number of successful action submissions across the module, along with the percentage change compared to the previous equivalent period. Individual actions are listed with their submission count and a proportional bar showing relative usage. Selecting an action reveals which widgets in the module reference it, making it straightforward to trace how actions are connected to the module's interface.
Action metrics are available by default for all modules and require no additional setup.
Builders can also track how many times each page, tab, and overlay in their module has been viewed. The layout views overview shows the total view count with a per-layout breakdown listing individual pages, overlays, and tabs. Select a layout item to navigate directly to it in the editor.
To start collecting layout view data, open Module settings, navigate to the Metrics tab and toggle on Enable granular metrics. After enabling, it may take up to 24 hours before view data begins to appear. Views are only recorded when users interact with the module in view mode on the main branch.

Enable granular layout metrics by toggling on usage metrics tracking.
Both action and view metrics support a configurable time window of 7, 30, or 90 days, selectable from the period picker at the top of the panel. Each overview card compares the current period against the previous equivalent period, displaying the percentage change so you can spot usage trends at a glance.
Learn more about tracking your Workshop applications with usage metrics in the documentation.
We want to hear about your experiences using Workshop in the Palantir platform and welcome your feedback. Share your thoughts through Palantir Support channels or on our Developer Community ↗ using the workshop tag ↗.
Date published: 2026-03-03
GPT-5.3 Codex is now available directly from OpenAI for non-georestricted enrollments.
GPT-5.3 Codex ↗ is OpenAI's best coding model, optimized for agentic coding tasks with strong attention to detail and no sacrifice in speed. GPT-5.3 Codex supports low, medium, high, and xhigh reasoning effort values for all types of agentic tasks.
To use this model:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-03-03
Quiver now includes a redesigned analysis creation experience, making it easier to choose the right workspace for your task. When creating a new analysis, you can now select from three analysis types: Quiver analysis, time series analysis, and object set path analysis.

The updated Create new analysis page in Quiver, allowing you to choose from three available analysis workspaces.
As part of this update, we are introducing the time series analysis workspace, a dedicated interface purpose-built for ad-hoc time series analysis. It provides a streamlined environment for visualizing and comparing time series data without the full complexity of a Quiver analysis, making it accessible to a wider range of users. When a more advanced workspace is needed, a time series analysis can be opened directly in Quiver.
The Time Series Analysis widget brings the same interface and tooling to Workshop, allowing application builders to embed time series analysis directly in their applications. The widget includes fine-grained configuration options to tailor the experience for operational users:
Users can also open their analysis in Quiver directly from the widget for more advanced workflows. Note that changes made in Quiver are not reflected back in the Workshop widget.
To create a new time series analysis, navigate to the New Analysis button on the Quiver splash page or Foundry side panel. Choose a name and location for your file and select Time Series Analysis before saving.

The new time series analysis view in Quiver.
For more information, review the Quiver analysis types documentation and the Time Series Analysis widget documentation.
We want to hear about your experiences creating time series analyses in Quiver and Workshop. Share your thoughts with our Palantir Support channels or Developer Community ↗ using the quiver ↗ or workshop ↗ tags.
Date published: 2026-03-03
You can now open Workflow Lineage graphs from more locations across the platform using the Cmd+i (macOS) or Ctrl+i (Windows) shortcut, as well as a dedicated navigation option available on various resource types.

Examples of the Open in Workflow Lineage option in Agent Studio and Notepad, often found under File or Actions in the top navigation bar.
Use the Cmd+i (macOS) or Ctrl+i (Windows) keyboard shortcut to open Workflow Lineage, or select the Open in Workflow Lineage option on a resource where available. The following applications support these navigation features:

The dedicated navigation option in Pipeline Builder.
Leverage this feature to better explore and understand your workflows from different applications across the Palantir ecosystem.
As we continue to develop Workflow Lineage, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the workflow-lineage ↗ tag.