Announcements

REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.

Share your thoughts about these announcements in our Developer Community Forum ↗.


Introducing Pilot, Foundry's AI-powered tool for building React OSDK applications

Date published: 2026-03-05

Pilot is an AI-powered application builder that lets you create full-stack applications on top of your ontology using natural language prompts. Pilot will be available in beta for enrollments with AIP enabled starting the week of March 9. To use Pilot, describe the application you want to build, and Pilot will generate the ontology, design, and front-end code in an isolated workspace with no manual data wiring or UI coding required.

With Pilot, building an ontology-backed application starts with a single prompt. Rather than separately defining object types, writing action types, designing a UI, and wiring OSDK hooks, Pilot handles the development lifecycle from description to deployable application, allowing you to focus on what you want to build rather than how to build it.

The Pilot landing page, where you can describe the application you want to build.

Prompt-driven ontology and design generation

When you describe your application, Pilot will spin up an isolated container and break up the work into structured tasks. First, the Ontology builder agent creates the data model for your application, including object types, action types, and relationships. You can review the generated ontology in the Ontology tab and refine it through conversational follow-ups in the chat panel.

Pilot generates object types, action types, and relationships based on your application description.

Next, the Designer agent reads the ontology and your requirements to produce a detailed design specification covering color palette, typography, layout, interaction patterns, and forms. This specification ensures that the generated frontend is polished and production-ready from the start.

Front-end generation with live preview

The App builder agent implements the user interface using the ontology and design specification. It builds a React application with real-time data loading using OSDK hooks, functional forms, status management, and filtering, all wired directly to your ontology actions. When generation is complete, a live preview of your application is displayed in the Pilot workspace, giving you an immediate view of the result.

You can continue to iterate on any aspect of the application by chatting with Pilot. For example, you can ask Pilot to add new fields to the ontology, change the layout, or introduce additional functionality. Pilot tracks each change as a structured task, making it straightforward to follow the evolution of your application.

An application generated by Pilot, with live preview and iterative chat refinement.

Safe testing with seed data

Pilot can generate realistic seed data within the container to let you test your application without exposing real datasets. Because seed data lives in the container's local datastore, you can safely iterate on your application without impacting production data. If any import issues arise, Pilot surfaces and resolves them automatically.

Guided deployment to production

When your application is ready, Pilot will provide a guided deployment workflow that walks you through promoting ontology changes using Foundry Branching, configuring a Developer Console application, running CI checks, and tagging a release. The result is a production-hosted application served at a custom subdomain, with OSDK-powered ontology operations and no manual API wiring required.

We want to hear from you

We want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗.


Track your Workshop applications with usage metrics

Date published: 2026-03-05

Workshop now includes a built-in Metrics tab in the editor sidebar, giving module builders direct visibility into how their applications are being used. Usage metrics track two categories of data—action submissions and layout views—so builders can understand which parts of their module are most active and identify trends over time. All metrics are aggregate counts and are not attributable to any specific user.

The Metrics tab in the editor sidebar showing action submission counts over the selected time period.

Track action submissions with action metrics

The Metrics panel displays the total number of successful action submissions across the module, along with the percentage change compared to the previous equivalent period. Individual actions are listed with their submission count and a proportional bar showing relative usage. Selecting an action reveals which widgets in the module reference it, making it straightforward to trace how actions are connected to the module's interface.

Action metrics are available by default for all modules and require no additional setup.

Monitor layout views with layout view metrics

Builders can also track how many times each page, tab, and overlay in their module has been viewed. The layout views overview shows the total view count with a per-layout breakdown listing individual pages, overlays, and tabs. Select a layout item to navigate directly to it in the editor.

To start collecting layout view data, open Module settings, navigate to the Metrics tab and toggle on Enable granular metrics. After enabling, it may take up to 24 hours before view data begins to appear. Views are only recorded when users interact with the module in view mode on the main branch.

Enable granular layout metrics by toggling on usage metrics tracking.

Compare usage over time with configurable time periods

Both action and view metrics support a configurable time window of 7, 30, or 90 days, selectable from the period picker at the top of the panel. Each overview card compares the current period against the previous equivalent period, displaying the percentage change so you can spot usage trends at a glance.
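
The period-over-period comparison amounts to a simple percentage-change calculation. The following is an illustrative sketch only; the function name and the null handling for an inactive previous period are assumptions, not Workshop's actual implementation:

```typescript
// Percentage change between the current period's count and the
// previous equivalent period's count, as shown in each overview card.
// Returns null when the previous period had no activity, since the
// change is undefined in that case.
function percentChange(current: number, previous: number): number | null {
  if (previous === 0) return null;
  return ((current - previous) / previous) * 100;
}

// Example: 150 action submissions in the last 30 days versus 120 in the
// 30 days before that is a +25% change.
const change = percentChange(150, 120);
console.log(change); // 25
```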

Learn more about tracking your Workshop applications with usage metrics in the documentation.

Share your feedback

We want to hear about your experiences using Workshop in the Palantir platform and welcome your feedback. Share your thoughts through Palantir Support channels or on our Developer Community ↗ using the workshop tag ↗.


Generate notional data with LLMs in Pipeline Builder

Date published: 2026-03-05

You can now use LLMs to generate richer, more flexible datasets in manually entered tables in Pipeline Builder. Describe the data you want, reference other columns in your prompt for dynamic generation, and preview up to 10 rows of LLM-generated data before applying changes to your full table. You can also lock and unlock columns to control which data gets regenerated and which stays the same. These two new features are now available on all enrollments.

An example of notional, LLM-generated student feedback in a manually entered table, with a column prompt that references the score column to produce dynamic, context-aware feedback data.

What’s new?

For manually entered tables in Pipeline Builder, you can now use LLMs to generate richer, more flexible datasets:

  • Generate data with LLMs: Select Generate with LLM for the specified column.
  • Reference other columns: Reference other columns directly in your prompt for more dynamic data generation.
  • Preview before generating: Preview up to 10 rows of LLM-generated data before applying changes to your entire table.
  • Lock and unlock columns: Gain greater control by locking or unlocking columns to manage which data should be regenerated and which should remain the same.

How it works

  • Create a manually entered table and select the column you want to generate.
  • Under Auto-populate with, select Generate with LLM.
  • Enter a clear description of the data you want in the column prompt. Reference other columns dynamically using /[name of column].
  • Add example cell values in Example cell value to help the LLM understand the type and format of data you expect.
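
For instance, a column prompt for a feedback column might look like the following. The column name and wording are hypothetical examples; only the /[name of column] reference syntax comes from the steps above:

```
Write one short sentence of student feedback that is consistent with the
grade in /[score]. Keep the tone encouraging but specific.
```

Pairing this prompt with an Example cell value such as "Solid grasp of fractions; practice decimal division next." helps the LLM match the tone and format you expect.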

Control regeneration with column locking

You can also lock and unlock columns, giving you more control over which data will be regenerated and which should remain unchanged.

The score and class columns are locked, ensuring their current values remain unchanged when other columns get regenerated.

Learn more about generating notional data using LLMs.

Your feedback matters

We want to hear about your experiences using Pipeline Builder and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the pipeline-builder tag ↗.


GPT-5.3 Codex now available in AIP

Date published: 2026-03-03

GPT-5.3 Codex is now available directly from OpenAI for non-georestricted enrollments.

Model overview

GPT-5.3 Codex ↗ is OpenAI's best coding model, optimized for agentic coding tasks with attention to detail and without sacrificing speed. GPT-5.3 Codex supports low, medium, high, and xhigh reasoning effort values for all types of agentic tasks.

  • Context window: 400,000 tokens
  • Knowledge cutoff: August 2025
  • Modalities: Text, image
  • Capabilities: Responses API, structured outputs, function calling, streaming

Getting started

To use this model:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag.


Perform time series analysis with a dedicated workspace in Quiver

Date published: 2026-03-03

Quiver now includes a redesigned analysis creation experience, making it easier to choose the right workspace for your task. When creating a new analysis, you can now select from three analysis types: Quiver analysis, time series analysis, and object set path analysis.

The updated Create new analysis page in Quiver, allowing you to choose from three available analysis workspaces.

As part of this update, we are introducing the time series analysis workspace, a dedicated interface purpose-built for ad-hoc time series analysis. It provides a streamlined environment for visualizing and comparing time series data without the full complexity of a Quiver analysis, making it accessible to a wider range of users. When a more advanced workspace is needed, a time series analysis can be opened directly in Quiver.

Key features

  • Add and explore time series data: Add time series from the Ontology using a familiar search experience, then view and configure plot properties such as axis assignment, units, root object, interpolation, and statistics from the details panel.
  • Perform time series operations: Apply operations such as rolling aggregates, formulas, filters, and event statistics directly to plots without leaving the analysis context.
  • Visualize event sets: Overlay events from linked objects or configurable conditions on charts alongside time series data.
  • Organize across multiple canvases: Work across multiple chart canvases with synchronized or independent x-axes. Reorder, move, and hide plots as needed, with automatic axis grouping by unit.
  • Save and share analyses: Save analyses as Foundry resources for later use, and open them in a dedicated resource viewer or load them into a Workshop widget using the analysis RID.
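
As a toy illustration of the kind of rolling aggregate described above, the sketch below computes a rolling mean over a fixed point-count window. This is a generic example, not Quiver's implementation; the Point shape and the point-count (rather than time-based) window are assumptions:

```typescript
interface Point {
  timestamp: number;
  value: number;
}

// Rolling mean over a fixed-size window of points: each output point
// averages the current value with the (window - 1) values before it,
// so the output starts once a full window is available.
function rollingMean(series: Point[], window: number): Point[] {
  const out: Point[] = [];
  for (let i = window - 1; i < series.length; i++) {
    let sum = 0;
    for (let j = i - window + 1; j <= i; j++) {
      sum += series[j].value;
    }
    out.push({ timestamp: series[i].timestamp, value: sum / window });
  }
  return out;
}

const raw: Point[] = [
  { timestamp: 1, value: 10 },
  { timestamp: 2, value: 20 },
  { timestamp: 3, value: 30 },
  { timestamp: 4, value: 40 },
];
const smoothed = rollingMean(raw, 2);
console.log(smoothed.map(p => p.value)); // [15, 25, 35]
```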

Build a guided time series analysis experience in Workshop

The Time Series Analysis widget brings the same interface and tooling to Workshop, allowing application builders to embed time series analysis directly in their applications. The widget includes fine-grained configuration options to tailor the experience for operational users:

  • Filter which Ontology series are available for users to add to their analysis
  • Control initial time series and event sets using object set variables
  • Customize which plot types and event set types are available
  • Set chart display options including default view range, tooltip display, and x-axis syncing behavior across canvases
  • Configure how analyses are saved and loaded, including default save location and autoloading

Users can also open their analysis in Quiver directly from the widget for more advanced workflows. Note that changes made in Quiver are not reflected back in the Workshop widget.

Getting started

To create a new time series analysis, navigate to the New Analysis button on the Quiver splash page or Foundry side panel. Choose a name and location for your file and select Time Series Analysis before saving.

The new time series analysis view in Quiver.

For more information, review the Quiver analysis types documentation and the Time Series Analysis widget documentation.

Share your feedback

We want to hear about your experiences creating time series analyses in Quiver and Workshop. Share your thoughts with our Palantir Support channels or Developer Community ↗ using the quiver ↗ or workshop ↗ tags.


Expanded Workflow Lineage access across the Palantir platform

Date published: 2026-03-03

You can now open Workflow Lineage graphs from more locations across the platform using the Cmd+i (macOS) or Ctrl+i (Windows) shortcut, as well as a dedicated navigation option available on various resource types.

Examples of the Open in Workflow Lineage option in Agent Studio and Notepad, often found under File or Actions in the top navigation bar.

Access Workflow Lineage

  • Use the Cmd+i (macOS) or Ctrl+i (Windows) keyboard shortcut to open Workflow Lineage, or select the Open in Workflow Lineage option on a resource where available.
  • You will be redirected to the resource's Workflow Lineage graph displaying the selected node, plus any direct upstream and downstream nodes. Note that some resources may not display actions, functions, or objects if those concepts do not apply.

The following applications support these navigation features:

  • Workshop
  • Objects in Ontology Manager
  • Function repositories
  • Quiver dashboards
  • Machinery
  • Slate
  • Agent Studio
  • Automate
  • Third-party applications
  • Developer Console (Keyboard shortcut only)
  • Marketplace (Keyboard shortcut only, in a draft resource's overview tab)
  • Notepad (Navigation option only)
  • Object types in Pipeline Builder (Navigation option only)

The dedicated navigation option in Pipeline Builder.

Leverage this feature to better explore and understand your workflows from different applications across the Palantir ecosystem.

We want to hear from you

As we continue to develop Workflow Lineage, we want to hear about your experiences and welcome your feedback. Share your thoughts with Palantir Support channels or our Developer Community ↗ using the workflow-lineage tag.

Learn more about Workflow Lineage.