REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2026-04-09
On May 1, 2026, the legacy mode of the AIP Agent widget (formerly the AIP Interactive widget) in Workshop will be removed from Foundry entirely.
Legacy mode has been marked as deprecated in Workshop since January 2025. It has not received new features since work on AIP Agent Studio began over two years ago, and removing it is necessary to support the architectural changes required to deliver the next generation of AIP Agents.

The deprecated legacy configuration of the AIP Agent widget in Workshop.
The most recent LLMs supported in legacy mode are GPT-4o and Claude 3 Haiku; AIP Agent Studio, by contrast, supports the latest models and receives ongoing feature development.

The available LLMs in legacy mode of the AIP Agent widget.
Review the AIP Agent Studio documentation and AIP Agent widget documentation for more information.
If you are still using legacy mode in your Workshops, select Upgrade to an AIP Agent in the Legacy tab of the widget's configuration panel, or create a new AIP Agent, before May 1, 2026 to avoid disruptions.
Date published: 2026-04-09
The ability to upload, preview, and transform email (.eml) files directly within media sets is now generally available across Foundry enrollments, enabling you to parse email content at scale.
Email media sets allow you to work with .eml files as first-class media items in Foundry. They are particularly useful when you need to extract and process attachments from emails—such as spreadsheets, documents, or images—while also retaining access to email metadata and body content for downstream processing.
Upload .eml files to media sets and view interactive previews that render email content directly in Foundry, such as within a Workshop module's Media Preview widget. The preview displays message headers, content, metadata, and a list of attachments.
A preview of an email uploaded to a media set.

View attachment previews for supported media formats or download unsupported attachment file types.
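To make the structure of an .eml file concrete, here is a minimal sketch using Python's standard-library email module. This is not Foundry's API; it only illustrates the headers, body, and attachments that email media sets surface for downstream processing. The sample message bytes are fabricated for illustration.

```python
# Conceptual sketch of .eml parsing with Python's stdlib (not Foundry's API).
from email import policy
from email.parser import BytesParser

# A tiny fabricated multipart email with one CSV attachment.
raw = (
    b"From: alice@example.com\r\n"
    b"To: bob@example.com\r\n"
    b"Subject: Quarterly report\r\n"
    b"MIME-Version: 1.0\r\n"
    b'Content-Type: multipart/mixed; boundary="sep"\r\n'
    b"\r\n"
    b"--sep\r\n"
    b"Content-Type: text/plain\r\n"
    b"\r\n"
    b"Report attached.\r\n"
    b"--sep\r\n"
    b"Content-Type: text/csv\r\n"
    b'Content-Disposition: attachment; filename="q1.csv"\r\n'
    b"\r\n"
    b"region,revenue\r\nEMEA,100\r\n"
    b"--sep--\r\n"
)

msg = BytesParser(policy=policy.default).parsebytes(raw)

# Message headers and body content, as shown in the media preview.
subject = msg["Subject"]
body = msg.get_body(preferencelist=("plain",)).get_content()

# Attachments (e.g. spreadsheets) extracted for downstream processing.
attachments = {
    part.get_filename(): part.get_content()
    for part in msg.iter_attachments()
}
print(subject, list(attachments))
```

Within Foundry, this header/body/attachment breakdown is what the media set preview renders and what transforms can operate on at scale.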
We want to hear about your experience with email media sets and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the media-sets tag ↗.
Date published: 2026-04-07
Grok 4.20 (Reasoning) and Grok 4.20 (Non-Reasoning) are now available for enrollments with xAI enabled in the US and other supported regions.
Grok 4.20 (Reasoning) is designed for complex, multi-step logic, high-accuracy tasks, and deep analysis.
Grok 4.20 (Non-Reasoning) is focused on high-speed, efficient responses for straightforward queries, simple summarization, and other lightweight tasks as part of agentic workflows.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2026-04-07
Nvidia's Nemotron 3 Super 120B and Nemotron 3 Nano 30B models, hosted on AWS Bedrock, are now available for enablement in AIP on non-georestricted enrollments as well as enrollments georestricted in select regions.
Nvidia Nemotron 3 Super 120B ↗ is Nvidia's leading model for coding, reasoning, math, and long-context tasks, suitable for high-volume enterprise automation, multi-agent collaboration, and advanced coding tasks. It is currently available on non-georestricted enrollments as well as enrollments georestricted in the US, EU, UK, and JP regions.
Nvidia Nemotron 3 Nano 30B ↗ is Nvidia's model optimized for high-throughput, low-latency, and low-cost deployments. Optimized for single-agent tasks and fast inference, it is most effective for chatbots, local edge deployment, summarization, and data extraction tasks. Currently available on non-georestricted enrollments and enrollments georestricted in the US.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2026-04-07
Pipeline Builder now supports incremental processing for media set inputs. Select a media set input node and choose Incremental to enable this feature. By leveraging the build history of the media set, incremental computation avoids the need to recompute the entire output every time a transform is run, saving time and compute costs.

A media set input with the incremental option selected.
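To illustrate what the incremental option buys you, here is a minimal, Foundry-independent sketch of incremental computation: the build processes only media items added since the last run and reuses the previous output, rather than recomputing everything. All names here (`transform`, `incremental_build`) are hypothetical, not Pipeline Builder APIs.

```python
# Minimal sketch of incremental computation (not Foundry's implementation).

def transform(item: str) -> str:
    """Hypothetical per-item transform (e.g. OCR or metadata extraction)."""
    return item.upper()

def incremental_build(media_set, previous_output, processed_ids):
    """Process only items not covered by the previous build."""
    new_items = {i: v for i, v in media_set.items() if i not in processed_ids}
    output = dict(previous_output)
    output.update({i: transform(v) for i, v in new_items.items()})
    return output, processed_ids | set(new_items)

# First build: everything is new, so everything is transformed.
media = {"a.pdf": "alpha", "b.pdf": "beta"}
out, seen = incremental_build(media, {}, set())

# Second build: one new file arrives; only that file is transformed,
# while the earlier results are carried forward unchanged.
media["c.pdf"] = "gamma"
out, seen = incremental_build(media, out, seen)
print(out)  # {'a.pdf': 'ALPHA', 'b.pdf': 'BETA', 'c.pdf': 'GAMMA'}
```

Pipeline Builder tracks this bookkeeping for you via the media set's build history; the sketch only shows why skipping already-processed items saves time and compute.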
Share your feedback through Palantir Support channels or our Developer Community ↗ using the pipeline-builder tag ↗.
Date published: 2026-04-02
You can now run machine learning models for inference directly in Pipeline Builder, with no code required. By bringing models into Pipeline Builder, we have significantly lowered the barrier to building and iterating on inference workflows. Together with Model Studio, this enables a fully no-code path from model training to production inference.
Only Spark (batch) pipelines are supported. Streaming and Lightweight pipelines are not yet available. Models must have exactly one tabular input and one tabular output, and time series models are not yet fully supported.
1. Configure your pipeline: Ensure you are working with a Spark (batch) pipeline and that warm pool is turned off.

Batch Pipeline Builder with warm pool turned off.
2. Import your model: Navigate to Reusables > Trained models in the import menu and follow the resource import flow to make your model available to the pipeline.

Reusable logic selector.
3. Add the model node: Select a node in your pipeline canvas and select Trained model to insert it.

From the available options, select Trained model.
4. Configure inputs and outputs: Map your input and output columns to the model's expected API schema.

Input and output configuration for a model node in Pipeline Builder.
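The column mapping in step 4 can be pictured as follows. This is a hedged, Foundry-independent sketch in pandas, not Pipeline Builder code: the model contract is one tabular input and one tabular output, and the pipeline renames its own columns onto the model's expected schema. All schema and column names here are hypothetical.

```python
# Sketch of the one-tabular-in, one-tabular-out contract (not Foundry code).
import pandas as pd

# Hypothetical model API schema.
MODEL_INPUT_SCHEMA = ["feature_a", "feature_b"]
MODEL_OUTPUT_SCHEMA = ["prediction"]

def run_inference(batch: pd.DataFrame) -> pd.DataFrame:
    """Stand-in for the trained model: one table in, one table out."""
    out = pd.DataFrame()
    out["prediction"] = batch["feature_a"] + batch["feature_b"]
    return out[MODEL_OUTPUT_SCHEMA]

# Pipeline-side column mapping, analogous to the node's input/output config.
column_mapping = {"sensor_1": "feature_a", "sensor_2": "feature_b"}

pipeline_df = pd.DataFrame({"sensor_1": [1.0, 2.0], "sensor_2": [3.0, 4.0]})
model_input = pipeline_df.rename(columns=column_mapping)[MODEL_INPUT_SCHEMA]
result = pipeline_df.join(run_inference(model_input))
print(result)
```

In Pipeline Builder the same mapping is configured visually on the model node; the sketch only shows why the model's API schema and your pipeline's columns must be reconciled.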
Preview and streaming support are coming soon. We are actively working on adding Lightweight support, additional input types, time series support, and Marketplace integration.
To learn more, review the Pipeline Builder documentation on Trained models.
To share feedback or tell us about your modeling use case, contact our Palantir Support channels or join the conversation in our Developer Community using the modeling tag ↗.