Announcements

REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.

Share your thoughts about these announcements in our Developer Community Forum ↗.


Rate limit hit tracking now available in Resource Management

Date published: 2026-01-27

The Resource Management application now includes rate limit hit tracking in the AIP usage and limits view. Rate limit hits provide visibility into when users, projects, or applications reach their assigned rate limits in Foundry. This feature enables administrators to proactively monitor usage, identify capacity constraints, and troubleshoot issues related to rate limiting.

Key benefits

  • Proactive capacity management: Identify where and when rate limits are being reached across resources.
  • Faster troubleshooting: Quickly pinpoint usage bottlenecks and root causes of failures related to model consumption.
  • Improved planning: Make informed decisions about capacity requests and model usage based on real-time data.

Access rate limit hits

To view rate limit hit information, navigate to Resource Management > AIP usage and limits > View usage. Select the model, project, or resource, and view the associated rate limit hits in the table. If rate limit hits are not visible, the feature may not yet be available in your environment.
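When a workflow does hit a rate limit, client code typically recovers by retrying with exponential backoff and jitter. The sketch below is illustrative only and not part of the Foundry platform; the `RateLimitHit` exception and `send_request` callable are hypothetical stand-ins for whatever your client raises and calls when a limit is reached.

```python
import random
import time

class RateLimitHit(Exception):
    """Hypothetical exception raised when the service reports a rate limit was reached."""

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter: wait between 0 and min(cap, base * 2^attempt)."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retries(send_request, max_attempts: int = 5, base: float = 1.0):
    """Retry a callable that raises RateLimitHit until it succeeds or attempts run out."""
    for attempt in range(max_attempts):
        try:
            return send_request()
        except RateLimitHit:
            time.sleep(backoff_delay(attempt, base=base))
    raise RuntimeError("rate limit still hit after retries")
```

Pairing a retry policy like this with the rate limit hit data in Resource Management helps distinguish transient spikes from sustained capacity shortfalls.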

Your feedback matters

We want to hear about your experiences using rate limit tracking in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.


GPT-4.1 mini now available in additional regions

Date published: 2026-01-27

GPT-4.1 mini is now available from Azure OpenAI for georestricted enrollments in Australia, Canada, Japan, and the United Kingdom.

Model overview

GPT-4.1 mini ↗ is a lightweight alternative to the GPT-4.1 model from OpenAI, with faster and cheaper responses on average.

  • Context window: 1,000,000 tokens
  • Knowledge cutoff: June 2024
  • Modalities: Text, image
  • Capabilities: Tool calling, structured outputs
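The tool calling and structured outputs capabilities listed above generally work by attaching JSON Schemas to a request and validating the model's reply against them. The following provider-agnostic sketch uses only the standard library; the `get_weather` tool and its fields are hypothetical examples, not part of any specific model API.

```python
import json

# Hypothetical tool definition in the JSON-Schema style commonly used for tool calling.
GET_WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def validate_structured_output(raw: str, required_keys: set) -> dict:
    """Parse a model reply that was requested as JSON and check required keys are present."""
    data = json.loads(raw)
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"structured output missing keys: {sorted(missing)}")
    return data
```

Validating structured replies at the boundary keeps downstream pipeline code from having to defend against malformed model output.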

Getting started

To use this model:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag.


Active preview for Python transforms

Date published: 2026-01-27

Active preview is now available for users of the Palantir extension for Visual Studio Code, enabling automatic updates to Python transform previews every time code is saved. Active preview eliminates the need to manually trigger previews after each code change, so your preview panel stays in sync and provides continuous feedback during development. Intelligent caching keeps subsequent previews fast by reusing code-defined filter results, project resources, and dependencies.

When to use active preview

Active preview is ideal for iterative development workflows with frequent changes to transform logic. It is particularly effective when working with code-defined filters, as cached filter results significantly accelerate the preview process.

The Active preview toggle in the Preview tab.

After initiating a preview, you can enable active preview using the toggle in the Preview panel. Use active preview for continuous feedback, faster preview times, and more efficient iteration.

Learn more about active preview.


Reserved capacity available for LLMs in AIP

Date published: 2026-01-22

Reserved capacity is now available by default on many LLMs for non-georestricted enrollments and some US/EU georestricted enrollments. Reserved capacity helps ensure uninterrupted operations by protecting critical production workflows from shared project or enrollment limits on tokens and requests.

What is reserved capacity?

Reserved capacity allows you to reserve dedicated tokens per minute (TPM) and requests per minute (RPM) on LLMs for critical production workflows, ensuring they are not impacted by shared project or enrollment limits. Learn more about this feature in the documentation.

There is no additional cost to use reserved capacity.

How to view your reserved capacity

To view all reserved capacity offerings on an enrollment:

  • Navigate to Resource Management > AIP usage & limits > Reserved capacity.
  • Select the model from the dropdown to see available capacity.
  • If the Reserved capacity tab is not present, it means reserved capacity is not currently available for your environment.

You can view reserved capacity in Resource Management.

When a project reaches its reserved capacity limit, that project will seamlessly continue operating by using the standard project and enrollment limits. Reserved capacity provides extra throughput on top of your existing limits and is available only for projects specifically enabled by an enrollment administrator.
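The fallback behavior described above can be pictured as two budgets, with requests drawing from the reserved budget first and spilling into the shared limits once it is exhausted. This is an illustrative sketch of the accounting model, not Palantir's actual implementation; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CapacityBuckets:
    """Illustrative per-minute token budgets (hypothetical, not the platform's accounting)."""
    reserved_tpm: int  # dedicated to this project's critical workflows
    shared_tpm: int    # standard project/enrollment limit

    def admit(self, tokens: int) -> str:
        """Consume reserved capacity first; fall back to shared limits when it is exhausted."""
        if tokens <= self.reserved_tpm:
            self.reserved_tpm -= tokens
            return "reserved"
        if tokens <= self.shared_tpm:
            self.shared_tpm -= tokens
            return "shared"
        return "rate_limited"
```

The key property is that exhausting reserved capacity never reduces throughput below the standard limits; it only adds headroom on top of them.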

Your feedback matters

We want to hear about your experiences using reserved capacity in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.


Configurable markings now available on media reference properties

Date published: 2026-01-22

Managing data access and compliance for media sets often requires precise control over how markings are inherited and applied across your ontology. Configurable markings for media reference properties in Ontology Manager are now available on all Foundry enrollments. This capability allows you to define which markings are inherited per media source when configuring media reference properties.

The markings configuration is accessible through a multi-step dialog in the Capabilities tab of Ontology Manager when adding or editing a media reference property.

Inherited markings configuration dialog of a media reference property.

Your feedback matters

We want to hear about your experiences with ontology management and welcome your feedback. Share your thoughts through Palantir Support channels or our Developer Community ↗ using the ontology-management ↗ tag.


The VertexAI model family can now be enabled on IL2 enrollments

Date published: 2026-01-15

Google Cloud Platform's VertexAI models are now available to be enabled in AIP on IL2 enrollments.

Model overviews

VertexAI-enabled IL2 enrollments will be able to use Google's best-in-class Gemini models. The Gemini 2.5 series delivers advanced reasoning, coding, and multimodal capabilities across a range of powerful models. Each variant is tailored for different performance needs, from high-complexity workloads to fast, lightweight tasks.

  • Gemini 2.5 Pro ↗ is Google's most powerful model, best at complex reasoning, coding, and large contexts.
  • Gemini 2.5 Flash ↗ is Google's middleweight model, which generates fast, context-aware responses. It is optimized for quick tasks.
  • Gemini 2.5 Flash Lite ↗ is Google's lightweight model, which is efficient for lower-compute tasks yet still strong at reasoning.

Additionally, IL2 enrollments with the VertexAI model family enabled will gain access to the Anthropic Claude 4.5 and Claude 4 model families. These models deliver cutting-edge intelligence, advanced coding, and robust agentic capabilities across a range of performance and efficiency levels.

  • Claude Opus 4.5↗ is Anthropic’s most powerful model, designed for complex reasoning, coding, agentic workflows, creative problem-solving, and long-running research tasks.
  • Claude Sonnet 4.5↗ is a versatile and accurate model, ideal for agents, software development, business analysis, and extended reasoning with a large context window.
  • Claude Haiku 4.5↗ is a lightweight and fast model, delivering near-frontier coding and computer use performance with exceptional efficiency and lower costs.
  • Claude Sonnet 4↗ offers a strong balance of performance, speed, and cost, making it well-suited for high-volume tasks such as customer service AI, content generation, and efficient code development.

Getting started

To use these models:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.


Gemini 3 Pro and Gemini 3 Flash now available via VertexAI

Date published: 2026-01-15

Gemini 3 series models are now available from VertexAI on commercial, non-georestricted enrollments.

Model overviews

Gemini 3 Pro ↗ is Google's most powerful model to date, best suited for agentic tasks, advanced coding, long context understanding, multimodal understanding, and algorithmic development.

Gemini 3 Flash ↗ is Google's most efficient model to date, best suited for everyday tasks, agentic coding, advanced reasoning, multimodal understanding, and long context understanding.

Both models share the following specifications:

  • Context window: 1,000,000 tokens
  • Knowledge cutoff: January 2025
  • Modalities: Text, image
  • Capabilities: Function calling, structured output

Note that Gemini 3 Pro and Gemini 3 Flash are still in preview status from GCP. Within AIP, however, both models have all of the characteristics and behavior of a generally available AIP model.

Getting started

To use these models:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.


Communicate between Foundry instances with the Foundry connector

Date published: 2026-01-15

A new connector is now generally available in Data Connection for all enrollments, allowing Foundry instances to interface with each other. Users can now benefit from first-class support for communication between Foundry instances, in addition to existing support for communication between Foundry and systems like PostgreSQL, AWS S3, Snowflake, and hundreds more.

The Foundry connector works with both direct connections and agent workers, supports batch, incremental, and streaming ingests, and is designed to accommodate multiple forms of authentication. By enabling a Foundry instance to treat another as a source, the Foundry connector makes sensitive data transfers and migrations between instances a seamless experience.

Learn more about the Foundry connector.


Resources can now be pinned to the top of the Files page in Compass

Date published: 2026-01-15

In Compass, you can now pin important files like documentation, user guides, and frequently used resources to the top of the Files page for quick access.

In Compass, you can now pin resources to appear at the top of the Files tab page.

This feature replaces the previous Project Catalog tab. By bringing pinned resources directly into the Files tab, your most important materials are now front and center where you naturally look first.

To pin a resource, select it and use the Pin in project option:

Pin a resource using the Pin in project icon, or right-click the selected file for the same option.

Portfolio Catalogs remain an aggregation of all pinned resources across the contained projects.

Tell us what you think

Let us know about your experience using Compass. Leave feedback with our Palantir Support channels or in our Developer Community ↗ using the compass tag ↗.


Manage custom space roles in Control Panel

Date published: 2026-01-15

Administrators can now manage space roles and workflow permissions from the Space permissions page in Control Panel. Each space comes with a set of default roles and the ability to create custom roles for greater flexibility in managing permissions. For each role, you can open the workflows dropdown menu to view the permissions granted with the role. Select a role to view the role grants in the panel on the right, where you can add or remove users.

The Space permissions page in Control Panel, showing the various workflows granted with the Contributor role.

To create a custom role, select + New role in the top right of the page, then select the workflows to include with this role. After creating the custom role, you can grant that role to users the same way you would for other roles. Custom roles can be edited or deleted through the Actions menu in the top right of each custom role.

Note that custom roles are "frozen", meaning that new workflows added to default roles will not automatically apply to custom roles. To include new workflows in a custom role, select Edit role and add them manually.

For more information on space and organization management, review our documentation.


New ontologies will now use project permissions

Date published: 2026-01-15

All ontologies created after Wednesday, January 21, will use the project permissions system by default. With project-based permissions, the ability to view, edit, and manage ontology resources is managed through Compass, the Palantir platform's filesystem. This is the same permissions system other resource types use.

This project-based permissions approach replaces the previous permission models: ontology roles and datasource-derived permissions.

Key benefits of the project permissions system

The project-based permission model offers multiple benefits:

  • Unified permission model: Ontology resources now use the same permission system as other resource types, so you only need to learn and manage permissions in one place.
  • Bulk management: Set permissions at the folder or project level to control access across multiple resources at once, eliminating the need to set permissions on individual items.
  • Clearer visibility: The Security tab and sidebar now display permissions and project context for all resources, including ontologies.
  • Increased functionality: As project resources, ontologies gain access to Compass features like folders, access requests, markings, and tags.

To migrate existing Ontology resources to project-based permissions, review migration guidance.

Opting out of this change

To continue using a legacy permissions system for new ontologies instead, navigate to the Configuration tab in Ontology Manager and turn off the New ontology resources will be saved in a project setting.

The New ontology resources will be saved in a project setting in the Configuration tab of Ontology Manager.

Current limitations

Project permissions are not yet available for ontologies that use classification-based access controls (CBAC) or for default ontologies.

We want to hear from you

We hope these enhancements improve and simplify your permission workflows. Share your thoughts through Palantir Support channels or post on our Developer Community ↗ using the Ontology management tag ↗.


Connect AI agents to your Ontology using Model Context Protocol (MCP)

Date published: 2026-01-13

Ontology MCP enables developers to expose Developer Console applications to AI agents through the Model Context Protocol (MCP), an open standard for connecting agents to data sources. Resources in your Developer Console application (including objects, actions, and queries) become tools that agents can discover and use. This works with pro-code frameworks (LangChain, CrewAI, custom Python/TypeScript) and low-code platforms (Copilot Studio, other agent builders). Ontology MCP will be available in beta for all users with Developer Console access by the end of January 2026.

The Ontology MCP configuration page in Developer Console with the Claude Code agent selected.

What's new?

Previously, connecting AI agents to the Ontology required building custom integrations for each source system and agent framework, requiring deep knowledge of both the framework's API patterns and Foundry's SDK. With Ontology MCP, you can configure your MCP server once and connect any compatible agent framework. The MCP protocol handles authentication, tool discovery, and execution. Your Developer Console application defines which resources agents can access, and MCP ensures agents can only interact with the resources you have explicitly exposed.

When to use Ontology MCP

  • You are building AI agents that need to interact with Ontology data or actions
  • You want to support multiple agent frameworks without maintaining separate integrations
  • You need to enable teams with different development preferences to access the same Ontology resources
  • You need fine-grained control over which Ontology resources agents can access

Getting started

To start using Ontology MCP, open your Developer Console application and navigate to the MCP tab on the left. Choose Enable MCP clients to connect to this application using MCP, and add a Markdown-supported description for the MCP server. Then, select your agent framework from the list to view framework-specific installation instructions.
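Under the hood, MCP clients communicate with a server using JSON-RPC 2.0 messages: `tools/list` to discover available tools and `tools/call` to invoke one. The sketch below builds these messages with only the standard library to show the wire format; the `search_objects` tool name and its arguments are hypothetical examples, not resources from a real Developer Console application.

```python
import itertools
import json
from typing import Optional

_ids = itertools.count(1)

def jsonrpc_request(method: str, params: Optional[dict] = None) -> str:
    """Build a JSON-RPC 2.0 request of the kind MCP clients send over the wire."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Discover the tools the application exposes.
list_tools = jsonrpc_request("tools/list")

# Invoke one of them; "search_objects" and its arguments are hypothetical.
call_tool = jsonrpc_request(
    "tools/call",
    {"name": "search_objects", "arguments": {"objectType": "Aircraft", "limit": 5}},
)
```

In practice an MCP client library handles this exchange (plus authentication and transport) for you; the point is that any framework speaking this protocol can use the tools your application exposes.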

Ontology MCP and AIP Agents

Ontology MCP extends Ontology access to external agent frameworks. AIP Agents provide native agent-building capabilities within the platform. Use AIP Agents for in-platform workflows and Ontology MCP when connecting external frameworks to your Ontology.

For more information about Ontology MCP and how to use agents in Foundry and your Developer Console applications, watch the YouTube video ↗ from our DevCon 4 presentation.


Introducing listeners in Data Connection for capturing inbound webhooks

Date published: 2026-01-08

Listeners, a new Data Connection feature, enable you to receive inbound webhook events directly into the Palantir platform. This feature will be released in public beta the week of January 5th. Integrating real-time events from external systems into Foundry has traditionally been challenging when those systems lack OAuth 2.0 authentication support or cannot format payloads to match standard Foundry API endpoints. Listeners address this gap by provisioning URL endpoints that implement system-specific message signing and verification schemes, agnostic to data shape, providing a simple, low-latency mechanism to accept event streams from external sources.

For more information, including a list of supported listeners and setup guides, review the Listeners documentation.

Select from Data Connection's supported listeners to configure and receive inbound webhook events.

How listeners work

To accept inbound events from external systems, Data Connection listeners provision a URL endpoint, implement the message signing or other verification schemes required by each external system, and provide a simple, low-latency mechanism to receive event streams into the Palantir platform. Leverage the listener output stream with streaming pipelines, automations, or batch analysis to create powerful event-processing workflows.
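Many webhook providers sign each payload with an HMAC so the receiver can verify the event is authentic. As a hedged illustration of the kind of verification a listener performs on your behalf, here is a GitHub-style HMAC-SHA256 check using only the standard library; the secret and header format are illustrative, and real signing schemes vary by external system.

```python
import hashlib
import hmac

def sign_payload(secret: bytes, body: bytes) -> str:
    """Compute a GitHub-style 'sha256=<hex>' signature for a webhook body."""
    return "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, body: bytes, header: str) -> bool:
    """Verify a received signature; compare_digest guards against timing attacks."""
    return hmac.compare_digest(sign_payload(secret, body), header)
```

Because listeners implement these scheme-specific checks for you, downstream streaming pipelines can trust that accepted events came from the configured source.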

Subdomain configuration and zero-downtime endpoint rotation

You can generate a subdomain for your listener to establish a distinct ingress point with a wider range of network ingress compared to the rest of your Foundry enrollment. This isolates webhook traffic from other platform operations, provides additional control over external system connections, and enables unified governance with additional security benefits.

Listeners also come with an endpoint rotation capability that provides protection if a listener endpoint is compromised. Migrate to a new endpoint with zero downtime if the URL is accidentally exposed. When rotating your endpoint, you can set an expiration date for seamless zero-downtime transitions, or delete the old endpoint immediately if faster action is required. Once an endpoint expires, it will no longer process events.

Data Connection listeners expand Foundry's integration capabilities by removing authentication and payload formatting barriers that previously prevented real-time event ingestion from external systems.

For more information, see the listener subdomains documentation.

Get started with listeners

Enrollment administrators can enable listeners by toggling the feature on in the Data Connection page under Control Panel.

Once enabled, users can access the Listeners tab from within the Data Connection application to connect the Palantir platform to external systems and workflows.

Supported listeners

The Palantir platform currently provides support for the following listeners:


All product names, logos, and brands mentioned are trademarks of their respective owners. All company, product, and service names used in this document are for identification purposes only.