REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2026-01-27
The Resource Management application now includes rate limit hit tracking in the AIP usage and limits view. Rate limit hits provide visibility into when users, projects, or applications reach their assigned rate limits in Foundry. This feature enables administrators to proactively monitor usage, identify capacity constraints, and troubleshoot issues related to rate limiting.
To view rate limit hit information, navigate to Resource Management > AIP usage and limits > View usage. Select the model, project, or resource, and view the associated rate limit hits in the table. If rate limit hits are not visible, the feature may not yet be available in your environment.
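When a workflow does hit its rate limit, a common client-side mitigation is to retry with capped exponential backoff. The sketch below is a generic pattern, not a platform API; the function name and defaults are illustrative assumptions.

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0, jitter=True):
    """Return the wait time in seconds before retry number `attempt` (0-indexed)."""
    delay = min(cap, base * (2 ** attempt))  # exponential growth, capped at `cap`
    if jitter:
        delay = random.uniform(0, delay)     # "full jitter" spreads out retries
    return delay

# With jitter disabled, attempts 0..5 wait 1, 2, 4, 8, 16, 32 seconds.
```

Pairing backoff with the rate limit hit table above makes it easier to distinguish transient spikes from workflows that need a higher limit.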
We want to hear about your experiences with rate limit tracking in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2026-01-27
GPT-4.1 mini is now available from Azure OpenAI for georestricted enrollments in Australia, Canada, Japan, and the United Kingdom.
GPT-4.1 mini ↗ is a lightweight alternative to the GPT-4.1 model from OpenAI, with faster and cheaper responses on average.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service ↗ tag.
Date published: 2026-01-27
Active preview is now available for users of the Palantir extension for Visual Studio Code, enabling automatic updates to Python transform previews every time code is saved. Active preview eliminates the need to manually trigger previews after each code change, keeping your preview panel in sync and providing continuous feedback during development. Intelligent caching keeps subsequent previews fast by reusing code-defined filter results, project resources, and dependencies.
Active preview is ideal for iterative development workflows with frequent changes to transform logic. It is particularly effective when working with code-defined filters, as cached filter results significantly accelerate the preview process.

The Active preview toggle in the Preview tab.
After initiating a preview, you can enable active preview using the toggle in the Preview panel. Use active preview for continuous feedback, faster preview times, and more efficient iteration.
Learn more about active preview.
Date published: 2026-01-22
Reserved capacity is now available by default on many LLMs for non-georestricted enrollments and some US/EU georestricted enrollments. Reserved capacity helps ensure uninterrupted operations by protecting critical production workflows from shared project or enrollment limits on tokens and requests.
Reserved capacity allows you to reserve dedicated tokens per minute (TPM) and requests per minute (RPM) on LLMs for your most critical workflows. Learn more about this feature in the documentation.
There is no additional cost to use reserved capacity.
To view all reserved capacity offerings on an enrollment:

You can view reserved capacity in Resource Management.
When a project reaches its reserved capacity limit, that project will seamlessly continue operating by using the standard project and enrollment limits. Reserved capacity provides extra throughput on top of your existing limits and is available only for projects specifically enabled by an enrollment administrator.
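The reserved-then-shared behavior described above can be pictured as two pools drawn down in order. This is a minimal illustrative sketch of that accounting, not the platform's internals; the class name and numbers are assumptions.

```python
class CapacityPool:
    """Sketch of reserved capacity with fallback to shared limits:
    reserved tokens are consumed first, and overflow draws from the
    shared pool until the per-minute window resets."""

    def __init__(self, reserved_tpm, shared_tpm):
        self.reserved = reserved_tpm
        self.shared = shared_tpm

    def try_consume(self, tokens):
        """Return "reserved" or "shared" for the pool used, or None if rate limited."""
        if self.reserved >= tokens:
            self.reserved -= tokens
            return "reserved"
        if self.shared >= tokens:
            self.shared -= tokens
            return "shared"
        return None
```

Under this model, a project with reserved capacity never degrades below the shared limits it would otherwise have, which matches the "extra throughput on top of your existing limits" framing above.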
We want to hear about your experiences using reserved capacity in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2026-01-22
Managing data access and compliance for media sets often requires precise control over how markings are inherited and applied across your ontology. Configurable markings for media reference properties in Ontology Manager are now available on all Foundry enrollments. This capability allows you to define which markings are inherited per media source when configuring media reference properties.
The markings configuration is accessible through a multi-step dialog in the Capabilities tab of Ontology Manager when adding or editing a media reference property.

Inherited markings configuration dialog of a media reference property.
We want to hear about your experiences with ontology management and welcome your feedback. Share your thoughts through Palantir Support channels or our Developer Community ↗ using the ontology-management ↗ tag.
Date published: 2026-01-15
Google Cloud Platform's VertexAI models are now available to be enabled in AIP on IL2 enrollments.
VertexAI-enabled IL2 enrollments will be able to use Google's best-in-class Gemini models. The Gemini 2.5 series delivers advanced reasoning, coding, and multimodal capabilities across a range of powerful models. Each variant is tailored for different performance needs, from high-complexity workloads to fast, lightweight tasks.
Additionally, IL2 enrollments with the VertexAI model family enabled will gain access to the Anthropic Claude 4.5 and Claude 4 model families. These models deliver cutting-edge intelligence, advanced coding, and robust agentic capabilities across a range of performance and efficiency levels.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2026-01-15
Gemini 3 series models are now available from VertexAI on commercial, non-georestricted enrollments.
Gemini 3 Pro ↗ is Google's most powerful model to date, best suited for agentic tasks, advanced coding, long context understanding, multimodal understanding, and algorithmic development.
Gemini 3 Flash ↗ is Google's most efficient model to date, best suited for everyday tasks, agentic coding, advanced reasoning, multimodal understanding, and long context understanding.
Both models share the following specifications:
Note that Gemini 3 Pro and Gemini 3 Flash are still in preview status from GCP; within AIP, however, both models have all of the characteristics and behavior of a generally available AIP model.
To use these models:
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.
Date published: 2026-01-15
A new connector is now generally available in Data Connection for all enrollments, allowing Foundry instances to interface with each other. Users can now benefit from first-class support for communication between Foundry instances, in addition to existing support for communication between Foundry and systems like PostgreSQL, AWS S3, Snowflake, and hundreds more.
The Foundry connector works with both direct connections and agent workers, supports batch, incremental, and streaming ingests, and is designed to accommodate multiple forms of authentication. By enabling a Foundry instance to treat another as a source, the Foundry connector makes sensitive data transfers and migrations between instances a seamless experience.
Learn more about the Foundry connector.
Date published: 2026-01-15
In Compass, you can now pin important files like documentation, user guides, and frequently used resources to the top of the Files page for quick access.

In Compass, you can now pin resources to appear at the top of the Files tab page.
This feature replaces the previous Project Catalog tab. By bringing pinned resources directly into the Files tab, your most important materials are now front and center where you naturally look first.
To pin a resource, select it and use the Pin in project option:

Pin a resource using the Pin in project icon, or right-click the selected file for the same option.
Portfolio Catalogs remain an aggregation of all pinned resources across the contained projects.
Let us know about your experience using Compass. Leave feedback with our Palantir Support channels or in our Developer Community ↗ using the compass tag ↗.
Date published: 2026-01-15
Administrators can now manage space roles and workflow permissions from the Space permissions page in Control Panel. Each space comes with a set of default roles and the ability to create custom roles for greater flexibility in managing permissions. For each role, you can open the workflows dropdown menu to view the permissions granted with the role. Select a role to view the role grants in the panel on the right, where you can add or remove users.

The Space permissions page in Control Panel, showing the various workflows granted with the Contributor role.
To create a custom role, select + New role in the top right of the page, then select the workflows to include with this role. After creating the custom role, you can grant that role to users the same way you would for other roles. Custom roles can be edited or deleted through the Actions menu in the top right of each custom role.
Note that custom roles are "frozen", meaning that new workflows added to default roles will not automatically apply to custom roles. To include new workflows in a custom role, select Edit role and add them manually.
For more information on space and organization management, review our documentation.
Date published: 2026-01-15
All ontologies created after Wednesday, January 21, will use the project permissions system by default. With project-based permissions, the ability to view, edit, and manage ontology resources is managed through Compass, the Palantir platform's filesystem. This is the same permissions system other resource types use.
This project-based permissions approach replaces the previous permission models: ontology roles and datasource-derived permissions.
The project-based permission model offers multiple benefits:
To migrate existing Ontology resources to project-based permissions, review migration guidance.
To continue using a legacy permissions system for new ontologies instead, navigate to the Configuration tab in Ontology Manager and turn off the New ontology resources will be saved in a project setting.

To prevent newly-created ontologies from defaulting to the project-based permissions system, navigate to the Configuration tab in Ontology Manager and turn off the New ontology resources will be saved in a project setting.
Project permissions for the ontology are not yet available with classification-based access controls (CBAC) or default ontologies.
We hope these enhancements improve and simplify your permission workflows. Share your thoughts through Palantir Support channels, or post on our Developer Community ↗ using the Ontology management tag ↗.
Date published: 2026-01-13
Ontology MCP enables developers to expose Developer Console applications to AI agents through the Model Context Protocol (MCP), an open standard for connecting agents to data sources. Resources in your Developer Console application (including objects, actions, and queries) become tools that agents can discover and use. This works with pro-code frameworks (LangChain, CrewAI, custom Python/TypeScript) and low-code platforms (Copilot Studio, other agent builders). Ontology MCP will be available in beta for all users with Developer Console access by the end of January 2026.

The Ontology MCP configuration page in Developer Console with the Claude Code agent selected.
Previously, connecting AI agents to the Ontology required building custom integrations for each source system and agent framework, requiring deep knowledge of both the framework's API patterns and Foundry's SDK. With Ontology MCP, you can configure your MCP server once and connect any compatible agent framework. The MCP protocol handles authentication, tool discovery, and execution. Your Developer Console application defines which resources agents can access, and MCP ensures agents can only interact with the resources you have explicitly exposed.
To start using Ontology MCP, open your Developer Console application and navigate to the MCP tab on the left. Select Enable MCP clients to connect to this application using MCP, and add a Markdown-supported description for the MCP server. Then, select your agent framework from the list to view framework-specific installation instructions.
Ontology MCP extends Ontology access to external agent frameworks. AIP Agents provide native agent-building capabilities within the platform. Use AIP Agents for in-platform workflows and Ontology MCP when connecting external frameworks to your Ontology.
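MCP is a JSON-RPC-based protocol, so once tools are discovered, an agent invokes one with a `tools/call` request. The sketch below shows the general shape of such a message; the tool name, object type, and arguments are hypothetical, since the actual tools exposed depend on the objects, actions, and queries in your Developer Console application.

```python
import json

# Hypothetical tool invocation; real tool names come from tool discovery
# against your MCP server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search-objects",
        "arguments": {"objectType": "Aircraft", "pageSize": 10},
    },
}

wire_message = json.dumps(request)  # what the agent framework sends over the MCP transport
```

The agent framework handles constructing and sending these messages for you; the point is that any MCP-compatible client speaks this same shape, which is why one server configuration works across frameworks.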
For more information about Ontology MCP and how to use agents in Foundry and your Developer Console applications, watch the YouTube video ↗ from our DevCon 4 presentation.
Date published: 2026-01-08
Listeners, a new Data Connection feature, enable you to receive inbound webhook events directly into the Palantir platform. This feature will be released in public beta the week of January 5th. Integrating real-time events from external systems into Foundry has traditionally been challenging when those systems lack OAuth 2.0 authentication support or cannot format payloads to match standard Foundry API endpoints. Listeners address this gap by provisioning URL endpoints that implement system-specific message signing and verification schemes, agnostic to data shape, providing a simple, low-latency mechanism to accept event streams from external sources.

Select from Data Connection's supported listeners to configure and receive inbound webhook events.
Leverage the listener output stream with streaming pipelines, automations, or batch analysis to create powerful event-processing workflows.
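Many external systems sign their webhook payloads with an HMAC so the receiver can verify authenticity. The sketch below shows the generic shape of such verification; the secret, payload, and scheme are illustrative assumptions, since each listener implements the specific signing scheme of its external system.

```python
import hashlib
import hmac

def verify_signature(secret, payload, signature_hex):
    """Recompute the HMAC-SHA256 of the raw payload bytes and compare it
    to the signature the sender attached, using a constant-time comparison
    to avoid timing attacks."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Hypothetical shared secret and payload for illustration:
secret = b"shared-webhook-secret"
payload = b'{"event": "order.created"}'
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
```

A tampered payload or a wrong secret produces a different digest, so verification fails and the event can be rejected before it enters the stream.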
You can generate a subdomain for your listener to establish a distinct ingress point with a wider range of network ingress compared to the rest of your Foundry enrollment. This isolates webhook traffic from other platform operations, provides additional control over external system connections, and enables unified governance with additional security benefits.
Listeners also come with an endpoint rotation capability that provides protection if a listener endpoint is compromised. Migrate to a new endpoint with zero downtime if the URL is accidentally exposed. When rotating your endpoint, you can set an expiration date for seamless zero-downtime transitions, or delete the old endpoint immediately if faster action is required. Once an endpoint expires, it will no longer process events.
Data Connection listeners expand Foundry's integration capabilities by removing authentication and payload formatting barriers that previously prevented real-time event ingestion from external systems.
For more information, see the listener subdomains documentation.
Enrollment administrators can enable listeners by toggling the feature on in the Data Connection page under Control Panel.
Once enabled, users can access the Listeners tab from within the Data Connection application to connect the Palantir platform to external systems and workflows.
The Palantir platform currently provides support for the following listeners:
All product names, logos, and brands mentioned are trademarks of their respective owners. All company, product, and service names used in this document are for identification purposes only.