Announcements

REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform, delivered directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.

Share your thoughts about these announcements in our Developer Community Forum ↗.


Claude Sonnet 4.5 and Claude Haiku 4.5 now available in the Japan region

Date published: 2026-02-05

Claude Sonnet 4.5 and Claude Haiku 4.5 models are now available through AWS Bedrock for Japan-georestricted enrollments.

Model overviews

Claude Sonnet 4.5 ↗ is Anthropic's latest medium-weight model with strong performance in coding, math, reasoning, and tool calling, all at a reasonable cost and speed.

  • Context window: 200k tokens
  • Knowledge cutoff: January 2025
  • Modalities: Text, image
  • Capabilities: Tool calling, vision, coding

Claude Haiku 4.5 ↗ is Anthropic's most powerful small model, ideal for real-time, lightweight tasks where speed, cost, and performance are critical.

  • Context window: 200k tokens
  • Knowledge cutoff: February 2025
  • Modalities: Text, image
  • Capabilities: Tool calling, vision, coding
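
For orientation before the setup steps, here is a rough sketch of querying one of these models from a Python transform via Palantir's language model service. The palantir_models library is the documented entry point, but the specific class names, request types, and model identifier below are assumptions modeled on the library's patterns for other providers; substitute the exact bindings your enrollment exposes.

```python
import pandas as pd

from transforms.api import transform, Output
# Import paths and class names below are assumptions; check the modeling
# documentation for the exact bindings available on your enrollment.
from palantir_models.transforms import GenericChatCompletionLanguageModelInput
from palantir_models.models import GenericChatCompletionLanguageModel
from language_model_service_api.languagemodelservice_api_completion import (
    GenericChatCompletionRequest,
)
from language_model_service_api.languagemodelservice_api import (
    GenericChatMessage,
    GenericChatMessageRole,
)


@transform(
    output=Output("/project/claude_smoke_test"),  # hypothetical output path
    model=GenericChatCompletionLanguageModelInput(
        # Hypothetical identifier; look up the real Claude Sonnet 4.5 RID in-platform.
        "ri.language-model-service..language-model.anthropic-claude-sonnet-4-5"
    ),
)
def compute(output, model: GenericChatCompletionLanguageModel):
    # Send a single-turn chat request to the model.
    request = GenericChatCompletionRequest(
        [GenericChatMessage(GenericChatMessageRole.USER, "Say hello in Japanese.")]
    )
    response = model.create_chat_completion(request)
    # Response field name assumed; adjust to the actual response type.
    output.write_pandas(pd.DataFrame([{"completion": response.completion}]))
```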

Getting started

To use these models:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts in Palantir Support channels or on our Developer Community ↗ using the language-model-service tag.


Deploy containers with compute modules

Date published: 2026-02-05

Compute modules are now generally available in Foundry. With compute modules, you can run containers that scale dynamically based on load, bringing your existing code, in any language, into Foundry without rewriting it.

What you can build

Compute modules enable several key workflows in Foundry:

Custom functions and APIs: Create functions that can be called from Workshop, Slate, Ontology SDK applications, and other Foundry environments. Host custom or open-source models from platforms like Hugging Face and query them directly from your applications (see the sketch below).

Data pipelines: Connect to external data sources and ingest data into Foundry streams, datasets, or media sets in real time. Use your own transformation logic to process data before writing it to outputs.

Legacy code integration: Bring business-critical code written in any language into Foundry without translation. Use this code to back pipelines, Workshop modules, AIP Logic functions, or custom Ontology SDK applications.
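
As a concrete illustration of the custom functions workflow, the sketch below registers a Python function inside a container using the open-source foundry-compute-modules library. Treat the package name, import path, and decorator as assumptions based on our reading of that library, and defer to the compute modules documentation for the authoritative setup.

```python
# Minimal compute module function sketch.
# Assumption: the open-source `foundry-compute-modules` PyPI package exposes a
# `function` decorator that registers handlers with the compute module runtime.
from compute_modules.annotations import function


@function
def greet(context, event) -> str:
    # `event` carries the query payload sent by the caller, for example a
    # Workshop module or an Ontology SDK application invoking this function.
    name = event.get("name", "world")
    return f"Hello, {name}!"
```

Once the container image is pushed and attached to a compute module, the platform discovers the registered function and handles scaling and authentication around it.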

An example of a compute module overview in Foundry, with information about the job status, functions, and container metadata.

Why it matters

Compute modules solve the challenge of integrating existing code into Foundry. Instead of rewriting your logic in a Foundry-supported language, containerize it and deploy it directly. The platform handles scaling, authentication, and connections to other Foundry resources automatically.

Key features include:

  • Dynamic horizontal scaling based on current and predicted load
  • Zero-downtime updates when deploying new container versions
  • Native connections to Foundry datasets, Ontology resources, and APIs
  • External connections using REST, WebSockets, SSE, or other protocols
  • Marketplace compatibility for sharing modules across organizations

Get started

Review the compute modules documentation to build your first function or pipeline.


Deploy document extraction workflows with AIP Document Intelligence

Date published: 2026-02-03

AIP Document Intelligence will be generally available on February 4, 2026 and is enabled by default for all AIP enrollments. AIP Document Intelligence is a low-code application for configuring and deploying document extraction workflows. Users can upload sample documents, experiment with different extraction strategies, and evaluate results based on quality, speed, and cost—all before deploying at scale. AIP Document Intelligence then generates Python transforms that can process entire document collections using the selected strategy, converting PDFs and images into structured Markdown with preserved tables and formatting.

Learn more about AIP Document Intelligence.

Result of Layout-aware OCR + Vision LLM extraction with metrics on cost, speed, and token usage.

Compare extraction strategies

AIP Document Intelligence provides multiple extraction approaches, from traditional OCR to vision-language models. You can test each method on your specific documents and view side-by-side comparisons of extraction quality, processing time, and compute costs. This experimentation phase helps teams select the right approach for their use case without writing custom code.

Comparison of Vision LLM Extraction vs. Layout-aware OCR + Vision LLM Extraction shows drastic improvement in complex table extraction quality.

Deploy extraction pipelines in one click

Once a strategy is configured, AIP Document Intelligence generates production-ready Python transforms that process documents at scale. The latest deployment uses lightweight transforms rather than Spark, significantly improving processing speed. Workflows that previously took days extracting data from document collections can now complete the same work in hours. Refer to the documentation for more detailed instructions on how to deploy and customize your Python transforms.
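
To make the shape of that generated code concrete, the sketch below shows roughly what a lightweight extraction transform could look like. The @lightweight decorator comes from Foundry's Python transforms API; the dataset paths, the filesystem calls, and the extract_markdown helper are stand-ins for whatever AIP Document Intelligence generates for your selected strategy.

```python
import pandas as pd

from transforms.api import Input, Output, lightweight, transform


def extract_markdown(raw: bytes) -> str:
    """Stand-in for the strategy-specific extraction logic (for example,
    layout-aware OCR followed by a vision LLM pass)."""
    raise NotImplementedError


@lightweight  # run as a lightweight (non-Spark) transform for faster batches
@transform(
    documents=Input("/project/raw_documents"),     # hypothetical input dataset
    output=Output("/project/extracted_markdown"),  # hypothetical output dataset
)
def extract_documents(documents, output):
    rows = []
    fs = documents.filesystem()  # file access; exact API shape assumed
    for status in fs.ls(glob="**/*.pdf"):
        with fs.open(status.path, "rb") as fh:
            rows.append(
                {"path": status.path, "markdown": extract_markdown(fh.read())}
            )
    output.write_pandas(pd.DataFrame(rows))
```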

Choose a validated extraction strategy and deploy it as a Python transform to batch-process documents.

Maintain quality across diverse document types

Enterprise documents vary widely in structure, formatting, and content density. AIP Document Intelligence handles this diversity through configurable extraction strategies that can adapt to multi-column layouts, embedded tables, and mixed-language content. Users working with maintenance manuals, regulatory filings, and invoices have successfully extracted structured data while preserving critical formatting and relationships.

When to use AIP Document Intelligence

AIP Document Intelligence is designed for workflows where document content needs to be extracted and structured for downstream AI applications. This includes:

  • Populating vector databases for retrieval-augmented generation (RAG) systems
  • Extracting tabular data from reports, invoices, or forms for analysis
  • Converting legacy documentation into searchable, structured formats
  • Preparing training data for domain-specific language models

For workflows that require extracting specific entities (like part numbers, dates, or named entities) rather than full document content, upcoming entity extraction capabilities will provide more targeted functionality.

What's next on the development roadmap?

  • Entity extraction from documents: The team is developing capabilities to extract structured entities, such as equipment identifiers, monetary values, dates, and custom domain concepts, directly from documents. This will enable direct population of Ontology objects from unstructured sources.
  • Broader use of AIP Document Intelligence: Allow extraction configurations to be called directly from AIP Logic and Ontology functions to expand Document Intelligence beyond Python transforms to broader workflow automation scenarios.

Your feedback matters

We want to hear about your experiences using AIP Document Intelligence and welcome your feedback. Share your thoughts in Palantir Support channels or on our Developer Community ↗ using the aip-document-intelligence tag ↗.


GPT-5.2 Codex now available in AIP

Date published: 2026-02-03

GPT-5.2 Codex is now available directly from OpenAI for non-georestricted enrollments.

Model overview

GPT-5.2 Codex ↗ is a coding-optimized version of OpenAI's GPT-5.2 model, with improved agentic coding capabilities, context compaction, and stronger performance on large code changes such as refactors and migrations.

  • Context window: 400,000 tokens
  • Knowledge cutoff: August 2025
  • Modalities: Text, image
  • Capabilities: Responses API, structured outputs, function calling
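
As a quick illustration of the Responses API capability listed above, a direct call with the OpenAI Python SDK looks like the sketch below. The model identifier string is an assumption, and in-platform usage goes through AIP's language model service rather than the SDK directly.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5.2-codex",  # identifier assumed; confirm the exact model name
    input="Plan a safe refactor that splits this module into two packages.",
)

# output_text concatenates the text segments of the model's response.
print(response.output_text)
```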

Getting started

To use this model:

Your feedback matters

We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts in Palantir Support channels or on our Developer Community ↗ using the language-model-service tag.


Multi-ontology support in Workflow Lineage

Date published: 2026-02-03

Workflow Lineage just became a lot more robust: you can now visualize resources across multiple ontologies in one unified graph. Instantly identify cross-ontology relationships, spot external resources at a glance, and switch between ontologies without leaving your workflow view.

The Workflow Lineage graph now displays resources across multiple ontologies, with visual indicators highlighting nodes from outside the selected ontology.

What's new?

  • Unified visualization: The Workflow Lineage graph now displays all resource nodes across different ontologies in a single view.
  • Cross-ontology awareness: Object, interface, and action nodes from other ontologies appear grayed out with a warning icon, so you can instantly identify their origin.
  • Smart warnings: When multiple ontologies are present, a warning icon appears next to the ontology icon in the upper right of the graph, keeping you informed at a glance.
  • Ontology switching: View all ontologies present in your graph and easily switch between them using the ontology icon.

Easily switch between ontologies using the ontology (blue cube) icon to view and navigate all ontologies present in your graph.

For action-type nodes from outside the selected ontology, functionality is limited. For example, bulk updates are only possible for function-backed actions within your currently selected ontology.

Share your thoughts

We welcome your feedback about Workflow Lineage in our Palantir Support channels, and on our Developer Community ↗ using the workflow-lineage tag ↗.