Selecting the right modeling tool

Palantir's modeling suite of products enables users to develop, manage, and operationalize models. This page compares different products to help you choose the right tool for your needs.

Feature engineering

| Product | Details |
| --- | --- |
| Pipeline Builder | Large-scale point-and-click data transformation |
| Code Workspaces | Interactive, pro-code data analysis and transformation in familiar environments such as JupyterLab® |
| Python Transforms | PySpark data pipeline development in Foundry's web-based IDE, Code Repositories |

No-code model training

No-code model training tools are available in Model Studio, providing a simple point-and-click interface for creating production-grade machine learning models.

Pro-code model training

Available libraries

Planned deprecation

The foundry_ml library and dataset-backed models have entered the planned deprecation phase of development and will be removed alongside the Python 3.9 deprecation in October 2025. Support remains available until the deprecation date. You should migrate your workflows to the palantir_models library now; contact Palantir Support if you require additional help migrating your workflows.

The palantir_models library provides flexible tooling to publish and consume models within the Palantir platform, using the concept of model adapters.
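To make the model adapter concept concrete, below is a minimal, library-free sketch of the adapter pattern: a thin wrapper that gives any trained model a uniform prediction interface the platform can call. The class and method names here are illustrative only and do not reflect the actual palantir_models API.

```python
# Conceptual sketch of the model adapter pattern (NOT the real
# palantir_models API): an adapter wraps a trained artifact and exposes
# a uniform predict() entry point, so the platform can publish and
# consume models regardless of the underlying framework.

class SketchModelAdapter:
    """Hypothetical adapter wrapping any callable model."""

    def __init__(self, model):
        self.model = model  # the trained artifact being wrapped

    def predict(self, rows):
        # Uniform entry point: the platform would call this the same
        # way for scikit-learn, PyTorch, or any other framework.
        return [self.model(row) for row in rows]


# Usage: wrap a trivial "model" (here, just a function) and run inference.
double = SketchModelAdapter(lambda x: 2 * x)
print(double.predict([1, 2, 3]))  # [2, 4, 6]
```

The real library adds serialization and API-definition hooks on top of this shape; see the palantir_models documentation for the actual interface.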

| Library | Details |
| --- | --- |
| palantir_models | Flexible library to publish and consume models in Foundry through model adapters; supports models produced in platform, external models, and containerized models |
| foundry_ml | Legacy model development library, formally deprecated as of October 2025; use palantir_models instead of foundry_ml unless absolutely necessary |

Code authoring environments

| Product | Library support | Details |
| --- | --- | --- |
| Code Workspaces | palantir_models | Interactive model development in Jupyter® notebooks |
| Code Repositories | palantir_models | Powerful web-based IDE with native CI/CD features and support for modeling workflows; less interactive than notebooks |
| Code Workbooks | foundry_ml | Modeling support in Code Workbooks is limited to foundry_ml models, which were deprecated in October 2025 |

Training metrics tracking

| Product | Details |
| --- | --- |
| Experiments | Framework for logging metrics and hyperparameters during a model training job |

Batch inference

Models can be used to run large-scale batch inference pipelines across datasets.

| Product | Details | Caveats |
| --- | --- | --- |
| Modeling objective batch deployments | Modeling Objectives offers broader model management features such as model release management, upgrades, evaluation, and more | Does not support multi-output or external models |
| Python transforms | Batch inference can be run directly in Python transforms | Use the @lightweight decorator unless specifically looking to distribute inference over Spark |
| Jupyter® Notebook | Users can create scheduled training and/or inference jobs directly from Code Workspaces | Only supports running inference on models created from the same notebook; use Python transforms to orchestrate models created elsewhere |
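As a rough illustration of the Python transforms option, the sketch below shows the shape of a batch-inference step as plain Python. The Foundry-specific pieces (the @lightweight and transform decorators, model and dataset inputs) are deliberately omitted so it runs standalone; the function and variable names are assumptions, not Foundry APIs.

```python
# Library-free sketch of batch inference as it might be structured in a
# Python transform. In a real transform, `model` would be loaded from a
# model input and `rows` would come from an input dataset; here both are
# plain Python values so the example is self-contained.

def batch_inference(model, rows):
    # Apply the model to every row and keep the input alongside the
    # prediction, mirroring a typical inference output dataset.
    return [{"input": row, "prediction": model(row)} for row in rows]


# Usage: score a small batch with a trivial stand-in "model".
scores = batch_inference(lambda x: x > 0, [-1, 0, 3])
print(scores)
```

The per-row loop here is the non-distributed case the @lightweight caveat above refers to; only when inference genuinely needs to be distributed over Spark would the transform operate on Spark DataFrames instead.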

Model deployment

Models can be deployed in Foundry behind a REST API; deploying a model operationalizes the model for use both inside and outside of Foundry.

| Product | Details |
| --- | --- |
| Model direct deployments | Auto-upgrading model deployments; best for quick iteration and deployment |
| Modeling objective live deployments | Production-grade modeling project management; modeling objectives provide tooling for model release management, upgrades, evaluation, and more |

Learn more about the difference between direct deployments and deployments through modeling objectives.

Functions integration

Publishing models as functions makes it easy to use models for live inference in downstream Foundry applications, including Workshop, Slate, actions, and more.

| Product | Best for |
| --- | --- |
| Direct function publication | No-code function creation on models with live deployments, allowing integration with the Ontology. The same functionality is available in the Model and Modeling Objectives applications. |
| Importing model functions in TypeScript v1 repositories | Import model functions into TypeScript v1 functions to further process predictions (for example, make Ontology edits), with support for Model API type checking and Marketplace deployment. |
| Calling model functions via the Platform SDK from Python or TypeScript v2 functions | Model functions can also be called from Python or TypeScript v2 functions if the model and associated functions will not be deployed via Marketplace. |