REMINDER: Sign up for the Foundry Newsletter to receive a summary of new products, features, and improvements across the platform directly to your inbox. For more information on how to subscribe, see the Foundry Newsletter and Product Feedback channels announcement.
Share your thoughts about these announcements in our Developer Community Forum ↗.
Date published: 2025-08-05
Native tool calling mode is now available in AIP Agent Studio, allowing agents to leverage built-in tool calling capabilities of supported models for improved speed and performance. Previously, agents with tools were limited to Prompted tool calling mode, which used additional prompt instructions and allowed only one tool call at a time.
You can now select Native tool calling mode under the Tools settings for an AIP Agent.
Tool settings in AIP Agent Studio with Prompted tool calling and Native tool calling tool modes available for selection.
Native tool calling uses the built-in capabilities of supported models to improve tool calling speed and performance, offering more efficient token use and support for parallel tool calls. Parallel tool calling reduces the time required for an agent to answer complex queries that require multiple tool calls by allowing several tool calls to be made simultaneously.
Parallel tool calls for an AIP Agent using Native tool calling mode.
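The latency benefit of parallel tool calls can be illustrated with a minimal sketch. This is not Palantir code; `lookup_weather` and `lookup_population` are hypothetical stand-ins for tools an agent might call, with an artificial delay simulating each tool's round-trip time. Prompted mode issues one call per model turn, while a native-mode runtime can dispatch several calls from a single turn concurrently:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tools; the 0.5 s sleep simulates each tool's round-trip latency.
def lookup_weather(city: str) -> str:
    time.sleep(0.5)
    return f"Weather for {city}: sunny"

def lookup_population(city: str) -> str:
    time.sleep(0.5)
    return f"Population of {city}: 1,000,000"

def run_sequential(calls):
    # Prompted tool calling: one tool call at a time.
    return [tool(arg) for tool, arg in calls]

def run_parallel(calls):
    # Native tool calling: several calls emitted at once can be
    # executed concurrently by the runtime.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(tool, arg) for tool, arg in calls]
        return [f.result() for f in futures]

calls = [(lookup_weather, "Paris"), (lookup_population, "Paris")]

start = time.perf_counter()
sequential = run_sequential(calls)
t_seq = time.perf_counter() - start

start = time.perf_counter()
parallel = run_parallel(calls)
t_par = time.perf_counter() - start

print(f"sequential: {t_seq:.2f}s, parallel: {t_par:.2f}s")
```

With two half-second tools, the sequential path takes roughly twice as long as the parallel one, which is the effect the agent benefits from on complex queries.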
Agents in native tool calling mode can access tool calls from earlier exchanges in a conversation, enabling them to reuse previous results for more efficient responses.
A native tool calling agent reusing a previous tool result in a conversation.
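Reusing earlier tool results amounts to keeping a cache of results keyed by the tool and its arguments for the duration of a conversation. The sketch below is a hedged illustration of that idea, not the actual agent runtime; `search_orders` and `ToolCallCache` are hypothetical names:

```python
import json

class ToolCallCache:
    """Caches tool results within a conversation so that a repeated
    call with identical arguments reuses the earlier result instead
    of re-executing the tool."""
    def __init__(self):
        self._results = {}
        self.executions = 0

    def call(self, tool, name, **kwargs):
        # Key on the tool name plus a canonical form of its arguments.
        key = (name, json.dumps(kwargs, sort_keys=True))
        if key not in self._results:
            self.executions += 1
            self._results[key] = tool(**kwargs)
        return self._results[key]

# Hypothetical tool.
def search_orders(customer: str) -> list:
    return [f"order-1 for {customer}", f"order-2 for {customer}"]

cache = ToolCallCache()
# Turn 1: "What orders does ACME have?"
first = cache.call(search_orders, "search_orders", customer="ACME")
# Turn 2: "Summarize ACME's orders" - same call, result is reused.
second = cache.call(search_orders, "search_orders", customer="ACME")
print(cache.executions)
```

The second turn answers from the cached result, so the tool only executes once across the conversation.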
Native tool calling is currently available with a subset of Palantir-provided models and a subset of tools.
To view the list of supported models, select Native tool calling mode under the Tools settings for your AIP Agent, then open the Model settings. For agents using models or tools that are not yet supported, continue to use Prompted tool calling mode.
For more information, review the AIP Agent Studio documentation on tools.
We want to hear about your experiences with AIP Agent Studio and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the aip-agent-studio tag ↗.
Date published: 2025-08-05
Gemini 2.5 Pro, Gemini 2.5 Flash, and Gemini 2.5 Flash Lite from Google Vertex are now available for general use in AIP. Gemini 2.5 Pro is Google's flagship model for complex, reasoning-heavy tasks, while Gemini 2.5 Flash provides a balance between speed, cost, and performance. Gemini 2.5 Flash Lite is the most efficient model offered. Comparisons between the Gemini 2.5 series models can be found in Google's documentation ↗.
Grok-4 is xAI's flagship model for deep reasoning and computationally intensive tasks. It offers significant improvements over Grok-3 for complex, multi-step problem solving and heavy-duty analysis, making it ideal for users who require robust logic and advanced deduction. Comparisons between Grok-4 and other models in the xAI family can be found in the xAI documentation ↗.
As with all new models, use-case-specific evaluations are the best way to benchmark performance on your task.
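A use-case-specific evaluation can be as simple as scoring each candidate model's answers against expected outputs for your own task. The sketch below is a generic illustration, not a Palantir API; `ask_model` is a hypothetical stand-in for however you invoke a model in your environment, here stubbed with canned responses so the harness is self-contained:

```python
# Hypothetical model-invocation stub; replace with a real model call.
def ask_model(model: str, prompt: str) -> str:
    canned = {
        ("model-a", "2+2"): "4",
        ("model-b", "2+2"): "five",
    }
    return canned.get((model, prompt), "")

def evaluate(model: str, cases: list[tuple[str, str]]) -> float:
    # Fraction of cases where the model's answer matches the expected output.
    correct = sum(1 for prompt, expected in cases
                  if ask_model(model, prompt).strip() == expected)
    return correct / len(cases)

cases = [("2+2", "4")]
scores = {m: evaluate(m, cases) for m in ["model-a", "model-b"]}
print(scores)
```

Running the same case set against each newly enabled model gives a like-for-like comparison grounded in your own workload rather than generic benchmarks.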
You can use these new models in enrollments where the enrollment administrator has enabled the model family.
For a list of all the models available in AIP, review the documentation.
We want to hear about your experiences using language models in the Palantir platform and welcome your feedback. Share your thoughts with Palantir Support channels or on our Developer Community ↗ using the language-model-service tag ↗.