Ontology MCP

Ontology MCP is a Developer Console feature that exposes your application's ontology resources as Model Context Protocol (MCP) tools. This enables AI agents and external systems to interact with your ontology as MCP clients.

What is Ontology MCP?

Ontology MCP makes your application's ontology resources (such as object types, action types, and query functions) available as MCP tools. External AI agents can then discover and use these tools to read objects, execute actions, and query data from your ontology.

The Model Context Protocol ↗ is an open standard that enables AI systems to securely connect to external data sources and tools. By enabling Ontology MCP, your application becomes an MCP server that AI agents can connect to as MCP clients.

Ontology MCP vs. Palantir MCP

Ontology MCP and Palantir MCP serve different purposes:

  • Ontology MCP is designed for ontology consumers: external AI agents that need to write data to your ontology safely. It allows LLM agents to read objects, execute predefined actions, and query data, while application scopes restrict which actions an agent can take. This makes it safe for external agents to interact with production ontology data.

  • Palantir MCP is designed for ontology builders, such as developers working on OSDK applications, datasets, transforms, and ontology development. It provides 70+ tools for building and modifying ontology types (object types, link types, and action types), but it cannot write actual ontology data. Palantir MCP focuses on development workflows, not production data interaction.

In summary: Ontology MCP enables controlled writes to ontology data, while Palantir MCP enables modifications to ontology structure.

Example use cases for Ontology MCP

Common use cases for Ontology MCP include:

  • Copilot Studio integration: You can integrate your Office applications with the ontology, enabling you to create documents with data from the ontology, chat with an ontology-backed agent in Teams, and prepare for meetings based on your calendar data.
  • Gemini Enterprise workflows: You can use Ontology MCP to read and write data from the ontology without leaving your Gemini Enterprise portal.
  • Headless AI agents: You can use any agent framework, such as the Anthropic SDK or Google ADK, to build a pro-code agentic workflow that reads and writes to the ontology, uses the ontology as a memory and tracking system, or even runs headless in response to ontology changes (see the sketch after this list).
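
As an illustration of the headless pattern, the sketch below bridges the tools exposed by an Ontology MCP server to an LLM through the Anthropic SDK and the official MCP Python SDK. This is only a sketch, not a definitive implementation: it assumes an already-connected MCP ClientSession (see the connection example later on this page), an ANTHROPIC_API_KEY in the environment, and an illustrative model name.

```python
# Sketch only: drives an LLM agent with the tools exposed by an Ontology MCP
# server. Assumes a connected mcp.ClientSession and an ANTHROPIC_API_KEY in
# the environment; the model name is illustrative.
from anthropic import AsyncAnthropic
from mcp import ClientSession


async def run_headless_agent(session: ClientSession, prompt: str) -> str:
    llm = AsyncAnthropic()

    # Expose the Ontology MCP tools to the model in Anthropic's tool format.
    mcp_tools = (await session.list_tools()).tools
    tools = [
        {"name": t.name, "description": t.description or "", "input_schema": t.inputSchema}
        for t in mcp_tools
    ]

    messages = [{"role": "user", "content": prompt}]
    while True:
        response = await llm.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            # The model produced a final answer; return its text.
            return "".join(b.text for b in response.content if b.type == "text")

        # Execute every requested tool against the Ontology MCP server and
        # feed the results back to the model.
        messages.append({"role": "assistant", "content": response.content})
        results = []
        for block in response.content:
            if block.type == "tool_use":
                result = await session.call_tool(block.name, arguments=block.input)
                text = "".join(c.text for c in result.content if c.type == "text")
                results.append(
                    {"type": "tool_result", "tool_use_id": block.id, "content": text}
                )
        messages.append({"role": "user", "content": results})
```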

How to enable Ontology MCP

To enable Ontology MCP for your application:

  1. Navigate to your application in Developer Console.
  2. Select the MCP page from the right-side menu.
  3. Use the toggle at the top right to enable the MCP server for your application.
  4. Use the configuration details listed on the page for your chosen agent framework.

The Ontology MCP toggle in Developer Console application settings.

Even if your framework of choice is not listed, the configuration is likely similar, provided your framework can act as an MCP client.

Once enabled, your application will expose its ontology resources as MCP tools. External MCP clients can connect to your application using the connection details shown in the settings panel.
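
As a minimal sketch, the following shows an MCP client built with the official MCP Python SDK connecting over streamable HTTP and listing the tools the application exposes. The endpoint URL and token handling are placeholders; use the connection details and authentication flow shown on your application's MCP page.

```python
# Sketch only: connect to an Ontology MCP server and list its tools.
# ONTOLOGY_MCP_URL and ONTOLOGY_MCP_TOKEN are placeholders for the
# connection details shown in Developer Console.
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = os.environ["ONTOLOGY_MCP_URL"]    # endpoint from the MCP page
TOKEN = os.environ["ONTOLOGY_MCP_TOKEN"]    # OAuth2 token for your application


async def main() -> None:
    headers = {"Authorization": f"Bearer {TOKEN}"}
    async with streamablehttp_client(MCP_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the ontology resources exposed as MCP tools.
            for tool in (await session.list_tools()).tools:
                print(tool.name, "-", tool.description)


if __name__ == "__main__":
    asyncio.run(main())
```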

Ensure your application has the appropriate application scopes configured to control which ontology resources are accessible through MCP.

LLM disclaimer

By enabling Ontology MCP on your local device with LLMs hosted outside of Palantir AIP, you are making data in your Palantir environment available to an external MCP client. Ensure that this action is compliant with your organization's policies.

When using Ontology MCP with external AI systems, consider the following:

  • Review your organization's data governance and compliance policies before enabling Ontology MCP.
  • Ensure that the ontology resources exposed through MCP comply with your data security requirements.
  • Use application scopes and permissions to restrict access to sensitive resources.

Using Ontology MCP

MCP server description

Some MCP clients, such as Claude, inject the server description into the agent's context. You can edit the server description using the markdown editor on the page to include server-level instructions for the agent.

As an example, you can instruct the agent to always search for the object primary key using the search tool before calling an action that takes the primary key as a parameter.
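
For instance, a server description along these lines captures that guidance (the ontology and tool names below are purely illustrative):

```
This server exposes the task-tracking ontology for Acme Corp.
Always search for an object's primary key using the search tool before
calling an action that takes the primary key as a parameter.
```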

For the agent to be able to read the MCP server description, it must be granted read permission on the Developer Console application. On the Sharing & Tokens page, add either the end users connecting to your agents or the service user your agent uses as a viewer of the application.

Action tool description

Use the Agent tool description field in the Ontology Manager application to update the description the agent sees when using the action as a tool. This lets you give AI agents specific guidance about when and how to use each action. For example, an action that creates a new task and takes a project ID for linking the task could include instructions on how to obtain that project ID, as in the example below.

The Agent tool description field in Ontology Manager.
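
An illustrative tool description for the create-task action mentioned above might read as follows (the object and parameter names are hypothetical):

```
Creates a new task and links it to an existing project.
The projectId parameter must be the primary key of a Project object;
look it up with the project search tool before calling this action.
```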

Microsoft Copilot Studio integration

Microsoft Copilot Studio integration only supports the authorization code grant with a Confidential Client. This means that when creating the Developer Console application for your Ontology MCP integration with Microsoft Copilot Studio, you should choose Backend service and User's permissions. This creates the required service user that Copilot Studio uses to issue the token on behalf of your users.

See the Palantir DevCon4 presentation and demo on YouTube ↗ for additional examples and guidance on using Ontology MCP.