You can use your own LLMs in the Palantir platform using function interfaces. For example, you can bring your own fine-tuned model to use with AIP Logic, enabling more flexibility and choice for users. Function interfaces enable you to register and use LLMs whether they are hosted on-premises, hosted on your own cloud, or fine-tuned on another platform.
There are currently two ways to build a custom connection to OpenAI: through a webhook registered in Data Connection (the approach described in this tutorial) or through a direct source connection.
Direct source connections are currently in beta and may not be available on your enrollment.
This tutorial explains how to create a source to define your LLM’s API endpoint, call the model from a TypeScript function using a webhook, and publish the function for use in the Palantir platform (for instance, with AIP Logic or Pipeline Builder).
In this tutorial, you will write a TypeScript function that calls an external OpenAI model via a webhook, implements the ChatCompletion function interface, and registers the model in Foundry. Completing the tutorial will allow you to use the custom LLM API natively in AIP Logic.
The tutorial covers the following steps:
Set up a source and webhook for calls to OpenAI.
Write a TypeScript function, annotated with the @ChatCompletion decorator, that implements the ChatCompletion function interface and calls out to your source.
Publish the function and use it in AIP Logic or Pipeline Builder.
Set up a source and webhook
To maintain platform security, you need to register the call to OpenAI as a webhook using the Data Connection application. The steps below describe how to set up a REST API source and webhook with Data Connection.
Learn more about how to create a webhook and use it in a TypeScript function.
Open the Data Connection application.
Select New Source.
Search for REST API.
Under Protocol sources, select REST API.
On the Connect to your data source page, select Direct connection.
Name your source and save the source in a folder. This example uses the source name MyOpenAI.
Under Connection details, perform the following steps:
Set the URL to https://api.openai.com and set Authentication to Bearer token. Learn more about OpenAI APIs ↗.
Set the port to 443.
Add an additional secret named APIKey and paste the same API key used for the bearer token field.
Add https://api.openai.com to the allowlist for network egress between Palantir's managed SaaS platform and external domains. You can do this by navigating to Network connectivity and choosing Request and self-approve new policy.
You must enable Export configurations to use this API endpoint in platform applications like AIP Logic and Pipeline Builder; toggle the export options under Export configurations for this source.
You must also enable code imports to use this endpoint in your function.
Select Continue and Get started to complete your API endpoint and egress setup.
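Optionally, before creating the webhook, you can sanity-check your API key against OpenAI's Create chat completion endpoint from your local machine. This is a minimal sketch, run outside the platform, assuming your key is exported as the OPENAI_API_KEY environment variable and that your account has access to the gpt-4o model.

```bash
# Minimal local sanity check against the OpenAI Create chat completion endpoint.
# Assumes OPENAI_API_KEY is set in your shell and your account can access gpt-4o.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "hello world"}]
  }'
```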
On the Source overview page, select Create webhook.
Save your webhook with the name Create Chat Completion and API name CreateChatCompletion.
Import the example curl from the OpenAI Create chat completion documentation ↗.
Configure the messages and model input parameters (see the example request after these steps).
Configure the choices and usage output parameters (see the example response after these steps).
Test and save your webhook.
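For reference, the following is an abridged sketch of OpenAI's Create chat completion request and response, based on OpenAI's public API documentation; the values are illustrative. The model and messages fields correspond to the webhook's input parameters, and choices and usage correspond to its output parameters.

An abridged request body:

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "user", "content": "hello world" }
  ]
}
```

An abridged response:

```json
{
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 10,
    "total_tokens": 19
  }
}
```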
Now you have a REST source and a webhook that you can import into your TypeScript repository.
Implement the ChatCompletion interface with a TypeScript function
After setting up a webhook that retrieves a chat completion from an external LLM, you can create a function that implements the ChatCompletion interface provided by Foundry and calls out to your OpenAI webhook.
AIP Logic searches for all functions that implement the ChatCompletion interface when displaying registered models, so you must declare that your function implements this interface. Additionally, declaring that your function implements this interface enforces at compile time that the signature matches the expected shape.
You can write your chat completion implementation in TypeScript. To do so, you will need to create a new TypeScript functions repository.
This example function will:
Convert the incoming messages into the format expected by the OpenAI webhook.
Call the CreateChatCompletion webhook on the OpenAI source.
Return the completion text and token usage in the shape required by the ChatCompletion interface.
This tutorial assumes a basic understanding of writing TypeScript functions in Foundry. Review the getting started guide for an introduction to TypeScript functions in Foundry.
To start, you will need to import both the OpenAI webhook and the ChatCompletion function interface into the repository. With the TypeScript functions repository open, select the resource imports icon and import both the chat completion function interface and the OpenAI source which is associated with the webhook you previously created.
Use the Add option in the Resource imports side panel to import:
The OpenAI REST API source that contains the CreateChatCompletion webhook
The ChatCompletion function interface
In the Resource imports panel, search for the OpenAI source that contains the CreateChatCompletion webhook and import it into your TypeScript repository. Learn more about how to import resources into Code Repositories.
In the Resource imports panel, search for the ChatCompletion interface and import it into your TypeScript repository.
At this point, your Resource imports should include both the OpenAI source and the ChatCompletion interface.
After importing resources, the Task Runner will re-run a localDev task that generates the relevant code bindings. You can check on the progress of this task by opening the Task Runner tab on the ribbon at the bottom of the page.
In this section, you will write a TypeScript function that calls the previously-created OpenAI webhook and implements the chat completion interface.
Importing both the CreateChatCompletion webhook (via the OpenAI source) and the ChatCompletion function interface will generate code bindings to interact with those resources.
You can find code snippets to set up your function scaffolding by selecting the ChatCompletion function interface in the Resource imports panel.
The following is an example of what your code might look like at this point:
```typescript
// index.ts
import { ChatCompletion } from "@palantir/languagemodelservice/contracts";
import {
  FunctionsGenericChatCompletionRequestMessages,
  GenericCompletionParams,
  FunctionsGenericChatCompletionResponse
} from "@palantir/languagemodelservice/api";
import { OpenAI } from "@foundry/external-systems/sources";

export class MyFunctions {
  // This decorator tells the compiler and Foundry that our function is implementing the ChatCompletion interface.
  // Note that the generic @Function decorator is not required.
  @ChatCompletion()
  public myCustomFunction(
    messages: FunctionsGenericChatCompletionRequestMessages,
    params: GenericCompletionParams
  ): FunctionsGenericChatCompletionResponse {
    // TODO: Implement the body
  }
}
```
This section contains the simplest implementation of this function that completes the request.
```typescript
import { isErr, UserFacingError } from "@foundry/functions-api";
import * as FunctionsExperimentalApi from "@foundry/functions-experimental-api";
import { OpenAI } from "@foundry/external-systems/sources";
import { ChatCompletion } from "@palantir/languagemodelservice/contracts";
import {
  FunctionsGenericChatCompletionRequestMessages,
  GenericChatMessage,
  ChatMessageRole,
  GenericCompletionParams,
  FunctionsGenericChatCompletionResponse
} from "@palantir/languagemodelservice/api";

export class MyFunctions {
  @ChatCompletion()
  public async myChatCompletion(
    messages: FunctionsGenericChatCompletionRequestMessages,
    params: GenericCompletionParams
  ): Promise<FunctionsGenericChatCompletionResponse> {
    // Call the CreateChatCompletion webhook on the imported OpenAI source.
    const res = await OpenAI.webhooks.CreateChatCompletion.call({
      model: "gpt-4o",
      messages: convertToWebhookList(messages)
    });
    if (isErr(res)) {
      throw new UserFacingError("Error from OpenAI.");
    }
    // Map the webhook response to the shape expected by the ChatCompletion interface.
    return {
      completion: res.value.output.choices[0].message.content ?? "No response from AI.",
      tokenUsage: {
        promptTokens: res.value.output.usage.prompt_tokens,
        maxTokens: res.value.output.usage.total_tokens,
        completionTokens: res.value.output.usage.completion_tokens,
      }
    };
  }
}

// Convert Foundry's generic chat messages into the { role, content } objects the webhook expects.
function convertToWebhookList(
  messages: FunctionsGenericChatCompletionRequestMessages
): { role: string; content: string }[] {
  return messages.map((genericChatMessage: GenericChatMessage) => {
    return {
      role: convertRole(genericChatMessage.role),
      content: genericChatMessage.content
    };
  });
}

// Map Foundry's uppercase role enum to the lowercase roles used by the OpenAI API.
function convertRole(role: ChatMessageRole): "system" | "user" | "assistant" {
  switch (role) {
    case "SYSTEM":
      return "system";
    case "USER":
      return "user";
    case "ASSISTANT":
      return "assistant";
    default:
      throw new Error(`Unsupported role: ${role}`);
  }
}
```
You can now test your function by selecting the Functions tab from the bottom toolbar, which will open a preview panel. Select Published, choose your function myChatCompletion, and select the option for providing your input as JSON.
You can test with a message such as:
{ "messages": [ { "role": "USER", "content": "hello world" } ] }
You can use your function natively in AIP Logic. To do so, select the Use LLM board as you normally would, then select the Registered tab in the model dropdown and select the myChatCompletion model.
This feature is currently in beta and may not be available on your enrollment.
You can use your function natively in Pipeline Builder LLM transforms. To do so, select the Use LLM transform as you normally would, then expand Show configurations in the Model section. From the Model type dropdown, select the Registered tab and choose your LLM (in this example, myChatCompletion).