Audit logs provide a comprehensive record of actions taken in Foundry, enabling security teams to detect threats, investigate incidents, ensure compliance, and maintain accountability across the platform. Audit logs are best understood as a distilled record of all actions taken by users in the platform: a compromise between verbosity and precision, since overly verbose logs contain more information but are more difficult to reason about.
Audit logs in Foundry contain enough information to answer the critical questions for any security investigation or compliance review:
Sometimes, audit logs will contain contextual information about users including Personally Identifiable Information (PII), such as names and email addresses, as well as other potentially sensitive usage data. As such, audit log contents should be considered sensitive and viewed only by persons with the necessary security qualifications.
Audit logs (and associated detail) should generally be consumed and analyzed in a separate purpose-built system for security monitoring (a "security information and event management", or SIEM solution) owned by the customer if one is available. If no such system has been provisioned, Foundry itself is flexible enough for some light SIEM-native workflows to be performed directly in the platform instead.
This documentation explains how to access, consume, and analyze Foundry audit logs:
- audit.2 to audit.3: Guidance for transitioning existing audit.2 analyses to the new audit.3 schema.

Other documentation available includes:
Customers are strongly encouraged to consume and monitor their own audit logs via the mechanisms presented below. All audit log analyses should use the new and improved audit.3 schema to maintain continuity, as we are in the process of fully migrating audit log archival from audit.2 to audit.3 for new audit logs. Audit.3 logs are available via API for use in a SIEM, or can be exported to Foundry for in-platform analysis. Review our documentation on monitoring audit logs for additional guidance.
Foundry provides flexible mechanisms for delivering audit logs to meet diverse security infrastructure and SIEM requirements. The delivery method you choose depends on your organization's existing security tooling and analysis workflows.
Audit.3 logs (recommended for all new implementations):
Audit.3 schema logs offer significant advantages for security operations:
Audit.3 logs can be consumed through API into a SIEM or through an audit export to a Foundry dataset if necessary.
Audit.2 logs (legacy, for historical analysis only):
The historical audit.2 schema logs are compiled, compressed, and moved within about 24 hours to environment-dependent archival storage (for example, an S3 bucket). These logs are suitable for historical analysis only; see the audit.2 to audit.3 migration section below for transition guidance. From archival storage, Foundry can deliver audit.2 logs to customers through an audit export to Foundry.
External SIEMs can ingest audit.3 logs directly from storage using Palantir's audit API endpoints. This approach is preferred for organizations with dedicated security operations centers and established SIEM platforms, as it provides the following:
SIEM ingestion is an advantage of audit.3 logs over the historical audit.2 logs, which do not have public APIs that allow direct ingestion by external SIEMs (audit.2 logs must first be exported to a Foundry dataset, then exported from there to a SIEM).
To access the audit API endpoints, you must authenticate your SIEM requests. We recommend the following approaches, in order of general preference. However, the option best for you depends on your desired security model.
Note that your API requests must include an auth header whose token carries the audit-export:view gatekeeper operation on the organization for which you are requesting logs. Under Organization permissions in Control Panel, grant the client/user whose token you will use a role that includes the Create datasets with audit logs for the organization workflow for each organization whose logs you want. Review the permissions documentation for more information.
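For illustration, a minimal authenticated request from a SIEM ingestion job might look like the sketch below. The Bearer auth header and the audit-export:view requirement come from the guidance above; the endpoint path, query parameter, and response shape are placeholder assumptions, so consult the audit API reference for the actual values.

```python
# Sketch of an authenticated audit API call; paths and parameters are
# illustrative placeholders, not the documented API surface.
import requests

FOUNDRY_HOST = "https://your-stack.palantirfoundry.com"  # your environment
AUDIT_ENDPOINT = f"{FOUNDRY_HOST}/audit/api/logs"        # hypothetical path
TOKEN = "<token>"  # must belong to a client/user granted audit-export:view

response = requests.get(
    AUDIT_ENDPOINT,
    headers={"Authorization": f"Bearer {TOKEN}"},  # required auth header
    params={"organization": "<org-rid>"},          # hypothetical parameter
    timeout=30,
)
response.raise_for_status()
for log_line in response.json().get("data", []):   # hypothetical response shape
    print(log_line)  # hand each log line to your SIEM pipeline here
```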
Third-party application through Developer Console (recommended):
The most secure and maintainable approach is to create a third-party application in the Developer Console application with the appropriate audit log permissions. The Developer Console allows you to create and manage applications that talk to public APIs using the Ontology SDK and OAuth.
This method provides:
Review the Developer Console documentation and third-party applications documentation for setup instructions.
OAuth2 client credentials:
If you cannot use a Developer Console application, OAuth2 client credentials provide a secure programmatic authentication method suitable for automated SIEM ingestion.
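A minimal sketch of that flow, assuming a standard OAuth2 token endpoint (the URL shown is an assumption; use the token endpoint configured for your environment):

```python
# Sketch of the OAuth2 client credentials flow for automated SIEM ingestion.
import requests

TOKEN_URL = "https://your-stack.palantirfoundry.com/multipass/api/oauth2/token"  # assumed URL

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # use as the Bearer token in audit API calls
```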
User tokens (not recommended):
Administrator user tokens should be avoided for production SIEM integrations as they come with the following limitations:
Both audit.2 and audit.3 logs can be exported, per-organization, directly into a Foundry dataset through the audit logs tooling in Control Panel. This approach is suitable for organizations that do not currently have SIEM tooling and still need to analyze their audit logs.
Once audit log data has landed in a Foundry dataset, it can be analyzed directly in Foundry. Pipeline Builder works well for prototyping and quick analyses, while Code Repositories is better suited to long-term analysis at the immense scale of audit data. Audit log datasets may be too large to analyze effectively in Contour without filtering them first. You may also choose to export from Foundry to an external SIEM through Data Connection (although audit.3 logs should be consumed directly through the public APIs if a SIEM is the ultimate destination for audit log analysis).
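For example, a Code Repositories transform might pre-filter an audit.3 export dataset down to a recent window and a single category before deeper analysis. A minimal sketch, with placeholder dataset paths:

```python
# Sketch of a Code Repositories transform that narrows an audit.3 export
# dataset before analysis; dataset paths are placeholders for your own.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/Org/security/audit_recent_exports"),  # placeholder path
    logs=Input("/Org/security/audit_v3_export"),   # placeholder path
)
def compute(logs):
    return (
        logs
        .filter(F.col("time") >= F.lit("2025-10-01"))                 # recent window
        .filter(F.array_contains(F.col("categories"), "dataExport"))  # one category
    )
```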
To export audit logs, you will need the audit-export:orchestrate-v3 operation (for audit.3) on the target organization(s). This can be granted with the Organization administrator role in Control Panel, configurable from the Organization permissions tab. Review the organization permissions documentation for more details.
To set up audit log exports to a Foundry dataset, follow the steps below:

New exports use the audit.3 schema (October 2025).

Export dataset updates:
- New exports of audit.2 datasets may produce empty append transactions in the first several hours (or longer). This is expected behavior as the pipeline processes the full backlog of audit logs. This delay is not present for new exports of audit.3 datasets.
- Large append transactions may occur for audit.3 datasets, and up to 10 GiB for audit.2 datasets. Such large appends are generally only needed when an audit log dataset is first created.

Export dataset disablement: To disable an export, open the audit log dataset and select File > Move to Trash, or manually move the dataset to another project.
All logs that Palantir products produce are structured logs. This means that they have a specific schema that they follow, which can be relied on by downstream systems for automated analysis and alerting.
Palantir audit logs are currently delivered in both the historical audit.2 schema and in the new and improved audit.3 schema. Audit.2 logs of new events will soon cease to be available for export; only historical logs will be available in that schema, and all new events will be exportable only in the audit.3 schema.
Within both the audit.2 and audit.3 schemas, audit logs will vary depending on the product that produces the log. This is because each product is reasoning about a different domain, and thus will have different concerns that it needs to describe. This variance is more noticeable in audit.2, as will be explained below.
Product-specific information is primarily captured within the requestFields and resultFields for audit.3 logs (or request_params and result_params for audit.2 logs). The contents of these fields will change shape depending on both the product doing the logging and the event being logged.
Palantir logs use a concept called audit log categories to make logs easier to understand with little product-specific knowledge. Rather than needing to track hundreds of service-specific event names, categories let security analysts focus on high-level actions like data loading, permission changes, or authentication attempts, regardless of which product or feature generated the log. This abstraction enables analysts to build monitoring queries that work across all Foundry services without needing to understand implementation details. For example, filtering for dataExport captures all data export events regardless of what product was used to export the data.
With audit log categories, audit logs are described as a union of auditable events. Audit log categories are based on a set of core concepts and divided into categories that describe actions on those concepts, such as the following:
| Category | Description | Example use case |
|---|---|---|
authenticationCheck | Checks authentication status via a programmatic or manual authentication event, such as token validation. | Detect token validation patterns that suggest credential stuffing. |
dataCreate | Indicates the addition of some new entry of data into the platform where it did not exist prior. This event may be reflected as a dataPromote in a separate service if it is logged in the landing service. | Track data creation patterns and enforce governance policies. |
dataDelete | Related to the deletion of data, independent of the granularity of that deletion. | Alert on deletion of critical or protected resources. |
dataExport | Export of data from the platform. Use when data leaves the platform, for example downloads to a system external to Palantir or CSV file downloads. If data was exported to another Palantir system, use the dataPromote category. | Alert on large exports, exports of sensitive data, or exports outside business hours.
dataImport | Imports to the platform. Unlike dataPromote, dataImport refers only to data being ingested from outside the platform. This means that a dataImport in one service could show up as a dataPromote in a separate service. | Monitor for malicious file uploads or policy violations. |
dataLoad | Refers to the loading of data to be returned to a user. For purely back-end loads, use internal. | Establish baseline normal access patterns and detect anomalous bulk data access. |
tokenGeneration | Action that leads to generation of a new token. | Detect unusual token creation that could indicate preparation for bulk data access. |
userLogin | Login events of users. | Monitor for failed login attempts, unusual login times, or geographic anomalies. |
userLogout | Logout events of users. | Track session durations and identify abnormally long sessions. |
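As an illustration of category-based monitoring (the userLogin use case above), the sketch below counts failed logins per user. It assumes audit_logs is a PySpark DataFrame over audit.3 logs and that failed authentications surface with a result value such as UNAUTHORIZED; the alerting threshold is arbitrary.

```python
from pyspark.sql import functions as F

# Flag users with repeated failed logins: categories is a set of strings and
# result indicates success or the failure type (for example, UNAUTHORIZED).
failed_logins = (
    audit_logs
    .filter(F.array_contains(F.col("categories"), "userLogin"))
    .filter(F.col("result") == "UNAUTHORIZED")  # assumed result representation
    .groupBy("uid")
    .count()
    .filter(F.col("count") > 5)  # illustrative alerting threshold
)
```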
Audit log categories have also gone through a versioning change, from a looser form within audit.2 logs to a stricter and richer form within audit.3 logs. In audit.3, each log must specify at least one category, and each category defines exactly which request and result fields will be present, making automated analysis far more reliable.
Refer to our documented audit log categories for a detailed list of available categories with field specifications. Also, review the monitoring security audit logs documentation for additional guidance on using categories.
Audit logs are written to a single log archive per environment. When audit logs are processed through the delivery pipeline, the User ID fields (uid and otherUids in the schema below) are extracted, and the users are mapped to their corresponding organizations.
An audit export orchestrated for a given organization is limited to audit logs attributed to that organization. Actions taken solely by service (non-human) users will typically not be attributed to any organization, as these users are not organization members. As a special case, service users for third-party applications that use client credentials grants and are used only by the registering organization do generate audit logs attributed to that organization.
Any new log exports or analyses should use audit.3 logs rather than audit.2 logs.
Audit.3 logs are built upon a new log schema that provides a number of advantages over audit.2 logs. The key benefits to audit log consumers of the new schema and associated delivery pipeline are the following:
- Standardized audit categories that describe events across products (for example, dataExport).
- Top-level fields such as product allow easier filtering by product name instead of requiring complex event name mapping filters to focus analysis on a single product's usage.

Audit.3 logs are produced with the following guarantees in mind:
- Each log specifies at least one category, and each category prescribes its fields; for example, dataLoad describes the precise resources that are loaded.
- All logs share the consistent structure of the audit.3 schema. For example, all named resources are present at the top level, as well as within the request and result fields.

These guarantees mean that for any particular log it is possible to tell (1) what auditable event created it, and (2) exactly what fields it contains. These guarantees are product-agnostic, enabling security analysts to build monitoring queries that work across all Foundry services.
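These guarantees make cross-product, resource-centric queries practical. A minimal sketch, assuming audit_logs is a PySpark DataFrame over audit.3 logs and that the resource of interest appears in the top-level entities field (defined in the schema below) as a string identifier:

```python
from pyspark.sql import functions as F

# Because named resources are guaranteed to appear in the top-level entities
# field, a single filter finds every event touching a resource, regardless of
# which product produced the log.
RESOURCE_RID = "ri.foundry.main.dataset.example"  # placeholder identifier

events_for_resource = audit_logs.filter(
    F.array_contains(F.col("entities"), RESOURCE_RID)
)
```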
The audit.3 log schema is provided below:
| Field | Type | Description |
|---|---|---|
categories | set<string> | All audit categories produced by this audit event. |
entities | list<any> | All entities (for example, resources) present in the request and result fields of this log. |
environment | optional<string> | The environment that produced this log. |
eventId | uuid | The unique identifier for an auditable event. This can be used to group log lines that are part of the same event. For example, the same eventId will be logged in lines emitted at the start and end of a large binary response streamed to the consumer. |
host | string | The host that produced this log. |
logEntryId | uuid | The unique identifier for this audit log line, not repeated across any other log line in the system. Note that some log lines may be duplicated during ingestion into Foundry, and there may be several rows with the same logEntryId. Rows with the same logEntryId are duplicates and can be ignored. |
name | string | The name of the audit event, generally following a (product name)_(endpoint name) structure in ALL CAPS, snake-cased. For example: DATA_PROXY_SERVICE_GENERATED_GET_DATASET_AS_CSV2. |
orgId | optional<string> | The organization to which the uid belongs, if available. |
origin | optional<string> | The best-effort identifier of the originating machine. For example, an IP address, a Kubernetes node identifier, or similar. This value can be spoofed. |
origins | list<string> | The origins of the network request, determined by request headers. This value can be spoofed. To identify audit logs for user-initiated requests, filter to audit logs that have non-empty origins. Audit logs with empty origins correspond to service-initiated requests made by the Palantir backend while fulfilling user-initiated requests. If an audit log with non-empty origins has categories including apiGatewayRequest, then the associated request was fulfilled by an API gateway. To find audit logs for the requests made by the API gateway to fulfill the user-initiated request, filter to logs with the same traceId that have a userAgent starting with the service in this audit log. |
product | string | The product that produced this log. |
producerType | AuditProducer | How this audit log was produced; for example, from a backend (SERVER) or frontend (CLIENT). |
productVersion | string | The version of the product that produced this log. |
requestFields | map<string, any> | The parameters known at method invocation time. Entries in the request and result fields will be dependent on the categories field defined above. |
result | AuditResult | Indicates whether the request was successful or the type of failure; for example, ERROR or UNAUTHORIZED. |
resultFields | map<string, any> | Information derived within a method, commonly parts of the return value. |
sequenceId | uuid | A best-effort ordering field for events that share the same eventId. |
service | optional<string> | The service that produced this log. |
sid | optional<SessionId> | The session ID, if available. |
sourceOrigin | optional<string> | The origin of the network request, determined by the TCP stack. |
stack | optional<string> | The stack on which this log was generated. |
time | datetime | The RFC3339Nano UTC datetime string, for example 2025-11-13T23:20:24.180Z. |
tokenId | optional<TokenId> | The API token ID, if available. |
traceId | optional<TraceId> | The Zipkin trace ID, if available. |
uid | optional<UserId> | The user ID, if available. This is the most downstream caller. |
userAgent | optional<string> | The user agent of the user that originated this log. |
users | set<ContextualizedUser> | All users present in this audit log, contextualized. Each ContextualizedUser identifies a user together with additional context, such as groups and realms. |
We generally recommend using the immutable product field to filter audit logs when analyzing particular applications. The service field may also be useful, as it allows filtering between different instances of the same product, which can help in understanding a specific incident more granularly.
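Putting this advice together with the schema above, a typical first pass deduplicates rows by logEntryId (rows sharing a logEntryId are duplicates, as noted in the schema) and then narrows to one product. A sketch, assuming audit_logs is a PySpark DataFrame and "contour" is a placeholder product name:

```python
from pyspark.sql import functions as F

# Deduplicate, then narrow analysis to a single product's usage.
contour_events = (
    audit_logs
    .dropDuplicates(["logEntryId"])          # duplicate rows can be ignored
    .filter(F.col("product") == "contour")   # placeholder product name
)
```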
Audit.2 logs should only be used for analyses of historical periods that predate audit.3 logging (before October 2025), as we will soon cease to archive new events in the audit.2 schema. Audit.2 logs have no inter-product guarantees about the shape of the request or result parameters. As such, reasoning about audit.2 logs must typically be performed on a product-by-product basis.
Audit.2 logs may present an audit category within them that can be useful for narrowing a search. However, this category does not contain further information or prescribe the rest of the contents of the audit log. Additionally, audit.2 logs are not guaranteed to contain an audit category. If present, categories will be included in either the _category or _categories field within request_params.
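If you do rely on audit.2 categories, handle both possible field names and their absence. A sketch, assuming audit2_logs is a PySpark DataFrame over an audit.2 export dataset:

```python
from pyspark.sql import functions as F

# audit.2 categories, when present, live inside request_params under either
# _category or _categories; extract both so downstream filters can use them.
audit2_with_categories = (
    audit2_logs
    .withColumn("category", F.col("request_params").getItem("_category"))
    .withColumn("categories", F.col("request_params").getItem("_categories"))
)

# Example: best-effort narrowing to rows that mention data export.
maybe_exports = audit2_with_categories.filter(
    F.col("category").cast("string").contains("dataExport")
    | F.col("categories").cast("string").contains("dataExport")
)
```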
The schema of audit.2 log export datasets is provided below.
| Field | Type | Description |
|---|---|---|
filename | string | Name of the compressed .log.gz file from the log archive. |
ip | string | Best-effort identifier of the originating IP address. |
name | string | Name of the audit event, such as PUT_FILE. |
request_params | map<string, any> | The parameters known at method invocation time. |
result | AuditResult | The result of the event (success, failure, and so on). |
result_params | map<string, any> | Information derived within a method, commonly parts of the return value. |
sid | optional<SessionId> | Session ID (if available). |
time | datetime | RFC3339Nano UTC datetime string, for example: 2025-11-13T23:20:24.180Z. |
token_id | optional<TokenId> | API token ID (if available). |
trace_id | optional<TraceId> | Zipkin trace ID (if available). |
type | string | Specifies the audit schema version: "audit.2" |
uid | optional<UserId> | User ID (if available); this is the most downstream caller. |
Organizations currently using audit.2 logs for security monitoring or compliance must migrate their analyses to the audit.3 schema. This section provides guidance for transitioning your audit log workflows to take advantage of the improvements contained in audit.3.
The audit.3 schema represents a fundamental architectural change from audit.2, not just a version update. Key differences that affect migration are described in the sections below.
- Field naming: request_params and result_params in audit.2 are now named requestFields and resultFields in audit.3.
- Field contents: requestFields and resultFields are generally completely different from their audit.2 equivalents (request_params and result_params). Information found in request_params in audit.2 may now be in resultFields in audit.3, or vice versa. Check both requestFields and resultFields when looking for specific information in audit.3.
- Schema identification: the type top level field identifies which schema produced the log: "audit.2" or "audit.3".
- Event names: event names (the name field) can be different between the schemas, which can lead to mismatches when parsing depends specifically on event names. For example: DATA_PROXY_SERVICE_GENERATED_GET_DATASET_AS_CSV2. While there is no comprehensive mapping of audit.2 to audit.3 event names, most audit.2 log event names can be straightforwardly mapped to audit.3.
- Categories: category usage is optional in audit.2. In audit.3, there is a top level categories field with enforced, standardized values.
- New top-level fields: product, service, entities, and users are available in audit.3 for easier filtering. Additional user context is available in audit.3, including groups and realms.
- Coverage: Audit.3 only captures events from when it is enabled, going forward (as of October 2025). Historical events will remain only in the audit.2 log schema.

If you have existing analyses built on audit.2 logs, your migration approach depends on where you currently perform analysis. Follow the steps in the section below that matches your analysis approach.
We strongly recommend migrating to direct API ingestion rather than continuing to use Foundry export datasets. This approach provides:
- Direct ingestion through public APIs, which is not available for audit.2.

Migration steps:
1. Update your SIEM parsing logic for the audit.3 schema structure (see the section below).
2. Run audit.3 API ingestion in parallel with your existing audit.2 ingestion during a validation period.
3. Once validated, disable the audit.2 Foundry export and SIEM ingestion.

If you currently analyze audit logs within Foundry:

1. Create a new audit.3 export dataset in Control Panel following the export setup instructions.
2. Keep the existing audit.2 export dataset running during the transition period.
3. Review your existing analyses of audit.2 data in Foundry to understand what analyses are deprecated and which need to be migrated.

Rather than migrating analyses by every event name, refactor your analyses to use the standardized audit categories instead of individual product event names:
Old approach (audit.2):
```python
from pyspark.sql.functions import col

# Fragile: depends on specific event names that may change
export_events = audit_logs.filter(
    col("name").isin(["EXPORT_DATASET", "DOWNLOAD_FILE", "CREATE_EXTERNAL_CONNECTION"])
)
```
New approach (audit.3):
```python
from pyspark.sql.functions import array_contains, col

# Robust: uses standardized categories (categories is an array column,
# so array_contains is the appropriate membership test)
export_events = audit_logs.filter(
    array_contains(col("categories"), "dataExport")
)
```
This category-based approach provides several advantages:
- New products and endpoints are covered automatically; any service that exports data will emit the dataExport category.

During the concurrent logging period:
- Run your analyses against both audit.2 and audit.3 data sources in parallel.
- Verify that your audit.3 analyses capture expected events.
- Investigate any discrepancies between results from the audit.2 and audit.3 data.

Once you have validated that your audit.3 analyses provide equivalent or better coverage:
- Switch your production analyses to the audit.3 API or dataset.
- Disable the audit.2 export dataset (if using Foundry exports) by moving it to the Trash or a different project.
- Disable audit.2 ingestion from your SIEM (if using an external SIEM).

Historical audit.2 logs will remain accessible even after you stop generating new audit.2 logs. Analyses or investigations looking back in time (preceding October 2025) will require querying both audit.2 (historical) and audit.3 (current) logs separately, as audit.3 logs are not available preceding October 2025.

Some event names from audit.2 may not have direct equivalents in audit.3. This typically occurs when:
- The event was renamed, consolidated, or otherwise restructured in audit.3.

Solution
- Search by standardized category rather than by the event names used in audit.2 logs.
- Use the traceId field from the audit.2 log to find related audit logs in the audit.3 dataset that may be relevant.

The structure of requestFields and resultFields can be complex and product-specific, differing between audit.2 and audit.3.
Solution
- If your analysis parses request_params or result_params in audit.2 and this level of granular detail is important in your analysis, you must refactor this logic due to the new field structures in audit.3. Always check both requestFields and resultFields in audit.3, as information may have moved between the request and result fields compared to audit.2.
- Prefer the standardized top-level fields (entities, users, categories, and so on) over parsing request/result fields when possible.

You need to analyze a period that spans both historical audit.2 (before October 2025) and current audit.3 data.
Solution
- Query the historical period from the archived audit.2 logs.
- Combine results across the audit.2 and audit.3 datasets.
- Use the type field to apply schema-specific parsing logic when needed (for example, categories for filtering in audit.3, and name/result_params in audit.2).
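A sketch of this pattern, assuming audit2_logs and audit3_logs are DataFrames over the respective export datasets and the goal is tracking data export events across the October 2025 boundary (the audit.2 event names shown are illustrative):

```python
from pyspark.sql import functions as F

# Apply schema-specific filtering, normalize to a shared shape, then union so
# one analysis can span both schemas.
old_exports = (
    audit2_logs
    .filter(F.col("name").isin(["EXPORT_DATASET", "DOWNLOAD_FILE"]))  # name-based (audit.2)
    .select("time", "uid", "name", F.lit("audit.2").alias("schema"))
)
new_exports = (
    audit3_logs
    .filter(F.array_contains(F.col("categories"), "dataExport"))      # category-based (audit.3)
    .select("time", "uid", "name", F.lit("audit.3").alias("schema"))
)
all_exports = old_exports.unionByName(new_exports)
```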