As mentioned on the overview page, your automation maps to an evaluation job. The first time this job runs, it will attempt to check the entire time series for alerts. From then on, it will only check for alerts on new data. This means that if you add an automation to an existing job, historical data will not be processed. In other words, expanding the scope on an existing automation or creating a new automation that writes to an existing alerting object type will only generate alerts on time series data that comes in after you make this change.
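As a mental model, the following is a minimal sketch of that incremental behavior, assuming a hypothetical checkpoint store and evaluation function; none of these names are part of the Foundry API, and the real evaluation job manages its checkpoint internally.

```python
from datetime import datetime

# Hypothetical in-memory checkpoint per job; illustrative only.
_last_evaluated: dict[str, datetime] = {}

def evaluate_job(job_id, series, check):
    """series: list of (timestamp, value) tuples; check: the alert condition."""
    checkpoint = _last_evaluated.get(job_id)
    # First run: no checkpoint, so the entire time series is scanned.
    # Later runs: only points newer than the checkpoint are scanned, which is
    # why historical data is not reprocessed when an automation's scope changes.
    new_points = [(ts, v) for ts, v in series if checkpoint is None or ts > checkpoint]
    alerts = [(ts, v) for ts, v in new_points if check(v)]
    if series:
        _last_evaluated[job_id] = max(ts for ts, _ in series)
    return alerts
```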
Backfilling alerts over historical data is not currently possible. Reach out to Palantir Support for assistance if your use case requires running an automation that maps to an existing job on historical data.
As mentioned on the overview page, your automation maps to an evaluation job. The job execution frequency depends on the alerting type you selected:

For most workflows, we expect the evaluation of these automations to have lower runtime and cost than Foundry Rules because the evaluation is computed incrementally as the time series data updates, whereas Foundry Rules runs against the full time series every time.
The following Quiver cards are supported in time series alerting logic:

If you receive an error about not having a single root object type, you likely used two different objects in your comparison. For example, you may have pulled the Inlet pressure property from Machine 1 and compared it with the Outlet pressure from Machine 2, instead of comparing two properties on the Machine 1 root object. To create a time series alerting automation, you must start from a single root object instance.
Review the requirements for setting up a time series alerting automation to learn why time series alerting automations must be generated from the perspective of a root object.
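The sketch below illustrates the distinction with the Machine example above; the lookup table and function names are purely illustrative stand-ins, not a Foundry API.

```python
# Toy stand-in for reading a time series property from an object instance.
_SERIES = {
    ("machine-1", "inlet_pressure"): [100.0, 140.0, 150.0],
    ("machine-1", "outlet_pressure"): [90.0, 95.0, 92.0],
    ("machine-2", "outlet_pressure"): [10.0, 12.0, 11.0],
}

def get_series(object_id, prop):
    return _SERIES[(object_id, prop)]

# Valid: both properties are read from the same root object instance, so the
# logic can be generated from the perspective of the Machine root object type.
def pressure_drop_alert(machine_id):
    inlet = get_series(machine_id, "inlet_pressure")
    outlet = get_series(machine_id, "outlet_pressure")
    return any(i - o > 50.0 for i, o in zip(inlet, outlet))

# Not valid for alerting: the two properties come from two different objects,
# so there is no single root object to build the automation on.
def cross_machine_alert():
    inlet = get_series("machine-1", "inlet_pressure")
    outlet = get_series("machine-2", "outlet_pressure")
    return any(i - o > 50.0 for i, o in zip(inlet, outlet))
```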
Yes. The logic and conditions are templatized so results can be identified from the perspective of any object with the same object type. By default, the alerting logic will be applied to all objects within the starting root object type. Learn more about applying a filter to limit the scope of your automation.
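The following sketch shows the general idea, assuming a hypothetical templatized condition and a simple in-scope filter; the names are illustrative and not part of the product.

```python
# All object instances of the root object type (illustrative IDs).
machines = ["machine-1", "machine-2", "machine-3"]

# Placeholder for the templatized alert condition, authored against a single
# machine but parameterized over any object of the root object type.
def condition(machine_id):
    return machine_id.endswith("3")

# Default behavior: the logic is evaluated for every object of the root object type.
alerting = [m for m in machines if condition(m)]

# With a filter applied to the automation, only in-scope objects are evaluated.
in_scope = {"machine-1", "machine-3"}
alerting_in_scope = [m for m in machines if m in in_scope and condition(m)]
```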
Yes. However, the automations will not run directly on top of the streaming data but rather on top of the archive dataset. This incurs at least 10 extra minutes of latency since archive jobs run every 10 minutes.
Streaming alerting allows for time series alerts to be run directly on streams, providing low-latency alerting with end-to-end latency on the order of seconds.
An alert that is ongoing will be generated in the output object type. You can identify an ongoing alert by its empty end timestamp.
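For illustration, a minimal sketch of that convention, assuming a hypothetical alert shape whose field names are not the actual output object type's API names:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical shape of an alert object; the field names are illustrative.
@dataclass
class Alert:
    root_object_id: str
    start_timestamp: datetime
    end_timestamp: Optional[datetime] = None  # stays empty while the alert is ongoing

def is_ongoing(alert: Alert) -> bool:
    # An ongoing alert has not been given an end timestamp yet.
    return alert.end_timestamp is None
```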
Historical alerts will not be recomputed. After the logic change happens, new events will be identified using the new logic.
If the new alert logic indicates that a given root object is in a normal state, its open alert will be resolved. If the new alert logic indicates that the object is still in an alerting state, the alert will remain open.
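A rough sketch of that reconciliation rule, assuming a hypothetical dictionary-shaped alert and helper function (neither is a Foundry API):

```python
from datetime import datetime, timezone

def reconcile_open_alert(alert, in_alerting_state):
    """alert: a dict for an open alert (no end timestamp).
    in_alerting_state: the result of evaluating the *new* logic for the alert's root object."""
    if in_alerting_state:
        # The new logic still flags the object, so the alert stays open.
        return alert
    # The new logic considers the object normal, so the alert is resolved.
    return {**alert, "end_timestamp": datetime.now(timezone.utc)}
```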
For detailed steps on configuring your evaluation job, review our guide. Note that job configuration changes apply to all time series alerting automations that write to the same alert object type, as multiple automations can share the same evaluation job.
Streaming alerting may encounter issues due to data quality, job configuration, or upstream ingestion problems. Below are common issues and how to address them:
To ensure alerts remain consistent once they trigger, streaming alerting drops out-of-order points that are too far in the past or future. Verify that your input data is ingested in monotonically increasing timestamp order. You can configure the tolerance for late-arriving data using the Allowed lateness override setting.
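The sketch below shows the general kind of check involved; the exact thresholds and watermark semantics are internal to streaming alerting, so treat the names and logic here as illustrative assumptions only.

```python
from datetime import datetime, timedelta, timezone

# Illustrative stand-in for the "Allowed lateness override" value.
ALLOWED_LATENESS = timedelta(minutes=5)

def accept_point(ts, watermark):
    """watermark: the furthest timestamp already processed for the series."""
    too_late = ts < watermark - ALLOWED_LATENESS                          # too far in the past
    too_far_ahead = ts > datetime.now(timezone.utc) + ALLOWED_LATENESS    # too far in the future
    return not (too_late or too_far_ahead)
```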
Points within the same time window that share a timestamp are resolved non-deterministically. This may account for discrepancies when results are compared with an equivalent Quiver analysis of the same logic.
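As a toy illustration of why duplicate timestamps make results order-dependent (this is not the actual resolution mechanism, just a sketch):

```python
# Two raw points arrive in the same window with an identical timestamp.
points = [("2024-01-01T00:00:00Z", 99.0), ("2024-01-01T00:00:00Z", 101.0)]

# Collapsing by timestamp keeps whichever point happens to be processed last;
# because processing order is not guaranteed in a streaming system, the surviving
# value (and therefore whether a threshold condition fires) can differ between
# runs or from an equivalent Quiver analysis.
deduped = dict(points)
fires = any(value > 100.0 for value in deduped.values())
```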
Streaming alerting performance depends on healthy upstream data ingestion. Use stream monitoring to verify that the streams ingesting your source data are healthy and processing data without issues.
If your streaming alerting job is not running or is failing to evaluate, you will not receive alert events. For more information on monitoring your automation's health and performance, review our documentation on monitoring and observability.
Reduce Ontology polling: Review the Ontology polling configuration section in the additional configurations guide.
Narrow the object set scope: Limit your automation object set scope to the minimum set of objects necessary for monitoring. A smaller object set reduces computational overhead and associated costs. Review the section on modifying the automation scope for guidance on filtering your object set.