Event processing with listeners

All events ingested by a listener are stored in a stream. You can locate this stream by navigating to your listener's Overview page in Data Connection. Once your data resides in a stream, several processing options are available.

Stream processing with Automate

Streams can be directly processed in Automate, enabling you to execute an action or function for each inbound event. You can create objects in your ontology, run AIP logic, or use sources in functions to interact with external systems, such as writing data back to the system that sent the event.

Option to process streams in Automate.
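As a sketch of what a per-event function might do, the snippet below parses an event payload and writes an acknowledgement back to the sending system. The payload shape, field names, and endpoint are assumptions for illustration only; in a real function you would reach the external system through a configured source rather than a hard-coded URL.

```python
import json

import requests


def handle_listener_event(event: dict) -> None:
    """Sketch of a per-event handler that Automate could run.

    Assumptions (not from this page): the event value is a JSON string
    containing an "orderId" field, and the source system exposes an
    HTTP endpoint that accepts acknowledgements.
    """
    payload = json.loads(event["value"])

    # Write back to the system that sent the event.
    ack = {"orderId": payload["orderId"], "received": True}
    requests.post(
        "https://source-system.example.com/acknowledgements",  # placeholder URL
        json=ack,
        timeout=10,
    )
```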

Processing streaming events with Automate is appropriate when:

  • The event stream is not high-throughput
  • Your event processing is stateless
  • Latency of a few seconds is acceptable
  • At-least-once processing is sufficient

This approach is suitable for most listener integrations, providing low-cost, low-maintenance event processing workflows.

Streaming pipelines

Listener event streams can be processed with streaming pipelines as a lower-latency alternative. These pipelines support high-throughput streams, stateful event processing, and (optionally) exactly-once event processing. Streaming pipelines can additionally use user-defined functions (UDFs) to build powerful real-time event-processing workflows.

Build streaming pipelines in Pipeline Builder.
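To make "stateful event processing" concrete, here is a plain-Python sketch of the kind of logic a UDF might encapsulate: deduplicating events by key. The class and event shape are illustrative only, not the Pipeline Builder UDF interface, and a production operator would need durable state and an eviction policy.

```python
from typing import Iterable, Iterator


class Deduplicator:
    """Illustrative stateful operator: drop events whose key was already seen.

    Plain Python to show the idea; real deployments need durable state
    and an eviction policy so the seen-key set does not grow without bound.
    """

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def process(self, events: Iterable[dict]) -> Iterator[dict]:
        for event in events:
            key = event["eventId"]  # assumed field name
            if key not in self._seen:
                self._seen.add(key)
                yield event


# Usage: two events with the same eventId collapse to one.
dedup = Deduplicator()
events = [{"eventId": "a", "v": 1}, {"eventId": "a", "v": 1}, {"eventId": "b", "v": 2}]
print(list(dedup.process(events)))  # [{'eventId': 'a', 'v': 1}, {'eventId': 'b', 'v': 2}]
```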

Batch pipelines

Every few minutes, the listener event stream is archived into a backing dataset. This dataset can be used like any other dataset in the platform, allowing you to build data pipelines and back your ontology.

Switch to archive mode on your event stream to access the underlying dataset.

Use batch pipelines for use cases that require historical analysis or that do not need to run in real time.
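For example, a batch transform over the archive dataset might compute daily event counts. This sketch uses the Python transforms API with hypothetical dataset paths and assumes the archived events carry a timestamp column:

```python
import pyspark.sql.functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Project/datasets/daily_event_counts"),      # hypothetical output path
    events=Input("/Project/datasets/listener_archive"),  # hypothetical archive path
)
def daily_event_counts(events):
    # Aggregate archived listener events by day: a typical
    # historical-analysis workload suited to batch pipelines.
    return (
        events
        .withColumn("day", F.to_date("timestamp"))
        .groupBy("day")
        .count()
    )
```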