All events ingested by a listener are stored in a stream. You can locate this stream by navigating to your listener's Overview page in Data Connection. Once your data resides in a stream, several processing options are available.
Streams can be directly processed in Automate, enabling you to execute an action or function for each inbound event. You can create objects in your ontology, run AIP logic, or use sources in functions to interact with external systems, such as writing data back to the system that sent the event.
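To illustrate the shape of per-event processing, here is a minimal, self-contained Python sketch of the kind of handler logic an Automate effect might invoke for each inbound event. All names here (`ListenerEvent`, `handle_event`, the payload fields) are hypothetical and do not reflect a specific platform API:

```python
import json
from dataclasses import dataclass


@dataclass
class ListenerEvent:
    """Hypothetical shape of an inbound listener event."""
    event_id: str
    payload: dict


def handle_event(event: ListenerEvent) -> None:
    """Illustrative per-event handler.

    In a real function you might create or update an ontology object,
    run AIP logic, or call an external system through a configured source.
    """
    order = event.payload.get("order")
    if order is None:
        return  # skip events that do not carry an order payload
    print(f"Processing order {order['id']} from event {event.event_id}")


# Example usage with a mock event:
raw = '{"order": {"id": "42", "status": "shipped"}}'
handle_event(ListenerEvent(event_id="evt-1", payload=json.loads(raw)))
```

Note that each event is handled independently: the function needs no knowledge of any other event in the stream.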
Processing streaming events with Automate is appropriate when:

- Per-event latency on the order of seconds is acceptable.
- Event throughput is low to moderate.
- Each event can be processed independently, without state shared across events.
- At-least-once processing semantics are acceptable.

This approach is suitable for most listener integrations, providing low-cost, low-maintenance event processing workflows.
You can learn more about processing listener events with Automate in the guide to creating an AI-powered chatbot with listeners.
As a lower-latency alternative, listener event streams can be processed with streaming pipelines. These pipelines support high-throughput streams, stateful event processing, and (optionally) exactly-once event processing. Streaming pipelines can additionally leverage user-defined functions (UDFs) to build powerful real-time event-processing workflows.
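To make stateful event processing concrete, the following framework-free Python sketch keeps a running count per key across events. It is illustrative only: a real streaming pipeline manages state, checkpointing, and delivery guarantees for you, and its UDF interfaces differ from this toy class:

```python
from collections import defaultdict
from typing import Dict, Iterable, Iterator, Tuple


class RunningCountUdf:
    """Toy stateful UDF: tracks how many events each key has produced."""

    def __init__(self) -> None:
        self._counts: Dict[str, int] = defaultdict(int)

    def process(self, key: str) -> int:
        self._counts[key] += 1
        return self._counts[key]


def run(events: Iterable[Tuple[str, dict]],
        udf: RunningCountUdf) -> Iterator[Tuple[str, int, dict]]:
    """Apply the stateful UDF to each (key, payload) event in order."""
    for key, payload in events:
        yield key, udf.process(key), payload


events = [("sensor-a", {"t": 1}), ("sensor-b", {"t": 2}), ("sensor-a", {"t": 3})]
for key, count, payload in run(events, RunningCountUdf()):
    print(key, count, payload)
```

Because each output depends on previously seen events, this logic cannot be expressed as a stateless per-event function, which is what distinguishes this style of processing from the Automate approach above.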
Every few minutes, the listener event stream is archived into a backing dataset. This dataset can be used like any other dataset in the platform, allowing you to build data pipelines and back your ontology.
For use cases that require historical analysis or that do not need real-time processing, use batch pipelines built on this archive dataset.
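As a sketch of what batch processing over the archive might look like in a transforms-python repository, the following computes daily event counts. The dataset paths and the `timestamp` column are assumptions for illustration; substitute the actual path of your listener's archive dataset and its real schema:

```python
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df


@transform_df(
    Output("/Project/pipelines/daily_event_counts"),     # hypothetical output path
    events=Input("/Project/listeners/archive_dataset"),  # hypothetical archive path
)
def compute_daily_counts(events):
    """Count archived listener events per day.

    Assumes the archive exposes a `timestamp` column; adjust to the
    actual schema of your stream's backing dataset.
    """
    return (
        events
        .withColumn("day", F.to_date(F.col("timestamp")))
        .groupBy("day")
        .count()
    )
```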