Manpreet Singh

Understanding Event Time and Stream Processing in Kafka Streams

What is Event Time?

Event time refers to the timestamp when an event actually occurred, as opposed to when it was processed by a system. In a distributed architecture, events may be generated at different times and places, which means they may not always arrive in the exact order they were produced. In event-driven systems like Kafka Streams, time-based results are only accurate if each event is attributed to the time it actually occurred, regardless of when it arrives.

For example:

  • Imagine a sensor in an IoT device that sends temperature readings at 9:00 AM, 9:05 AM, and 9:10 AM. The event times would be the times these readings were taken.
  • Due to network delays or congestion, these events might arrive in Kafka Streams at 9:07 AM, 9:12 AM, and 9:15 AM, respectively.

Here, the event time is when the event actually happened (9:00, 9:05, and 9:10), while the processing time is when Kafka Streams processes the event (9:07, 9:12, and 9:15). This distinction is crucial when aggregating or analyzing time-based data.
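
The examples later in this post assume each reading carries its own timestamp inside the payload. Here is a minimal sketch of such an event as a plain Java class; the class name and the eventTime field are assumptions carried through the later snippets:

public class TemperatureReading {
    public String sensorId;
    public double temperature;
    // When the measurement was taken (epoch milliseconds), not when it reached Kafka.
    public long eventTime;
}

Serialized as JSON, a reading might look like {"sensorId": "sensor-1", "temperature": 21.5, "eventTime": ...}, where eventTime is the epoch-millisecond timestamp of, say, the 9:00 AM reading.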

Why Event Time Matters in Stream Processing

Stream processing applications often involve aggregating events over a specific time window (e.g., every 5 minutes). If the application uses the wrong time (e.g., the processing time instead of event time), it can lead to skewed results.

For example, imagine you're calculating the average temperature from IoT sensors every 5 minutes. If you rely on processing time and some events arrive late, they might be excluded from their respective time window, resulting in an inaccurate average.

This is where event time comes into play. Kafka Streams processes events based on their event time to ensure accurate time-based computations, even when events arrive late or out of order.

How Kafka Streams Handles Event Time

Kafka Streams is a robust stream processing library that can process records based on event time. It uses timestamps from the events themselves, instead of the time they are processed. Let’s see how it works:

1. Timestamp Extractor

Kafka Streams uses a TimestampExtractor interface to determine the event time. By default, Kafka Streams will use the timestamp embedded in the Kafka record (if available). However, in many cases, you’ll want to customize this behavior, especially when event timestamps are stored in the event payload itself.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class CustomTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        try {
            // Assuming the timestamp is stored in the value as a JSON field named "eventTime"
            JsonNode jsonNode = new ObjectMapper().readTree((String) record.value());
            return jsonNode.get("eventTime").asLong();
        } catch (Exception e) {
            return record.timestamp(); // fall back to the record's embedded timestamp
        }
    }
}

You can register the custom timestamp extractor in your Streams configuration:

Properties properties = new Properties();
properties.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, CustomTimestampExtractor.class);
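
For context, here is a minimal sketch of how that property fits into a complete Streams application; the application id, broker address, and shutdown hook are illustrative placeholders:

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class EventTimeApp {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-time-demo");   // placeholder application id
        properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        properties.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, CustomTimestampExtractor.class);

        StreamsBuilder builder = new StreamsBuilder();
        // ... define the topology on builder (see the windowing examples below) ...

        KafkaStreams streams = new KafkaStreams(builder.build(), properties);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}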

2. Time-based Operations in Kafka Streams

Kafka Streams provides several operations that rely on event time, such as windowed operations. Let's consider windowing, where you group events into specific time windows for processing.

There are different window types:

  • Tumbling windows: Fixed-size, non-overlapping windows (e.g., every 5 minutes).
  • Hopping windows: Fixed-size, overlapping windows (e.g., a 5-minute window that advances every minute).
  • Session windows: Dynamic windows based on periods of activity.

For example, to group events by a tumbling window based on event time:

KStream<String, Event> stream = builder.stream("events");

KTable<Windowed<String>, Long> countByEventTime = stream
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5))) // Tumbling window of 5 minutes
    .count();

In this example, each event is assigned to a 5-minute window based on its event time rather than its arrival time, so a late-arriving event is still counted toward the window in which it actually occurred (subject to the grace period discussed next).
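
The same pattern applies to the other window types listed above. Here is a brief sketch of hopping and session windows over the same stream; the one-minute advance and the five-minute inactivity gap are illustrative values:

// Hopping window: 5-minute windows that advance every minute, so one event
// can fall into several overlapping windows.
KTable<Windowed<String>, Long> countByHoppingWindow = stream
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).advanceBy(Duration.ofMinutes(1)))
    .count();

// Session window: bounded by gaps of inactivity rather than a fixed size;
// a new session starts after 5 minutes without events for a key.
KTable<Windowed<String>, Long> countBySession = stream
    .groupByKey()
    .windowedBy(SessionWindows.with(Duration.ofMinutes(5)))
    .count();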

3. Handling Out-of-Order Events and Late Arrivals

Kafka Streams allows you to configure how late-arriving events are handled using the concept of grace periods. Late events can still be included in their correct window as long as they arrive within the grace period.

TimeWindows.of(Duration.ofMinutes(5))
    .grace(Duration.ofMinutes(1)); // Accept late events up to 1 minute after the window ends

In this case, an event that arrives up to 1 minute after its 5-minute window has ended is still counted toward that window; only once the grace period expires is the window closed to further updates.
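
If you only need a single, final result per window, Kafka Streams' suppress operator can hold back intermediate updates until the window (plus its grace period) has closed. A short sketch building on the counting example above:

KTable<Windowed<String>, Long> finalCounts = stream
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)))
    .count()
    // Emit each window's count only once the window plus grace period has closed.
    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));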

The Impact of Event Time on State and Aggregations

When Kafka Streams uses event time for processing, it tracks how far event time has advanced so that each record is applied to the window it belongs to, even when records arrive out of order. This becomes particularly important when doing stateful operations like aggregations.

For instance, if you’re counting the number of events in 5-minute windows, Kafka Streams will store and manage the state (i.e., the counts) based on event time. Even if events arrive out of order, Kafka Streams will correctly update the counts for the corresponding time window.
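
As a concrete illustration, here is a hedged sketch of a windowed average temperature. It assumes a readings stream keyed by sensor id with TemperatureReading values (as sketched earlier), a small CountAndSum accumulator, and a countAndSumSerde for the state store; none of these names come from the Kafka Streams API itself:

// Hypothetical accumulator holding the running count and sum for one window.
public class CountAndSum {
    public long count;
    public double sum;
}

KTable<Windowed<String>, Double> avgTemperature = readings
    .groupByKey()
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
    .aggregate(
        CountAndSum::new,                                    // fresh state for each new window
        (sensorId, reading, agg) -> {                        // fold each reading into its window's state
            agg.count += 1;
            agg.sum += reading.temperature;
            return agg;
        },
        Materialized.with(Serdes.String(), countAndSumSerde) // serdes for the windowed state store
    )
    .mapValues(agg -> agg.sum / agg.count);                  // final average per sensor per window

Because window assignment uses event time, a reading that arrives late (within the grace period) is folded into the window in which it was actually measured, and that window's average is updated accordingly.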

Conclusion

Event time is the cornerstone of accurate stream processing in Kafka Streams. It ensures that your application produces correct results even when events arrive late or out of order. By leveraging Timestamp Extractors, windowed operations, and configuring appropriate grace periods, Kafka Streams empowers developers to build robust stream processing applications that handle real-world data complexities.

Understanding and working with event time in Kafka Streams allows you to:

  • Build time-sensitive applications.
  • Ensure correctness when aggregating data.
  • Handle late or out-of-order events gracefully.

Mastering event time is key to unlocking the full potential of stream processing. With Kafka Streams, you have the tools to do just that, ensuring that your applications can process streams reliably and efficiently.
