
How to design Kafka events and event streams

Sion Smith 21 January 2025

Events are records of significant moments in a system and are essential to decentralised computing and modular service architectures. How events are structured influences both present operations and future expandability. This post examines the crucial elements of event design, including how to decide what an event should contain, the contrast between internal and external information, and recommended practices for building resilient event streams. The four aspects of event design are:

  1. Fact vs. delta events: concerns event composition, capturing the entire state of an entity versus only the fields that have changed.
  2. Normalisation vs. denormalisation: concerns decisions about relationships between events, and the trade-offs between normalised and denormalised event streams.
  3. Single vs. multiple event types per topic: addresses the trade-offs between many topics each carrying a single event type, and one topic carrying several event types.
  4. Discrete vs. continuous event flows: covers the connections between events and how they are consumed by operational processes.

[Figure: Kafka events]

2. Fact vs Delta event types

Let’s delve into the nuances of fact versus delta event types, which are crucial for designing event-driven architectures.

2.1:  Fact events 

Fact events record the entire state of an entity at a particular moment. They offer a complete picture of all pertinent fields and values, akin to a row in a database table. For example, a fact event for a shopping basket might include the basket identifier, the full list of items (with product codes and quantities), and the chosen delivery option. This approach, commonly known as event-carried state transfer, enables consumers to obtain the full state without piecing it together from multiple changes. Fact events are advantageous for streamlining state transfer, particularly in contexts where maintaining a thorough and current state is critical, such as annual financial statements or sensor readings.
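To make that shape concrete, here is a minimal sketch of such a fact event as a Java record; the field names are illustrative rather than taken from any real schema:

```java
import java.util.List;

// Illustrative fact event: a complete snapshot of the basket at one moment.
// All field names here are assumptions for the sketch, not a real schema.
public record BasketFact(
        String basketId,
        List<BasketItem> items,   // the full item list, not just what changed
        String deliveryOption) {

    public record BasketItem(String productCode, int quantity) {}
}
```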

[Figure: Kafka producers and consumers]

2.2:  Delta events

Delta events, in contrast, record the changes between states. They typically describe what was modified, why it was modified, and the particulars of the change. For instance, a product_included_in_basket delta event would carry the basket identifier, product code, and quantity added, but not the entire state of the basket. Delta events are especially useful for capturing and reacting to specific changes within a system. They enable fine-grained monitoring and can be more efficient for frequent updates, but they require consumers to maintain or derive the complete state by processing multiple delta events.
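A corresponding delta event might look like the following sketch; again, the field names are illustrative:

```java
// Illustrative delta event: only the change is carried, so a consumer must
// already hold (or rebuild) the rest of the basket state.
public record ProductIncludedInBasket(
        String basketId,
        String productCode,
        int quantityAdded) {}
```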

[Figure: Difference between a stream and a table]

2.3:  Summary

There are trade-offs between these approaches. Fact events, while thorough, can be larger and may incur higher data transmission and storage costs, particularly if they include both the prior and current state. They are more straightforward for consumers, who don’t need to reconstruct the state from multiple events. However, large and frequent fact events can affect system efficiency and network utilisation.

Delta events, conversely, offer detailed context for changes and are generally more compact. They are well suited to scenarios where only the changes matter, but they can be intricate to manage because consumers must consolidate and process multiple events to rebuild the current state. This can lead to challenges with synchronisation and consistency, especially as the data structure evolves.

There are also hybrid events, which merge elements of both fact and delta events. For instance, a hybrid event might include a complete snapshot of the state along with a justification for the change. While not as prevalent, hybrid events provide a way to deliver both the current state and the context of changes in a single event.
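A hybrid event might be sketched like this, pairing a full snapshot with the reason for the change; the shape and field names are illustrative:

```java
import java.util.List;

// Illustrative hybrid event: the full current state (fact part) plus the
// reason the state changed (delta-style context).
public record BasketChanged(
        String basketId,
        List<BasketItem> items,   // complete snapshot of the basket
        String changeReason) {    // e.g. "PRODUCT_ADDED", "DELIVERY_UPDATED"

    public record BasketItem(String productCode, int quantity) {}
}
```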

Ultimately, the selection between fact and delta events depends on the specific requirements of the system and its consumers. Fact events are ideal for conveying thorough state information, making them suitable for sharing complete data with minimal consumer-side processing. Delta events are better for monitoring specific changes and responding to transitions, though they require more complex state management by consumers.

In conclusion, fact events are best for scenarios requiring complete and immediate state information, while delta events are suited to monitoring and responding to specific changes. Hybrid events offer a combined approach, integrating both fact and delta characteristics to provide flexibility in event modelling.

| | Objective: obtain the current state | Objective: react to a specific action or change |
|---|---|---|
| Fact | Good choice. E.g. a shopping cart fact contains a full description of the current state of the cart. | Potential choice. Positive: the consumer can detect any change to any value in the fact event. Negative: consumers must store facts to detect the change between them, and the reason the change occurred may not be represented. |
| Delta | Bad choice. Clients must replicate logic to build up the current state from multiple delta events. | Good choice. Suits composing internal state via event sourcing, notifying external consumers of selected events (e.g. a low inventory alert), and composing a custom view of the state from the events (Command Query Responsibility Segregation, CQRS). |
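To illustrate the consumer-side work the table alludes to, here is a minimal sketch that rebuilds basket state by folding the illustrative delta events from earlier:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of consumer-side state reconstruction from delta events. In a real
// application this logic would run for every record polled from the topic.
public class BasketStateBuilder {

    // productCode -> accumulated quantity for a single basket
    private final Map<String, Integer> quantities = new HashMap<>();

    public void apply(ProductIncludedInBasket delta) {
        quantities.merge(delta.productCode(), delta.quantityAdded(), Integer::sum);
    }

    public Map<String, Integer> currentState() {
        return Map.copyOf(quantities);
    }
}
```

Note that the builder must see every delta, in order, to arrive at the correct state, which is exactly the synchronisation burden described above.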

3. Normalised vs Denormalised

Let’s delve into the differences between normalised and denormalised event modelling, which are crucial when designing event-driven architectures.

3.1:  Normalised

Normalised event streams follow a relational database approach, organising information into separate entities with foreign-key relationships between them. Although this preserves data integrity and minimises redundancy, it can make event streams intricate and less efficient to consume. Consumers must often resolve the foreign-key relationships and perform joins, which can be computationally demanding and awkward in streaming environments. This complexity can affect performance and adaptability, particularly at scale.
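The following Kafka Streams sketch shows that consumer-side burden: a normalised orders stream must be joined with a customers table before an order can be acted upon. The topic names, the string payloads, and the assumption that orders are keyed by customerId are all illustrative:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class NormalisedJoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Normalised input: orders carry only a customer reference, so the
        // consumer must materialise the customers topic and join on every order.
        KTable<String, String> customers =
                builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> orders =
                builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));

        // Stream-table join; assumes the orders stream is keyed by customerId.
        orders.join(customers, (order, customer) -> order + ",customer=" + customer)
              .to("orders-enriched", Produced.with(Serdes.String(), Serdes.String()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```

Every consumer of the normalised streams must materialise its own copy of the customers table, which is the computational and storage cost discussed below.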

[Figure: Kafka KTable]

3.2:  Denormalised

Denormalised event streams merge related information into a single, streamlined structure. This simplifies consumption by removing the need for joins and making event payloads more self-contained. Denormalisation can be performed at event creation time or after publication. At creation time, the information is transformed before entering the event stream, often via an intermediary layer that shields consumers from the internal data model. Alternatively, denormalisation can take place after production, using stream processing tools to enrich the events with additional context.
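As a sketch of denormalisation at creation time, the producer below embeds the resolved customer details before publishing, sparing consumers any join. The topic name and the hand-built JSON payload are illustrative:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DenormalisedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The customer details are looked up and embedded at write time,
            // sparing every downstream consumer a join against "customers".
            String order = """
                    {"orderId": "o-42",
                     "customerId": "c-7",
                     "customerName": "Ada Lovelace",
                     "customerTier": "gold",
                     "total": 99.50}""";
            producer.send(new ProducerRecord<>("orders-denormalised", "o-42", order));
        }
    }
}
```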

3.3:  Summary

Essential factors when deciding between normalisation and denormalisation include:

  1. Consumer requirements: Denormalised events are easier for consumers to work with, particularly when they use technologies that don’t support streaming joins.
  2. Efficiency implications: Normalised streams push the join work onto consumers, which can be expensive in computational power and state storage, especially with frequent updates.
  3. Data model adaptability: Denormalisation helps avoid tight coupling to the internal data model, insulating consumers from changes in the source system.

In conclusion, although normalised event streams may be more straightforward to establish at first, they frequently result in intricate data handling for consumers. Denormalised event streams, while demanding more processing capacity up front, provide a more accessible experience by integrating information and minimising the need for complex joins. The decision between the two hinges on consumer requirements, the capabilities of the consuming technology, and how interrelated the underlying data model is.

Get started with OSO professional services for Apache Kafka

Have a conversation with a Kafka expert to discover how we can help you adopt Apache Kafka in your business.

CONTACT US