Saga microservice pattern

Data consistency across microservices in distributed transaction scenarios can be managed with the help of the Saga design pattern. A saga is a sequence of local transactions in which each service updates its data and publishes a message or event to trigger the next transaction step. If a step fails, the saga executes compensating transactions that cancel out the preceding transactions.

Before we talk about the Saga microservice pattern, let us talk about what necessitates it. With the rise of microservices, decentralization became a key theme across all aspects of software development, especially databases. Centralized databases were convenient and easy to maintain, but scaling them with demand was a nightmare and assigning clear ownership of the data was impossible, hence the database-per-service model came into view.

It is much easier to have each microservice manage its own database, as this conforms to the key microservices idea of complete isolation between services and allows for loose coupling and much greater flexibility. However, it comes with its own set of challenges.

One such functional challenge is collaboration between services when the data itself is isolated. A service can expose endpoints so others can query whatever data they need, but to maintain ACID-like guarantees across services it is essential to keep a ledger of all the changes made in the individual databases. In microservices, this ledger of activity is known as a saga.
All the ACID rules still apply in a saga, albeit in a distributed manner, so compensating transactions have to make sure that all the individual databases are updated accordingly.
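
To make the idea of a ledger with compensating transactions concrete, here is a minimal sketch in Python. It is framework-free, and all names (SagaLog, reserve_stock, refund_payment, and so on) are illustrative assumptions, not part of any particular saga library.

```python
# Minimal sketch of a saga log with compensating transactions.
# All names here are illustrative assumptions, not a real framework's API.

class SagaLog:
    def __init__(self):
        self.completed = []  # (step_name, compensation) pairs in execution order

    def run(self, steps):
        """Run each (name, action, compensation) step; on failure, compensate in reverse order."""
        for name, action, compensation in steps:
            try:
                action()
                self.completed.append((name, compensation))
            except Exception as exc:
                print(f"step '{name}' failed: {exc}; compensating previous steps")
                for done_name, undo in reversed(self.completed):
                    print(f"compensating '{done_name}'")
                    undo()
                raise

# Hypothetical local transactions for an order flow.
def reserve_stock():  print("stock reserved")
def release_stock():  print("stock released")
def charge_payment(): raise RuntimeError("card declined")
def refund_payment(): print("payment refunded")

if __name__ == "__main__":
    saga = SagaLog()
    try:
        saga.run([
            ("reserve_stock", reserve_stock, release_stock),
            ("charge_payment", charge_payment, refund_payment),
        ])
    except RuntimeError:
        print("order rolled back via compensating transactions")
```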

Now, coming back to sagas: a transaction in a saga is a discrete unit of logic or work that may consist of several operations. An event records that the state of an entity changed during a transaction, while a command carries all the information required to perform an action or trigger a subsequent event (a minimal sketch of these message shapes follows the list below). Transactions are required to be atomic, consistent, isolated, and durable (ACID). Although transactions within a single service are ACID, a cross-service transaction management technique is necessary to ensure data consistency.

In the context of multi-service architectures:

- Atomicity means the set of operations is indivisible: either all of them take place or none at all.
- Consistency means the transaction moves data only between valid states.
- Isolation ensures that concurrent transactions produce the same data state as if they had been executed sequentially.
- Durability guarantees that committed transactions stay committed even in the event of a system failure or power loss.
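
As a minimal illustration of the command/event distinction described above, the sketch below models both as plain Python dataclasses. The field names are assumptions chosen for readability rather than a prescribed schema.

```python
# Sketch of the command/event message shapes described above.
# Field names are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Command:
    """Carries everything required to perform an action or trigger a subsequent event."""
    name: str       # e.g. "ProcessPayment"
    payload: dict   # data needed to carry out the action

@dataclass
class Event:
    """Records that the state of an entity changed during a transaction."""
    name: str        # e.g. "PaymentCompleted"
    entity_id: str   # the entity whose state changed
    payload: dict = field(default_factory=dict)
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# For example, the payment service might emit:
# Event(name="PaymentCompleted", entity_id="order-42", payload={"amount": 99.0})
```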

There are two ways of implementing a saga:

1. Saga choreography

Choreography sagas rely on events and their respective handlers in each microservice. For example, any online delivery service needs two essential services: an order service and a payment service. A typical order sequence would then look as follows (a minimal sketch of this flow is shown after the sequence).

1. The customer places the items in her cart and places the order; this generates an event towards the payment service.
2. This leads the customer to the payment service, where she provides payment details and is taken to the payment gateway.
      a. The customer authenticates on the payment gateway. If authentication succeeds and the payment goes through, another event is generated towards the order service, indicating payment completion and a successful order.
      b. If the payment fails instead, a failure event is generated. The order service receives this event and marks the order as failed.
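
The sequence above can be sketched with a tiny in-memory event bus standing in for a real message broker such as Kafka or RabbitMQ. Everything here, including the event names and the gateway_approves flag, is an illustrative assumption.

```python
# Choreography sketch: each service reacts to the other's events via a shared bus.
# The in-memory bus, event names, and gateway_approves flag are illustrative assumptions.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self.handlers[event_name].append(handler)

    def publish(self, event_name, data):
        for handler in list(self.handlers[event_name]):
            handler(data)

class OrderService:
    def __init__(self, bus):
        self.bus = bus
        bus.subscribe("PaymentCompleted", self.on_payment_completed)
        bus.subscribe("PaymentFailed", self.on_payment_failed)

    def place_order(self, order_id, amount):
        print(f"order {order_id} placed")
        self.bus.publish("OrderPlaced", {"order_id": order_id, "amount": amount})

    def on_payment_completed(self, data):
        print(f"order {data['order_id']} marked successful")

    def on_payment_failed(self, data):
        print(f"order {data['order_id']} marked failed")

class PaymentService:
    def __init__(self, bus, gateway_approves=True):
        self.bus = bus
        self.gateway_approves = gateway_approves  # stand-in for the payment gateway outcome
        bus.subscribe("OrderPlaced", self.on_order_placed)

    def on_order_placed(self, data):
        event = "PaymentCompleted" if self.gateway_approves else "PaymentFailed"
        self.bus.publish(event, data)

if __name__ == "__main__":
    bus = EventBus()
    orders = OrderService(bus)
    PaymentService(bus, gateway_approves=False)  # simulate a declined payment
    orders.place_order("order-42", 99.0)         # order placed, then marked failed
```

Note that no service calls another directly; each one only publishes and subscribes to events, which is what makes this choreography rather than orchestration.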

2. Saga orchestration

These sagas have a separate microservice that orchestrates the events instead of the services handling them themselves. The flow of events is nearly the same; the only difference is that the services talk to the orchestrator instead of to each other. The example above would look as follows in the case of orchestration (a minimal sketch follows the steps).

1. The customer places the items in her cart and places the order; this generates an event towards the orchestrator.
2. The orchestrator receives this event and instructs the payment service to handle the payment.
3. The payment service relays the payment status to the orchestrator through another event.
4. Upon receiving this event, the orchestrator generates a success or failure event towards the order service to mark the order accordingly.
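
In the orchestrated version a single component drives the flow by issuing commands and reacting to the replies. The sketch below keeps the calls synchronous for brevity; in practice the replies would arrive as events on a message broker, and all class and method names here are assumptions.

```python
# Orchestration sketch: a central orchestrator issues commands and reacts to outcomes.
# Class and method names are illustrative assumptions; calls are synchronous for brevity.

class PaymentService:
    def __init__(self, gateway_approves=True):
        self.gateway_approves = gateway_approves  # stand-in for the payment gateway outcome

    def handle_payment(self, order_id, amount):
        # In a real system this reply would come back as an event on a message broker.
        return "PaymentCompleted" if self.gateway_approves else "PaymentFailed"

class OrderService:
    def mark_success(self, order_id):
        print(f"order {order_id} marked successful")

    def mark_failed(self, order_id):
        print(f"order {order_id} marked failed")

class OrderSagaOrchestrator:
    def __init__(self, orders, payments):
        self.orders = orders
        self.payments = payments

    def on_order_placed(self, order_id, amount):
        status = self.payments.handle_payment(order_id, amount)
        if status == "PaymentCompleted":
            self.orders.mark_success(order_id)
        else:
            self.orders.mark_failed(order_id)

if __name__ == "__main__":
    orchestrator = OrderSagaOrchestrator(OrderService(), PaymentService(gateway_approves=True))
    orchestrator.on_order_placed("order-42", 99.0)  # prints: order order-42 marked successful
```

Here the services never talk to each other; only the orchestrator knows the overall sequence, which centralizes the saga logic in one place.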

These patterns make microservices a little more complex to develop because of the need for compensating transactions, but they help your microservices maintain consistent data throughout the stack without resorting to two-phase commit (2PC) or distributed transactions.

Hello, I am Akshay Rathore, a backend development expert, a chess player and a poet by passion.
