Communication in Microservices: Choosing the Right Approach

This article explores effective communication strategies for microservices architecture. It highlights the importance of choosing appropriate protocols and examines the benefits and drawbacks of synchronous and asynchronous communication.



If you are new to software architecture or microservices, check out our blog on Microservices architecture before reading this one.

A continuous-delivery platform suited to microservice-oriented design also delivers greater resilience. Microservices boost developer productivity, improve scalability, and encourage quicker innovation in response to shifting market conditions. Each microservice is developed as a discrete, self-contained piece of software, and running a microservice architecture often involves many calls between numerous such autonomous, single-responsibility units.
When building a microservices-based application, communication between the services is one of the most crucial parts, yet it can be challenging to get right.
The components of a programme with a monolithic design communicate with one another through language-level calls, so the programme runs as a single process. These components may be tightly coupled if concrete objects are instantiated directly in code, or loosely coupled if references to abstractions are wired in through dependency injection.
A microservices-based programme, by contrast, runs as a system spread across multiple servers or hosts, with many separate processes and services. Because each service instance typically runs as its own process, services must communicate through an inter-process communication protocol. These protocols may be HTTP, AMQP, gRPC, or a binary TCP protocol, depending on the characteristics of each service. All of this means that the microservices design requires new communication techniques. Since some communication methods are inefficient and hurt the performance of the software, it is crucial to select the one best suited to each microservice-based programme.

Types of microservice communications:

Several communication methods, each suited to a particular situation, can be used between the client and the various microservices. These methods are generally categorized along two axes:

  • By protocol type: synchronous or asynchronous
  • By number of receivers: one or many

Synchronous:

Synchronous means that the client sends a request and then waits for the response before continuing. HTTP is a typical synchronous protocol.
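To make this concrete, here is a minimal Java sketch of a blocking synchronous call using the JDK's built-in HttpClient; the order-service URL is a placeholder, not a real endpoint.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SyncOrderClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint of an "orders" microservice.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orders.internal/api/orders/42"))
                .GET()
                .build();

        // send() blocks the calling thread until the response arrives,
        // which is what makes this interaction synchronous.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + " " + response.body());
    }
}
```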

Asynchronous:

The client simply submits the request, typically to a message broker, without waiting for a response. AMQP (Advanced Message Queuing Protocol) is an asynchronous protocol.
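As a rough illustration, the sketch below publishes a message over AMQP with the RabbitMQ Java client (com.rabbitmq:amqp-client); the broker address, queue name, and payload are placeholders.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class OrderEventPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker address is an assumption

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Declare a durable queue; "order-events" is a placeholder name.
            channel.queueDeclare("order-events", true, false, false, null);

            String message = "{\"orderId\": 42, \"status\": \"CREATED\"}";

            // basicPublish returns as soon as the message is handed to the
            // broker; the publisher does not wait for any consumer.
            channel.basicPublish("", "order-events", null,
                    message.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

The publisher and the eventual consumer never talk to each other directly; the broker decouples them in time.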

Single Receiver:

Every request is received and dealt with by a single recipient.

Multiple Receiver:

Any number of receivers may handle a given request. This protocol has to be asynchronous.


Synchronous

HTTP and gRPC are the two most widely used synchronous communication protocols. For internal microservice communication, the binary gRPC protocol is recommended because it is faster. Even though both kinds of communication are synchronous, different protocols are typically used for client calls and for internal communication: client-facing requests are usually RESTful over HTTP so that payloads are explicit and easy to inspect, whereas for backend communication that readability can be traded away in favour of faster response times. gRPC is considerably faster than HTTP with JSON payloads.
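As a rough sketch of such an internal gRPC call in Java (using grpc-java), the client below assumes an Inventory service whose StockRequest/StockReply messages and InventoryGrpc stub are illustrative names generated from a hypothetical .proto file.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class InternalGrpcClient {
    public static void main(String[] args) {
        // Plaintext channel to a hypothetical internal service.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("inventory.internal", 50051)
                .usePlaintext()
                .build();

        // InventoryGrpc, StockRequest and StockReply are assumed to be
        // generated by the protobuf/gRPC compiler from a .proto definition.
        InventoryGrpc.InventoryBlockingStub stub =
                InventoryGrpc.newBlockingStub(channel);

        StockReply reply = stub.getStock(
                StockRequest.newBuilder().setSku("ABC-123").build());

        System.out.println("In stock: " + reply.getQuantity());
        channel.shutdown();
    }
}
```

Because the request and response travel as compact protobuf binaries over HTTP/2, this call is typically faster than an equivalent REST call carrying JSON.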
If we want to communicate synchronously, there are several options, such as REST over HTTP or gRPC.
The most widely used method for tying microservices together is HTTP, and it is often an appropriate choice: it works well when a client calls an internal endpoint or when the API Gateway calls a back-end endpoint. But suppose your system consists of several microservices, and to supply a particular piece of data those services must call one another sequentially before the data can be returned to the client. That is an HTTP call chain, and it brings several problems.

What are the problems?

Low Performance: 

Since HTTP is synchronous, a response to the initial request will not arrive until all internal HTTP calls have completed. As long as none of those calls blocks, everything is fine; but as soon as one call stalls, every caller upstream has to wait, and the more HTTP requests there are in the chain, the more significantly performance suffers.

Loss of Autonomy:

In a microservices architecture it is preferable that services remain unaware of one another. If they call each other directly over HTTP, they cannot be truly independent.

Complicated Failure Management:

If one intermediary microservice in an HTTP call chain fails, the entire chain fails, unless you have a good circuit-breaker approach and a retry scenario to recover from such failures. Yet as the linkages become more intricate, putting such a failure-handling strategy into practice becomes increasingly difficult, if not impossible.
Request/response chains should therefore be kept to a minimum to preserve microservice autonomy and produce a more robust architecture. It is also advisable for all inter-microservice communication, including queries, to use asynchronous, message- or event-based integration. Even if you do choose HTTP, it is far preferable to use HTTP polling rather than a plain blocking request/response cycle.

What is HTTP polling?

Non-blocking requests can be made using the polling approach. It is especially helpful for apps that must send queries to services that take a while to respond.

How does HTTP polling work?

  • Similar to a standard HTTP request, the client submits a request to the server.
  • The server acknowledges the request immediately, while the work is still in progress.
  • After a predetermined amount of time, the client checks with the server to see whether the request has been handled.
  • If it has been, the client receives the response.
  • If not, the client polls again after a while.
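Putting these steps together, here is a minimal Java sketch of a polling client; it assumes the service answers the initial request with 202 Accepted and a Location header pointing at a status endpoint, and the URLs and payload are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PollingClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. Submit the long-running job; the service is assumed to answer
        //    202 Accepted with a Location header for the status resource.
        HttpRequest submit = HttpRequest.newBuilder()
                .uri(URI.create("http://reports.internal/api/reports"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"type\":\"monthly\"}"))
                .build();
        HttpResponse<String> accepted =
                client.send(submit, HttpResponse.BodyHandlers.ofString());
        String statusUrl = accepted.headers().firstValue("Location").orElseThrow();

        // 2. Poll the status endpoint until the result is ready.
        while (true) {
            HttpResponse<String> status = client.send(
                    HttpRequest.newBuilder(URI.create(statusUrl)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            if (status.statusCode() == 200) {   // result is ready
                System.out.println(status.body());
                break;
            }
            Thread.sleep(2_000);                // wait before polling again
        }
    }
}
```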

For internal communication, synchronous integration is generally not advised. It prevents microservices from being autonomous, and when one service fails, the performance of the system as a whole is affected. The overall response time experienced by customers gets worse as the synchronous dependencies between microservices increase. For integrating microservices you can instead choose a message broker such as RabbitMQ or any other queuing system. To build a scalable architecture, asynchronous, event-driven communication is required.

How Do We Implement Asynchronous Communication between Microservices?

Some of the most widely used asynchronous integration technologies are:
Azure messaging services (for example, Azure Service Bus), Kafka, RabbitMQ, Google Pub/Sub, Amazon messaging services (for example, Amazon SQS/SNS), ActiveMQ.

The communication patterns used in asynchronous integration:

Messaging Queue:

In this pattern, messages are stored in a queue. One or more consumers may read from the queue, but any given message is delivered to only one consumer; once a consumer has read it, the message disappears from the queue. If no consumer is available when a message is sent, it is held in the queue until one becomes available to process it.
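For illustration, here is a minimal queue consumer using the RabbitMQ Java client; the broker address and the "order-events" queue are the same placeholders used earlier, and each acknowledged message is removed from the queue.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.nio.charset.StandardCharsets;

public class OrderEventConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker address is an assumption

        // Connection and channel stay open so the consumer keeps receiving.
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("order-events", true, false, false, null);

        // Each message is delivered to exactly one consumer and is removed
        // from the queue once it has been acknowledged.
        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("Processing: " + body);
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        channel.basicConsume("order-events", false, onMessage, consumerTag -> { });
    }
}
```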

Publish subscribe:

The publish-subscribe mechanism persists messages on a topic. By subscribing to one or more topics, subscribers can read all the messages published to them. Message producers are referred to as publishers, while message consumers are referred to as subscribers.
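As an illustration of publish-subscribe, the sketch below uses the Kafka Java client (org.apache.kafka:kafka-clients); the broker address, topic name, and consumer group are placeholders. Because messages on a Kafka topic are retained rather than deleted on read, every consumer group receives its own copy of each message.

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class OrderTopicExample {
    public static void main(String[] args) {
        // Publisher: appends an event to the "order-events" topic.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // assumption
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("order-events", "42",
                    "{\"status\":\"CREATED\"}"));
        }

        // Subscriber: each consumer group ("billing-service" here) receives
        // every message published to the topic.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "billing-service");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("order-events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
        }
    }
}
```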

Conclusion:

In conclusion, the majority of queries in a microservice architecture can be served using the synchronous request/response (HTTP) protocol. For delayed responses that could take seconds or longer to complete, you should build asynchronous, message-based communication. If a chain of requests is needed to supply a particular piece of data, a far better strategy is to replicate or propagate that data into the database of the microservice that needs it. The goal is to reduce the number of sequential calls between microservices; it is not a rule. The best method for synchronizing this data between bounded contexts is eventual consistency, carried out over asynchronous, message-based protocols. Last but not least, since you are setting out to build autonomous microservices, anything that creates a dependency on other microservices is an anti-pattern and needs to be avoided. And remember not to rank these different forms of communication as simply better or worse: comparing them is like comparing a truck to a bike; they are meant for different purposes, so it makes no sense. Make informed decisions based on what you need, your resources, and the circumstances of each situation.


Hi, I am Shagufta Yasmin. I work as a Backend Developer (Java and Microservices) with over 6 years of experience in software development. In my free time, I enjoy traveling and staying up to date with the latest developments in the tech industry.
