How to implement Distributed Tracing for Postgres Queries with OpenTelemetry and Jaeger

Learn how to implement distributed tracing in a microservices-driven architecture using OpenTelemetry and Jaeger. This comprehensive guide covers setting up a Python-based FastAPI application, instrumenting Postgres queries, configuring Jaeger, and analyzing traces to optimize system performance.


In a microservices-driven architecture, understanding the flow of requests across multiple services is crucial for maintaining and optimizing system performance. Distributed tracing is a powerful technique that helps developers and operations teams visualize and analyze the journey of requests through complex distributed systems. In this post, we'll walk through an example of distributed tracing implemented for Postgres queries using OpenTelemetry and Jaeger.

Introduction to Distributed Tracing

Before we dive into the nitty-gritty of implementation, let's take a moment to understand why distributed tracing is so crucial in modern software architecture.

Imagine you're running an e-commerce platform with microservices handling user authentication, product catalog, shopping cart, and payment processing. A single user action, like placing an order, might involve multiple services communicating with each other. When something goes wrong or performance degrades, pinpointing the exact cause can be like finding a needle in a haystack.

This is where distributed tracing comes in. It allows you to:

  1. Visualize the end-to-end journey of a request
  2. Identify bottlenecks and performance issues
  3. Understand dependencies between services
  4. Troubleshoot errors more effectively

According to a 2023 survey by the Cloud Native Computing Foundation (CNCF), 62% of organizations now use distributed tracing in production environments, up from 47% in 2020. This significant increase underscores the growing importance of this technology in managing complex distributed systems.

Understanding OpenTelemetry and Jaeger

Now that we've established the importance of distributed tracing, let's introduce our tools of choice: OpenTelemetry and Jaeger.

OpenTelemetry

OpenTelemetry is an open-source observability framework for cloud-native software. It provides a single set of APIs, libraries, agents, and collector services to capture distributed traces and metrics from your application. Key benefits include:

  • Vendor-neutral: OpenTelemetry is supported by all major cloud providers and observability vendors.
  • Language support: It offers instrumentation libraries for most popular programming languages.
  • Extensibility: You can easily add custom instrumentation to your code.

Jaeger

Jaeger, originally created by Uber, is a popular open-source distributed tracing system. It's used for monitoring and troubleshooting microservices-based distributed systems. Jaeger provides:

  • Distributed context propagation
  • Distributed transaction monitoring
  • Root cause analysis
  • Service dependency analysis
  • Performance / latency optimization

Together, OpenTelemetry and Jaeger form a powerful combo for implementing distributed tracing in your applications.

Setting Up the Environment

Let's roll up our sleeves and set up our development environment. We'll use a Python-based web application with a Postgres database for this example.

First, ensure you have the following installed:

  • Python 3.8+
  • PostgreSQL 13+
  • Docker (for running Jaeger)

Now, let's create a new Python project and install the necessary dependencies:


mkdir postgres-tracing-demo
cd postgres-tracing-demo
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`

pip install fastapi uvicorn psycopg2-binary opentelemetry-api opentelemetry-sdk opentelemetry-instrumentation-fastapi opentelemetry-instrumentation-psycopg2 opentelemetry-exporter-jaeger

This installs FastAPI (a modern web framework), psycopg2 (Postgres adapter for Python), and the necessary OpenTelemetry libraries.

Next, let's create a simple FastAPI application with a Postgres connection. Create a file named main.py:


import os
from fastapi import FastAPI
import psycopg2

app = FastAPI()

# Postgres connection details
DB_HOST = os.getenv("DB_HOST", "localhost")
DB_NAME = os.getenv("DB_NAME", "postgres")
DB_USER = os.getenv("DB_USER", "postgres")
DB_PASS = os.getenv("DB_PASS", "password")

def get_db_connection():
    return psycopg2.connect(
        host=DB_HOST,
        database=DB_NAME,
        user=DB_USER,
        password=DB_PASS
    )

@app.get("/")
async def root():
    return {"message": "Hello World"}

@app.get("/users")
async def get_users():
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute("SELECT * FROM users;")
    users = cur.fetchall()
    cur.close()
    conn.close()
    return {"users": users}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
    
    

This sets up a basic FastAPI application with two endpoints: a root endpoint and a /users endpoint that fetches data from a Postgres database.
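
The /users endpoint assumes a users table already exists. If your database doesn't have one yet, a small seed script like the one below can create and populate it. The schema here is just an assumption for the demo; adjust it to match whatever data you want to query:


# seed.py - create and populate the users table assumed by the /users endpoint
import os

import psycopg2

conn = psycopg2.connect(
    host=os.getenv("DB_HOST", "localhost"),
    database=os.getenv("DB_NAME", "postgres"),
    user=os.getenv("DB_USER", "postgres"),
    password=os.getenv("DB_PASS", "password"),
)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT NOT NULL);")
cur.execute("INSERT INTO users (name) VALUES ('alice'), ('bob');")
conn.commit()
cur.close()
conn.close()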

Instrumenting Postgres Queries with OpenTelemetry

Now comes the exciting part: adding OpenTelemetry instrumentation to our application. We'll modify our main.py file to include OpenTelemetry tracing:


import os
from fastapi import FastAPI
import psycopg2

from opentelemetry import trace
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.jaeger.thrift import JaegerExporter

# Initialize TracerProvider
trace.set_tracer_provider(TracerProvider())

# Create a JaegerExporter
jaeger_exporter = JaegerExporter(
    agent_host_name="localhost",
    agent_port=6831,
)

# Add BatchSpanProcessor to the TracerProvider
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(jaeger_exporter)
)

app = FastAPI()

# Instrument FastAPI
FastAPIInstrumentor.instrument_app(app)

# Instrument Psycopg2
Psycopg2Instrumentor().instrument()

# ... (rest of the code remains the same)

Let's break down what we've added:

  1. We import necessary OpenTelemetry modules.
  2. We set up a TracerProvider, which is responsible for creating tracers.
  3. We create a JaegerExporter that will send our traces to Jaeger.
  4. We add a BatchSpanProcessor to the TracerProvider. This processor batches spans before sending them to Jaeger, which is more efficient than sending them one by one.
  5. We use FastAPIInstrumentor to automatically instrument our FastAPI application.
  6. We use Psycopg2Instrumentor to automatically instrument Postgres queries made with psycopg2.

With these changes, our application will now generate traces for incoming HTTP requests and Postgres queries.
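
One optional tweak worth making at this point: by default the Python SDK reports spans under the service name unknown_service, which makes your application harder to find in the Jaeger UI. You can attach an explicit service name by passing a Resource when constructing the TracerProvider; replace the plain TracerProvider() call above with something like this (the name postgres-tracing-demo is just an example):


from opentelemetry import trace
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider

# Attach a service name so traces are easy to locate in the Jaeger UI
resource = Resource.create({SERVICE_NAME: "postgres-tracing-demo"})
trace.set_tracer_provider(TracerProvider(resource=resource))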

Configuring Jaeger for Trace Collection

Now that our application is instrumented, we need to set up Jaeger to collect and visualize our traces. We'll use Docker to run Jaeger:


docker run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 14269:14269 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.35
  
  

This command starts Jaeger in all-in-one mode, which includes the Jaeger agent, collector, query, and UI components. The two ports that matter for this walkthrough are 6831/udp, where the agent receives spans from our exporter, and 16686, which serves the Jaeger UI.

Analyzing Traces in Jaeger UI

With everything set up, let's run our application and generate some traces:


uvicorn main:app --reload

Now, make a few requests to your application:


curl http://localhost:8000/
curl http://localhost:8000/users

To view the traces, open your browser and navigate to http://localhost:16686. You should see the Jaeger UI. Here's what you can do:

  1. Select your service from the "Service" dropdown.
  2. Click "Find Traces" to see a list of traces.
  3. Click on any trace to see its details.

You'll be able to see the entire journey of each request, including the time spent in the FastAPI application and in Postgres queries.

Best Practices and Considerations

As you implement distributed tracing in your applications, keep these best practices in mind:

1. Sampling: In high-traffic systems, tracing every single request can be resource-intensive. Implement a sampling strategy to trace a representative subset of requests (a minimal sampler configuration is sketched after this list).
2. Context Propagation: Ensure that trace context is properly propagated across service boundaries. OpenTelemetry provides utilities for this.
3. Meaningful Span Names: Use clear, descriptive names for your spans to make traces easier to understand.
4. Add Custom Attributes: Enrich your spans with custom attributes that provide additional context. For example:


from opentelemetry import trace

tracer = trace.get_tracer(__name__)

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    with tracer.start_as_current_span("get_user") as span:
        span.set_attribute("user.id", user_id)
        # ... fetch user from database
        
        

5. Monitor Trace Data Volume: Keep an eye on the volume of trace data you're generating. It can grow quickly in large systems.
6. Security Considerations: Be careful not to include sensitive information (like passwords or personal data) in your traces.
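
Returning to the sampling point above, here's a minimal sketch of how it could be configured using the SDK's built-in ParentBased and TraceIdRatioBased samplers. The 10% ratio is purely illustrative, and the sampler is supplied when the TracerProvider is constructed:


from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Sample roughly 10% of new traces; child spans follow their parent's decision
sampler = ParentBased(TraceIdRatioBased(0.1))
trace.set_tracer_provider(TracerProvider(sampler=sampler))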

Advanced Techniques

Once you're comfortable with basic tracing, you can explore more advanced techniques:

1. Distributed Context Propagation: If your application calls other services, ensure you're propagating the trace context. OpenTelemetry provides utilities for this:


from opentelemetry.propagate import inject
import requests

@app.get("/proxy")
async def proxy_request():
    with tracer.start_as_current_span("proxy_request") as span:
        headers = {}
        inject(headers)  # This injects the current trace context into the headers
        response = requests.get("https://api.example.com", headers=headers)
        return response.json()
        
        

2. Asynchronous Tracing: If you're using asynchronous code, make sure you're using the appropriate context management:


import asyncio

from opentelemetry import context

@app.get("/async")
async def async_endpoint():
    ctx = context.get_current()
    # Do some async work
    await asyncio.sleep(1)
    # Re-attach the captured context so the new span gets the correct parent
    token = context.attach(ctx)
    try:
        with tracer.start_as_current_span("async_operation"):
            pass  # Do more work here
    finally:
        context.detach(token)
    
    

3. Custom Span Processors: You can create custom span processors for advanced use cases, like filtering or modifying spans before they're exported:


from typing import Optional

from opentelemetry import trace
from opentelemetry.context import Context
from opentelemetry.sdk.trace import ReadableSpan, Span, SpanProcessor

class CustomSpanProcessor(SpanProcessor):
    def on_start(self, span: Span, parent_context: Optional[Context] = None) -> None:
        # Rename spans whose names suggest sensitive information
        if "sensitive" in span.name:
            span.update_name("redacted_operation")

    def on_end(self, span: ReadableSpan) -> None:
        pass

trace.get_tracer_provider().add_span_processor(CustomSpanProcessor())

4. Integrating with Logging: Correlate your logs with your traces for even more powerful debugging:


import logging

from opentelemetry.trace import get_current_span

@app.get("/logged")
async def logged_endpoint():
    span = get_current_span()
    # Format the trace ID as 32-char hex so it matches what Jaeger displays
    trace_id = format(span.get_span_context().trace_id, "032x")
    logging.info("Processing request", extra={"trace_id": trace_id})
    return {"message": "request logged"}
    
    

Conclusion

Implementing distributed tracing for Postgres queries using OpenTelemetry and Jaeger can significantly enhance your ability to understand and optimize your application's performance. By following the steps outlined in this article, you've gained the knowledge to:

  1. Set up a basic FastAPI application with Postgres integration
  2. Instrument your application using OpenTelemetry
  3. Configure Jaeger for trace collection and visualization
  4. Analyze traces to gain insights into your application's behavior

Remember, distributed tracing is just one piece of the observability puzzle. For a complete picture, consider integrating it with other observability tools like metrics and logging.

As your system grows, you may need to scale your tracing infrastructure. Consider exploring more advanced deployment options for Jaeger, such as using a separate collector and storage backend like Elasticsearch or Cassandra for better performance and data retention.

Distributed tracing has become an essential tool in the modern developer's toolkit. According to a report by MarketsandMarkets, the global application performance monitoring market size is expected to grow from $5.7 billion in 2020 to $12.9 billion by 2025, with distributed tracing playing a significant role in this growth.

With these techniques, you're well-equipped to tackle the challenges of debugging and optimizing complex distributed systems.
