Bahubali Shetti

Automatic instrumentation with OpenTelemetry for Python applications

Learn how to auto-instrument Python applications using OpenTelemetry. With standard commands in a Dockerfile, applications can be instrumented quickly without writing code in multiple places, enabling rapid change, scale, and easier management.


DevOps and SRE teams are transforming the process of software development. While DevOps engineers focus on efficient software applications and service delivery, SRE teams are key to ensuring reliability, scalability, and performance. These teams must rely on a full-stack observability solution that allows them to manage and monitor systems and ensure issues are resolved before they impact the business.

Observability across the entire stack of modern distributed applications requires data collection, processing, and correlation often in the form of dashboards. Ingesting all system data requires installing agents across stacks, frameworks, and providers — a process that can be challenging and time-consuming for teams who have to deal with version changes, compatibility issues, and proprietary code that doesn't scale as systems change.

Thanks to OpenTelemetry (OTel), DevOps and SRE teams now have a standard way to collect and send data that doesn't rely on proprietary code and has a large support community reducing vendor lock-in.

In a previous blog, we also reviewed how to use the OpenTelemetry demo and connect it to Elastic®, as well as some of Elastic’s capabilities with OpenTelemetry visualizations and Kubernetes.

In this blog, we will show how to use automatic instrumentation for OpenTelemetry with the Python service of our application called Elastiflix, which helps highlight auto-instrumentation in a simple way.

The beauty of this is that there is no need for the otel-collector! This setup enables you to slowly and easily migrate an application to OTel with Elastic according to a timeline that best fits your business.

Application, prerequisites, and config

The application that we use for this blog is called Elastiflix, a movie-streaming application. It consists of several micro-services written in .NET, NodeJS, Go, and Python.

Before we instrument our sample application, we will first need to understand how Elastic can receive the telemetry data.

All of Elastic Observability’s APM capabilities are available with OTel data. Some of these include:

  • Service maps
  • Service details (latency, throughput, failed transactions)
  • Dependencies between services, distributed tracing
  • Transactions (traces)
  • Machine learning (ML) correlations
  • Log correlation

In addition to Elastic’s APM and a unified view of the telemetry data, you will also be able to use Elastic’s powerful machine learning capabilities to reduce the analysis effort, along with alerting to help reduce MTTR.

Prerequisites

View the example source code

The full source code, including the Dockerfile used in this blog, can be found on GitHub. The repository also contains the same application without instrumentation. This allows you to compare each file and see the differences.

The following steps will show you how to instrument this application and run it on the command line or in Docker. If you are interested in a more complete OTel example, take a look at the docker-compose file here, which will bring up the full project.

Step-by-step guide

Step 0. Log in to your Elastic Cloud account

This blog assumes you have an Elastic Cloud account — if not, follow the instructions to get started on Elastic Cloud.

Step 1. Configure auto-instrumentation for the Python Service

We are going to use automatic instrumentation with the Python service from the Elastiflix demo application.

We will be using the following service from Elastiflix:

Elastiflix/python-favorite-otel-auto
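The full source of this service is in the repository. As a rough, purely illustrative sketch of what auto-instrumentation operates on, a minimal Flask and Redis based favorites endpoint might look like the following (names and connection details are hypothetical). Notice that there are no OpenTelemetry imports anywhere; the instrumentation is added entirely from the outside.

# main.py (illustrative sketch only; the real service lives in the Elastiflix repository)
# There are no OpenTelemetry imports here: opentelemetry-instrument wraps the process
# and instruments Flask and Redis automatically.
from flask import Flask, jsonify
import redis

app = Flask(__name__)
r = redis.Redis(host="redis", port=6379)  # hypothetical connection details

@app.route("/favorites", methods=["GET"])
def favorites():
    # Read the stored favorites from Redis and return them as JSON.
    items = r.smembers("favorites")
    return jsonify(favorites=[item.decode() for item in items])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)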

Per the OpenTelemetry Automatic Instrumentation for Python documentation, you will simply install the appropriate Python packages using pip install.

pip install opentelemetry-distro \
	opentelemetry-exporter-otlp

opentelemetry-bootstrap -a install

If you are running the Python service on the command line, then you can use the following command:

opentelemetry-instrument python main.py

For our application, we do this as part of the Dockerfile.

Dockerfile

FROM python:3.9-slim as base

# get packages
COPY requirements.txt .
RUN pip install -r requirements.txt
WORKDIR /favoriteservice

#install opentelemetry packages
RUN pip install opentelemetry-distro \
	opentelemetry-exporter-otlp

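# opentelemetry-bootstrap scans the packages installed above and pulls in the matching
# instrumentation libraries (for example, Flask and Redis instrumentations if those packages are present)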
RUN opentelemetry-bootstrap -a install

# Add the application
COPY . .

EXPOSE 5000
ENTRYPOINT [ "opentelemetry-instrument", "python", "main.py"]

Step 2. Running the Docker image with environment variables

As specified in the OTel Python documentation, we will use environment variables to pass in the configuration values that allow the service to connect to Elastic Observability’s APM server.

Because Elastic accepts OTLP natively, we just need to provide the endpoint and authentication details the OTLP exporter needs to send the data, as well as some other environment variables.

Getting Elastic Cloud variables

You can copy the endpoints and token from Kibana® under the path /app/home#/tutorial/apm.

You will need to copy the following environment variables:

OTEL_EXPORTER_OTLP_ENDPOINT
OTEL_EXPORTER_OTLP_HEADERS

Build the image

docker build -t python-otel-auto-image .

Run the image

docker run \
       -e OTEL_EXPORTER_OTLP_ENDPOINT="<REPLACE WITH OTEL_EXPORTER_OTLP_ENDPOINT>" \
       -e OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer%20<REPLACE WITH TOKEN>" \
       -e OTEL_RESOURCE_ATTRIBUTES="service.version=1.0,deployment.environment=production" \
       -e OTEL_SERVICE_NAME="python-favorite-otel-auto" \
       -p 5000:5000 \
       python-otel-auto-image

Important: Note that the “OTEL_EXPORTER_OTLP_HEADERS” variable has the whitespace after Bearer escaped as “%20”; the Python OTLP exporter expects URL-encoded header values.
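If you would rather build that header value programmatically than escape it by hand, a small sketch using only Python’s standard library (with a placeholder token) could look like this:

# Sketch: URL-encode the Authorization header value for OTEL_EXPORTER_OTLP_HEADERS.
from urllib.parse import quote

secret_token = "abc123"  # placeholder; use your real APM secret token
print("Authorization=" + quote(f"Bearer {secret_token}"))
# prints: Authorization=Bearer%20abc123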

You can now issue a few requests in order to generate trace data. Note that these requests are expected to return an error, as this service relies on a connection to Redis that you don’t currently have running. As mentioned before, you can find a more complete example using docker-compose here.

curl localhost:5000/favorites

# or alternatively issue a request every second

while true; do curl "localhost:5000/favorites"; sleep 1; done;
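If you prefer Python over a shell loop, a small self-contained sketch using only the standard library generates the same traffic (the failing responses are expected while Redis is absent):

# Sketch: call the favorites endpoint once per second to generate trace data.
import time
import urllib.request
import urllib.error

URL = "http://localhost:5000/favorites"

while True:
    try:
        with urllib.request.urlopen(URL, timeout=5) as response:
            print(response.status, response.read()[:80])
    except urllib.error.URLError as exc:
        # Expected while Redis is not running; the instrumented request still produces a trace.
        print("request failed:", exc)
    time.sleep(1)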

Step 3: Explore traces, metrics, and logs in Elastic APM

Exploring the Services section in Elastic APM, you’ll see the Python service displayed.

Clicking on the python-favorite-otel-auto service, you can see that it is ingesting telemetry data using OpenTelemetry.

In this blog, we discussed the following:

  • How to auto-instrument Python with OpenTelemetry
  • Using standard commands in a Dockerfile, auto-instrumentation was done efficiently and without adding code in multiple places

Since Elastic supports a mix of methods for ingesting data, whether auto-instrumentation with open-source OpenTelemetry or manual instrumentation with its native APM agents, you can plan your migration to OTel by focusing on a few applications first and then adopting OpenTelemetry across the rest of your applications in a manner that best fits your business needs.


Don’t have an Elastic Cloud account yet? Sign up for Elastic Cloud and try out the auto-instrumentation capabilities that I discussed above. I would be interested in getting your feedback about your experience in gaining visibility into your application stack with Elastic.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.
