
OpenTelemetry Instrumentation

OpenTelemetry is an observability framework designed to aid in the generation and collection of application telemetry data such as metrics, logs, and traces.

One of the biggest advantages of using OpenTelemetry is that it is vendor-agnostic. It can export data in multiple formats which you can send to a backend of your choice.

This project includes support for sending metrics and traces with OpenTelemetry, making it easy to integrate into the observability stack of your choice.

Setup Automatic Instrumentation (Python)

This service uses the OpenTelemetry Python SDK and FastAPI instrumentation. Configuration is done via standard OpenTelemetry environment variables. There is no Java agent; instead, the application relies on Python instrumentation libraries.
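As a sketch, assuming the service is started with uvicorn and that the opentelemetry-distro and opentelemetry-exporter-otlp packages are installed, zero-code instrumentation can be applied by wrapping the start command (the main:app entry point and port are illustrative, not necessarily this project's):

```shell
# Install the distro, the OTLP exporter, and instrumentation
# libraries matching the installed packages (e.g. FastAPI).
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install

# Wrap the normal start command so instrumentation is applied
# automatically at startup.
opentelemetry-instrument uvicorn main:app --host 0.0.0.0 --port 5002
```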

Here’s an example configuration via environment variables:

OTEL_TRACES_EXPORTER=otlp \
OTEL_METRICS_EXPORTER=otlp \
OTEL_SERVICE_NAME=collibra-data-catalog-plugin \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5555

Here’s an explanation of what each environment variable does:

  • OTEL_SERVICE_NAME sets the name of the service associated with your telemetry, and is sent to your Observability backend.

  • OTEL_TRACES_EXPORTER specifies which traces exporter to use. In this case, traces are exported via OTLP.

  • OTEL_METRICS_EXPORTER specifies which metrics exporter to use. In this case, metrics are also exported via OTLP.

  • OTEL_EXPORTER_OTLP_ENDPOINT sets the endpoint where telemetry is exported to. If omitted, the default Collector endpoint is used, which is http://localhost:4317 for gRPC and http://localhost:4318 for HTTP.

Example observability backend using Docker Compose

Note: Ensure that Docker Compose is installed on your machine.

You can find a basic observability backend built with Grafana, Tempo, Prometheus and the OpenTelemetry Collector in the otel directory. You can run it with:

docker compose up

This will run the Collector and all the other services. With the configuration outlined above, the application will send telemetry data to this backend, so you can start experimenting with it immediately, provided the application is running outside of Docker.

If you want to test locally using Docker, you can use the local machine hostname as the endpoint for the telemetry exporter, like this:

docker run --name collibra-python-container \
-e OTEL_EXPORTER_OTLP_ENDPOINT=http://$(hostname -f):5555 \
-p 5002:5002 collibra-data-catalog-plugin-python

Grafana

If run locally, Grafana is available at localhost:3000; for more information on how to use Grafana, refer to the official Grafana documentation.

Grafana dashboards

Grafana allows you to create dashboards combining traces and metrics received from the application. As a starting point, we provide a basic default dashboard showing successful vs. failed HTTP requests. This dashboard is mounted as a volume into the Grafana container and can be found in the local directory otel/o11y-backend/grafana/dashboards.

If you want to modify this default dashboard, create a new dashboard or edit an existing one in the running Grafana instance and export it as a JSON file (see the Grafana documentation on exporting dashboards). Place the JSON file in the directory above so it is loaded at startup; all JSON dashboards in that directory are loaded when the container starts.

Dashboards loaded in this fashion cannot be deleted from the Grafana UI.

Application Metric example

Automatic instrumentation captures telemetry from supported libraries and frameworks. For custom metrics in Python, you can use the OpenTelemetry Metrics API or libraries that integrate with it. Refer to the OpenTelemetry Python metrics documentation.