Deploying the OpenTelemetry Collector in a Docker Environment

The OpenTelemetry Collector is a vendor-agnostic service that receives, processes, and exports telemetry data such as logs, metrics, and traces. It acts as a middle layer between instrumented applications and telemetry backends, providing a flexible and scalable way to manage observability data.

By separating telemetry collection from your application code, the Collector helps you keep your telemetry workflows centralized and organized. It supports capabilities like batching, filtering, and resource transformation. You can even export your telemetry data to multiple observability platforms.

One of the most popular approaches to running the Collector is spinning it up as a Docker container. There are several advantages to this approach, including:

  • Isolates telemetry concerns from your app container: Your application container remains focused on business logic
  • Easier to update or replace: You can roll out changes to the Collector independently
  • Can serve multiple services on the same host or network: One Collector container can collect from many apps
  • Ideal for local dev, CI pipelines, or lightweight deployments: It’s quick to set up and easy to tear down
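
Before wiring the Collector into Compose, you can smoke-test the image on its own. Here’s a minimal sketch, assuming an otel-collector-config.yaml in the current directory (you’ll write one in Step 3):

$ docker run --rm \
    -p 4317:4317 -p 4318:4318 \
    -v $(pwd)/otel-collector-config.yaml:/etc/otel-collector-config.yaml \
    otel/opentelemetry-collector-contrib:latest \
    --config=/etc/otel-collector-config.yaml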

Introduction to the demo application

The example demo project you'll use is a product inventory API built with Node.js and Express. The application uses PostgreSQL for data storage and Redis for caching. The entire application stack—including the API server, PostgreSQL database, and Redis cache—is containerized using Docker and orchestrated with Docker Compose.

The code for this demo project can be found at this GitHub repository.

In this post, you'll learn how to deploy the OpenTelemetry Collector within your “dockerized” application. You will:

  1. Add the OpenTelemetry Collector to your Docker Compose setup
  2. Configure the Collector to receive telemetry data via the OpenTelemetry Protocol (OTLP) and export it to SolarWinds® Observability SaaS
  3. Instrument your Node.js application using the OpenTelemetry JavaScript SDK to capture distributed traces, structured logs, and metrics
  4. View the telemetry data in SolarWinds Observability SaaS

Step 1: Add the collector to your Docker Compose

To begin, add a new service to your docker-compose.yml. This service will use the official OpenTelemetry Collector Docker image and mount the collector configuration file you will create in Step 3.

For our example project, we modified docker-compose.yml so the main application knows where to send its telemetry data (the endpoint specified by OTEL_EXPORTER_OTLP_ENDPOINT). Then, we added the otel-collector service, passing along an API_TOKEN for SolarWinds Observability SaaS data ingestion, which we’ll supply as an environment variable when starting Docker Compose.

services:
  app:

    …
    environment:

      …
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
    depends_on:

      …
      - otel-collector

  postgres:

    …

  redis:

    …

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    environment:
      API_TOKEN: ${API_TOKEN}
    volumes:
      - type: bind
        source: ./otel-collector-config.yaml
        target: /etc/otel-collector-config.yaml
        read_only: true
    ports:
      - "14317:4317"   # OTLP gRPC
      - "14318:4318"   # OTLP HTTP
      - "13133:13133"  # health_check extension

Step 2: Obtain your SolarWinds Observability SaaS endpoint and API token

To send telemetry data to SolarWinds Observability SaaS, you will need to find your OpenTelemetry endpoint and create an API token for data ingestion. Follow these steps.

Log in to SolarWinds Observability SaaS. The URL, after you log in, may look similar to this:

https://my.na-01.cloud.solarwinds.com/

The xx-yy part of the URL (na-01 in this example) will depend on the data center used for your SolarWinds Observability SaaS account and organization. Take note of this.

Navigate to Settings > API Tokens.

On the API Tokens page, click Create API Token.

Specify a name for your new API token. Select Ingestion as the token type. You can also add tags if you wish. Click Create API Token.

Copy the resulting token value.
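
Keep the token somewhere safe. One convenient option is a .env file next to docker-compose.yml, which Docker Compose reads automatically for variable substitution (just don’t commit it to version control):

# .env
API_TOKEN=your-ingestion-token-here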

Step 3: Create the collector configuration file

Here’s what a basic otel-collector-config.yaml should include:

  • An OTLP receiver: Accepts telemetry data, typically on port 4318 for HTTP and 4317 for gRPC.
  • Processors for batching, resource attribution, and memory limits: These group and enrich data, attach service metadata (like service.name), and keep the Collector within its memory constraints.
  • Exporters to forward telemetry to external backends: These send telemetry to the observability platform of your choice. In our example, we’ll forward data to SolarWinds Observability SaaS.

The otel-collector-config.yaml file for your project looks like this:

receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
  resource:
    attributes:
      - key: service.name
        value: "product-inventory-demo"
        action: upsert
  memory_limiter:
    check_interval: 1s
    limit_mib: 1500
    spike_limit_mib: 512

exporters:
  otlp/swo:
    endpoint: "otel.collector.na-01.cloud.solarwinds.com:443"
    tls:
      insecure: false
    headers:
      "authorization": "Bearer ${API_TOKEN}"
  debug:
    verbosity: detailed

extensions:
  pprof:
    endpoint: 0.0.0.0:1777
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679

service:
  extensions: [health_check, pprof, zpages]
  telemetry:
    logs:
      level: debug
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, resource, batch]
      exporters: [otlp/swo, debug]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, resource, batch]
      exporters: [otlp/swo, debug]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, resource, batch]
      exporters: [otlp/swo, debug]

Note that this is where you specify the OpenTelemetry data ingestion endpoint for SolarWinds Observability SaaS. Make sure to replace na-01 with your organization's xx-yy data center code.
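
Recent Collector releases also ship a validate subcommand, which is handy for catching YAML mistakes before bringing up the whole stack. A quick sketch — the dummy token merely satisfies the ${API_TOKEN} expansion:

$ docker run --rm \
    -v $(pwd)/otel-collector-config.yaml:/etc/otel-collector-config.yaml \
    -e API_TOKEN=dummy \
    otel/opentelemetry-collector-contrib:latest \
    validate --config=/etc/otel-collector-config.yaml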

Step 4: Instrument your application

Next, you'll need to ensure your application is instrumented to emit telemetry via OTLP.

Add project dependencies

For the Node.js app used in this walkthrough, we started by installing the OpenTelemetry SDK for Node.js. We updated the package.json file to include the following dependencies:

"@opentelemetry/api": "1.7.0",
    "@opentelemetry/auto-instrumentations-node": "0.41.1",
    "@opentelemetry/exporter-trace-otlp-http": "0.48.0",
    "@opentelemetry/exporter-logs-otlp-http": "0.48.0",
    "@opentelemetry/exporter-metrics-otlp-http": "0.48.0",
    "@opentelemetry/instrumentation-express": "0.35.0",
    "@opentelemetry/instrumentation-http": "0.48.0",
    "@opentelemetry/instrumentation-pg": "0.35.0",
    "@opentelemetry/instrumentation-redis": "0.35.0",
    "@opentelemetry/resources": "1.21.0",
    "@opentelemetry/sdk-logs": "0.48.0",
    "@opentelemetry/sdk-metrics": "1.21.0",
    "@opentelemetry/sdk-node": "0.48.0",
    "@opentelemetry/sdk-trace-base": "1.21.0",
    "@opentelemetry/sdk-trace-node": "1.21.0",
    "@opentelemetry/semantic-conventions": "1.21.0",

Initialize the OpenTelemetry SDK

Then, we created a helper file (src/telemetry.js) to initialize the SDK and set up our instrumentation. This also sets up basic tracing.

const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { PgInstrumentation } = require('@opentelemetry/instrumentation-pg');
const { RedisInstrumentation } = require('@opentelemetry/instrumentation-redis');

async function setupTelemetry() {
    const resource = new Resource({
        [SemanticResourceAttributes.SERVICE_NAME]: 'product-inventory-api',
        [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
    });

    const traceExporter = new OTLPTraceExporter({
        url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT + '/v1/traces',
    });

    const sdk = new NodeSDK({
        resource,
        traceExporter,
        instrumentations: [
            getNodeAutoInstrumentations(),
            new PgInstrumentation(),
            new RedisInstrumentation()
        ],
    });

    try {
        // Start the SDK
        await sdk.start();
        console.log('Tracing initialized');

        // Gracefully shut down the SDK on process exit
        process.on('SIGTERM', () => {
            sdk.shutdown()
                .then(() => console.log('Tracing terminated'))
                .catch((error) => console.log('Error terminating tracing', error))
                .finally(() => process.exit(0));
        });

        return { sdk };
    } catch (error) {
        console.error('Error initializing tracing:', error);
        throw error;
    }
}

module.exports = { setupTelemetry };
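
One caveat worth noting: auto-instrumentation patches modules as they’re loaded, so the SDK should start before express, pg, or redis are required. Here’s a minimal, hypothetical entry-point sketch that honors this ordering (the demo’s actual src/index.js may be structured differently):

// src/index.js (sketch) — start telemetry before loading instrumented modules
const { setupTelemetry } = require('./telemetry');

async function main() {
    await setupTelemetry();

    // Required only after the SDK starts, so http/express/pg/redis get patched
    const express = require('express');
    const app = express();

    // … routes, DB pools, and middleware go here …

    app.listen(3000, () => console.log('API listening on port 3000'));
}

main().catch((err) => {
    console.error('Fatal startup error:', err);
    process.exit(1);
});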

Use the OTLPLogExporter

We also modified our logger (src/logger.js), which was originally just a simple winston setup, to use the OTLPLogExporter.

const winston = require('winston');
const { LoggerProvider, SimpleLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-http');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');

// Create OpenTelemetry LoggerProvider
const loggerProvider = new LoggerProvider({
    resource: new Resource({
        [SemanticResourceAttributes.SERVICE_NAME]: 'product-inventory-api',
        [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
        [SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: process.env.NODE_ENV || 'development'
    }),
});

// Create OTLP Log Exporter
// (When exporting through the local Collector, the Collector attaches the
// SolarWinds token itself; this header only matters if the exporter points
// directly at the backend.)
const logExporter = new OTLPLogExporter({
    url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT + '/v1/logs',
    headers: {
        'Authorization': `Bearer ${process.env.API_TOKEN}`
    }
});

// Add the log processor to the logger provider
loggerProvider.addLogRecordProcessor(
    new SimpleLogRecordProcessor(logExporter)
);

// Get the OpenTelemetry logger
const otelLogger = loggerProvider.getLogger('product-inventory-api');

// Create Winston logger for console output
const winstonLogger = winston.createLogger({
    level: process.env.LOG_LEVEL || 'info',
    format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.printf(({ timestamp, level, ...meta }) => {
            return JSON.stringify({
                timestamp,
                level,
                ...meta
            }, null, 2);
        })
    ),
    defaultMeta: { service: 'product-inventory-api' },
    transports: [
        new winston.transports.Console()
    ]
});

// Create a combined logger that sends to both Winston and OpenTelemetry.
// The four level methods are identical apart from the level name, so a
// small factory builds them.
const makeLogMethod = (level) => (logData) => {
    const logObject = typeof logData === 'string' ? { message: logData } : logData;
    winstonLogger[level](logObject);
    otelLogger.emit({
        severityText: level.toUpperCase(),
        body: JSON.stringify(logObject),
        attributes: {
            ...logObject,
            logLevel: level,
            environment: process.env.NODE_ENV || 'development'
        }
    });
};

const logger = {
    info: makeLogMethod('info'),
    error: makeLogMethod('error'),
    warn: makeLogMethod('warn'),
    debug: makeLogMethod('debug')
};

module.exports = { logger, loggerProvider };
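
Usage is then just structured logging; each call goes both to the console (via Winston) and into the OTLP pipeline. For instance (the field names here are illustrative):

const { logger } = require('./logger');

logger.info({ message: 'Product created', productId: 42 });
logger.error('Failed to connect to Redis'); // plain strings are wrapped as { message }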

Use the OTLPMetricExporter

We also created a metrics helper (src/metrics.js), using OTLPMetricExporter to capture some key metrics for our Node.js application.

const { MeterProvider, PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
const { metrics } = require('@opentelemetry/api');
const os = require('os');

// Create OTLP Metric Exporter (as with logs, the auth header is only needed
// if exporting directly to the backend instead of via the Collector)
const metricExporter = new OTLPMetricExporter({
    url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT + '/v1/metrics',
    headers: {
        'Authorization': `Bearer ${process.env.API_TOKEN}`
    }
});

// Create a meter provider with the exporter
const meterProvider = new MeterProvider({
    resource: new Resource({
        [SemanticResourceAttributes.SERVICE_NAME]: 'product-inventory-api',
        [SemanticResourceAttributes.SERVICE_VERSION]: '1.0.0',
        [SemanticResourceAttributes.DEPLOYMENT_ENVIRONMENT]: process.env.NODE_ENV || 'development'
    })
});

// Register the metric exporter with a periodic reader
meterProvider.addMetricReader(new PeriodicExportingMetricReader({
    exporter: metricExporter,
    exportIntervalMillis: 1000 // Export metrics every second
}));

// Set the global meter provider
metrics.setGlobalMeterProvider(meterProvider);

// Get a meter instance
const meter = metrics.getMeter('product-inventory-api');

// Create metrics
const requestCounter = meter.createCounter('http.requests.total', {
    description: 'Total number of HTTP requests',
    unit: '1'
});

const errorCounter = meter.createCounter('http.errors.total', {
    description: 'Total number of HTTP errors',
    unit: '1'
});

const requestDuration = meter.createHistogram('http.request.duration', {
    description: 'HTTP request duration in milliseconds',
    unit: 'ms'
});

const dbQueryDuration = meter.createHistogram('db.query.duration', {
    description: 'Database query duration in milliseconds',
    unit: 'ms'
});

const cacheOperationDuration = meter.createHistogram('cache.operation.duration', {
    description: 'Cache operation duration in milliseconds',
    unit: 'ms'
});

const cacheHitCounter = meter.createCounter('cache.hits.total', {
    description: 'Total number of cache hits',
    unit: '1'
});

const cacheMissCounter = meter.createCounter('cache.misses.total', {
    description: 'Total number of cache misses',
    unit: '1'
});

// System metrics
const cpuUsage = meter.createObservableGauge('system.cpu.usage', {
    description: 'CPU usage percentage',
    unit: '%'
});

const memoryUsage = meter.createObservableGauge('system.memory.usage', {
    description: 'Memory usage in bytes',
    unit: 'bytes'
});

// Register system metrics using individual callbacks
cpuUsage.addCallback((result) => {
    // Approximate CPU usage from the 1-minute load average per core
    const cpuUsagePercent = os.loadavg()[0] * 100 / os.cpus().length;
    result.observe(cpuUsagePercent, {
        type: 'process'
    });
});

memoryUsage.addCallback((result) => {
    const memUsage = process.memoryUsage();
    result.observe(memUsage.heapUsed, {
        type: 'heap'
    });
    result.observe(memUsage.rss, {
        type: 'rss'
    });
});

// Create a middleware to track HTTP metrics
const metricsMiddleware = (req, res, next) => {
    const startTime = Date.now();
    const path = req.route?.path || req.path;
    const method = req.method;

    // Record the request once, when the response finishes, so the counter
    // reflects the true request total (counters only add; an extra "pending"
    // increment here would double-count every request)
    res.on('finish', () => {
        const duration = Date.now() - startTime;
        const status = res.statusCode;

        // Record request duration
        requestDuration.record(duration, {
            method,
            path,
            status: status.toString()
        });

        // Count the request with its final status
        requestCounter.add(1, {
            method,
            path,
            status: status.toString()
        });

        // Increment error counter for 4xx and 5xx status codes
        if (status >= 400) {
            errorCounter.add(1, {
                method,
                path,
                status: status.toString()
            });
        }
    });

    next();
};

module.exports = {
    meter,
    requestCounter,
    errorCounter,
    requestDuration,
    dbQueryDuration,
    cacheOperationDuration,
    cacheHitCounter,
    cacheMissCounter,
    metricsMiddleware
};
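
Registering the middleware in Express is a single app.use call; here’s a short sketch of how src/index.js might wire it up:

const express = require('express');
const { metricsMiddleware } = require('./metrics');

const app = express();
app.use(metricsMiddleware); // records request counts, durations, and errors for every route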

Finally, we sprinkled the actual logging and metric instrumentation calls throughout our main application code (src/index.js). For example:

const { setupTelemetry } = require('./telemetry');
const { logger } = require('./logger');
const { metricsMiddleware, dbQueryDuration, cacheOperationDuration, cacheHitCounter, cacheMissCounter } = require('./metrics');

async function startServer() {
    try {
        await setupTelemetry();

        …


        // Wrapper functions for Redis metrics
        async function getCachedDataWithMetrics(key) {
            const startTime = Date.now();
            try {
                const result = await redisClient.get(key);
                const duration = Date.now() - startTime;
               
                cacheOperationDuration.record(duration, {
                    operation: 'get',
                    key
                });
               
                if (result) {
                    cacheHitCounter.add(1, { key });
                    return JSON.parse(result);
                } else {
                    cacheMissCounter.add(1, { key });
                    return null;
                }
            } catch (error) {
                const duration = Date.now() - startTime;
                cacheOperationDuration.record(duration, {
                    operation: 'get',
                    key,
                    error: error.message
                });
                throw error;
            }
        }
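
        // The route further below also calls setCachedDataWithMetrics, which
        // this excerpt doesn't show. A plausible companion sketch (the demo's
        // actual implementation may differ), using node-redis v4's
        // set-with-TTL and the same duration histogram:
        async function setCachedDataWithMetrics(key, value, ttlSeconds = 60) {
            const startTime = Date.now();
            try {
                await redisClient.set(key, JSON.stringify(value), { EX: ttlSeconds });
                cacheOperationDuration.record(Date.now() - startTime, {
                    operation: 'set',
                    key
                });
            } catch (error) {
                cacheOperationDuration.record(Date.now() - startTime, {
                    operation: 'set',
                    key,
                    error: error.message
                });
                throw error;
            }
        }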

        …

        // DB queries with metrics capture added

        async function executeQuery(query, params) {
            const startTime = Date.now();
            try {
                const result = await pool.query(query, params);
                const duration = Date.now() - startTime;
               
                dbQueryDuration.record(duration, {
                    query: query.split(' ')[0].toLowerCase(), // SELECT, INSERT, UPDATE, DELETE
                    table: query.includes('FROM') ? query.split('FROM')[1].split(' ')[1] : 'unknown'
                });
               
                return result;
            } catch (error) {
                const duration = Date.now() - startTime;
                dbQueryDuration.record(duration, {
                    query: query.split(' ')[0].toLowerCase(),
                    table: query.includes('FROM') ? query.split('FROM')[1].split(' ')[1] : 'unknown',
                    error: error.message
                });
                throw error;
            }
        }

        // Get a single product by ID
        app.get('/api/products/:id', async (req, res) => {
            const requestId = Math.random().toString(36).substring(7);
            try {
                const { id } = req.params;
                const cacheKey = `product:${id}`;
                const startTime = Date.now();

                // Try to get from cache first
                const cacheStart = Date.now();
                logger.debug({
                    message: "Cache lookup",
                    requestId,
                    cacheKey,
                    cacheType: "single-product",
                    productId: id
                });
                const cachedData = await getCachedDataWithMetrics(cacheKey);
                const cacheDuration = Date.now() - cacheStart;

                if (cachedData) {
                    const totalDuration = Date.now() - startTime;
                    logger.info({
                        message: "Cache hit",
                        requestId,
                        cacheKey,
                        cacheType: "single-product",
                        productId: id,
                        performance: {
                            cacheDuration: `${cacheDuration}ms`,
                            totalDuration: `${totalDuration}ms`
                        },
                        data: {
                            product: {
                                id: cachedData.id,
                                name: cachedData.name,
                                category: cachedData.category,
                                price: cachedData.price,
                                quantity: cachedData.quantity
                            }
                        }
                    });
                    return res.json(cachedData);
                }

                logger.info({
                    message: "Cache miss",
                    requestId,
                    cacheKey,
                    cacheType: "single-product",
                    productId: id,
                    performance: {
                        cacheDuration: `${cacheDuration}ms`
                    }
                });

                const dbStart = Date.now();
                const result = await executeQuery('SELECT * FROM products WHERE id = $1', [id]);
                const dbDuration = Date.now() - dbStart;
               
                if (result.rows.length === 0) {
                    const totalDuration = Date.now() - startTime;
                    logger.warn({
                        message: "Product not found",
                        requestId,
                        productId: id,
                        performance: {
                            totalDuration: `${totalDuration}ms`
                        }
                    });
                    return res.status(404).json({ error: 'Product not found' });
                }

                // Cache the result
                const cacheSetStart = Date.now();
                logger.debug({
                    message: "Caching result",
                    requestId,
                    cacheKey,
                    cacheType: "single-product",
                    productId: id,
                    data: {
                        product: {
                            name: result.rows[0].name,
                            category: result.rows[0].category,
                            price: result.rows[0].price,
                            quantity: result.rows[0].quantity
                        }
                    }
                });
                await setCachedDataWithMetrics(cacheKey, result.rows[0]);
                const cacheSetDuration = Date.now() - cacheSetStart;
               
                const totalDuration = Date.now() - startTime;
                logger.info({
                    message: "Database fetch",
                    requestId,
                    cacheType: "single-product",
                    productId: id,
                    performance: {
                        dbDuration: `${dbDuration}ms`,
                        cacheSetDuration: `${cacheSetDuration}ms`,
                        totalDuration: `${totalDuration}ms`
                    },
                    data: {
                        product: {
                            id: result.rows[0].id,
                            name: result.rows[0].name,
                            category: result.rows[0].category,
                            price: result.rows[0].price,
                            quantity: result.rows[0].quantity
                        }
                    }
                });
                res.json(result.rows[0]);
            } catch (error) {
                logger.error({
                    message: "Error fetching product:",
                    requestId,
                    error: {
                        message: error.message,
                        stack: error.stack,
                        id: req.params.id,
                        timestamp: new Date().toISOString(),
                        errorType: error.name,
                        errorCode: error.code
                    }
                });
                res.status(500).json({ error: 'Internal server error' });
            }
        });

        …

Step 5: Use Docker Compose to bring everything up

With the application instrumented with OpenTelemetry to emit logs, traces, and metrics, it’s time to orchestrate the full setup with Docker Compose. This simplifies the process of spinning up the collector, the app, and supporting services.

To spin up the full application, make sure to pass your SolarWinds Observability SaaS data ingestion API token as an environment variable. First, run this command:

$ export API_TOKEN=replace-this-with-your-API-token

Then, run the following command:

$ docker-compose up --build -d

With the application up and running, you can use curl to send requests to your API, currently running at localhost:3000. For this walkthrough’s convenience, we built a few scripts (demo.sh and demo_continuous.sh) to automate a series of API calls.
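
For instance, you can fetch a single product like this (product ID 1 is just an assumption about the seeded data):

$ curl http://localhost:3000/api/products/1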

Step 6: Verify that telemetry is flowing

Your dockerized application is up and running, and the OpenTelemetry Collector (running in its own Docker container) is receiving data and exporting it to SolarWinds Observability SaaS. Because the configuration includes the debug exporter with detailed verbosity, a quick local check is to tail the Collector’s logs and watch received spans, metrics, and log records stream by:
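
$ docker-compose logs -f otel-collector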

To verify end to end, navigate to the APM page in SolarWinds Observability SaaS. There, you’ll see your application listed.

When you click on the application, you can see from the tab bar that there are associated traces, metrics, and logs.

View trace information

Clicking on the Traces tab shows you traces that were captured by your instrumentation.

You can click on the details for any trace to see more information.

View log information

Returning to the application overview, you can navigate to the Logs tab. This shows the log messages captured by the OpenTelemetry Collector and subsequently exported to SolarWinds Observability SaaS. You can also click Logs in the main left sidebar navigation.

View metric information

You can view captured metrics either by clicking the Metrics tab in the application overview or through Analyze > Metrics in the main sidebar navigation.

This will show your custom metrics, any metrics from auto-instrumentation, and SolarWinds Observability SaaS platform metrics.

Clicking on an individual metric will show a chart of that metric over time.

Create custom dashboards

You can create custom dashboards to visualize multiple metrics for correlation and better understanding of your telemetry data. On the Dashboards page, click Create Dashboard. Select the Standard dashboard type and click Next.

An empty dashboard will be created. Click Save. Specify a dashboard name, and then click Save again.

Returning to the list of metrics for your application, you can choose any metric of interest, open its action menu, and select Send to Dashboard.

Multiple metric series can be overlaid upon one another within a single chart.

Wrap-up: Move from local setup to production observability

In this walkthrough, we deployed the OpenTelemetry Collector in a Docker container, configured it to receive telemetry from a containerized app, and forwarded that data to SolarWinds Observability SaaS.

This setup mirrors what you might use in production—centralized, decoupled, and flexible. When you’re ready to level up your observability pipeline, try out SolarWinds Observability SaaS to see the complete picture of your system in real time.
