Running the Splunk OpenTelemetry Collector on Darwin

 

Your company runs on Darwin (Mac OS X) and you are interested in using the Splunk-Otel-Collector or the upstream OpenTelemetry Collector on this operating system. You need configuration guidance.

Solution

The full configuration involves some manual steps to build a custom Darwin (Mac OS X) binary of the Splunk distribution of the OpenTelemetry Collector, targeting either AMD64 or ARM64.

If you want to get started without compiling software and wrangling dependencies, you can use a pre-compiled binary of the upstream OpenTelemetry Collector instead, but that option does not include the additional functionality bundled with the Splunk-Otel-Collector.
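
The upstream project publishes pre-built Darwin binaries on its opentelemetry-collector-releases GitHub page. A minimal sketch of fetching one follows; the version number and asset name are illustrative, so check the releases page for current values:

    curl -L -O https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.96.0/otelcol_0.96.0_darwin_arm64.tar.gz
    tar -xzf otelcol_0.96.0_darwin_arm64.tar.gz
    ./otelcol --version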

Prerequisites

  • Git
  • Xcode
  • Xcode developer tools
  • Golang 

If you have a HEC endpoint available, logs can also be configured, but that is not covered in this instruction set.
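
On a typical macOS machine, the prerequisites above can be installed with the Xcode command line developer tools and Homebrew (Xcode itself comes from the Mac App Store, and Homebrew is assumed to already be present):

    xcode-select --install    # Xcode command line developer tools
    brew install git go       # Git and Golang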

Steps

  1. Clone the Splunk-Otel-Collector GitHub repository.
    git clone https://github.com/signalfx/splunk-otel-collector.git
  2. Change to the Splunk-Otel-Collector directory.
    cd splunk-otel-collector
  3. Modify the Makefile to set CGO_ENABLED=1 in the “otelcol” target using your favorite text editor.
    $ vi Makefile
        …
        .PHONY: otelcol
        otelcol:
           go generate ./...
           GO111MODULE=on CGO_ENABLED=1 go build -trimpath -o ./bin/otelcol_$(GOOS)_$(GOARCH)$(EXTENSION) $(BUILD_INFO)

    The only difference from the original Makefile is that CGO_ENABLED=1 is set on the go build line of the otelcol target, as shown above.
  4. Install the build tools and build the Darwin Splunk-Otel-Collector binary using make:
        make install-tools
        make otelcol
  5. Edit .zshrc to add the Go environment paths if you do not already have them set:
    1. On the command line, open .zshrc with your favorite text editor:
      vi .zshrc
    2. Add the paths to your golang installation. If you don’t know what they are, look at the output of ‘go env’. It should look like this:
      export GOPATH=$HOME/go
      export GOROOT=/usr/local/opt/go/libexec
      export PATH=$PATH:$GOPATH/bin
      export PATH=$PATH:$GOROOT/bin
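    3. Reload the shell configuration and confirm Go resolves from the updated paths (adjust the path if your .zshrc lives somewhere other than your home directory):
      source ~/.zshrc
      go version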
  6. Ensure you have the latest version of the addlicense package installed:
    1. Install the go package addlicense if needed:
      go get -u github.com/google/addlicense
    2. Validate the addlicense go command by calling addlicense on the command line:
      $ addlicense
      Usage: addlicense [flags] pattern [pattern ...]
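    Note: On Go 1.17 and later, go get no longer installs binaries; use go install instead:
      go install github.com/google/addlicense@latest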
  7. Move to the directory containing your clone of the Splunk-Otel-Collector and build the collector with the make command.
    $ cd /your/path/to/Splunk-Otel-Collector/
    $ make -k
  8. Copy the built artifacts from the bin folder within the Splunk-Otel-Collector repository directory to /etc/otel/collector. The bin directory includes an otelcol symlink pointing to the binary built for your architecture (amd64, arm64, and so on).
    $ cd bin
    $ cp -R * /etc/otel/collector
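    If the copy fails because /etc/otel/collector does not exist yet, create it first and re-run the copy (writing under /etc typically requires sudo):
    $ sudo mkdir -p /etc/otel/collector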
  9. Add the environment variables needed for the agent_config OpenTelemetry Collector configuration. The configuration file documents them as follows:
    # If the collector is installed without the Linux/Windows installer script, the following
    # environment variables are required to be manually defined or configured below:
    # - SPLUNK_ACCESS_TOKEN: The Splunk access token to authenticate requests
    # - SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com
    # - SPLUNK_BUNDLE_DIR: The path to the Smart Agent bundle, e.g. /usr/lib/splunk-otel-collector/agent-bundle
    # - SPLUNK_COLLECTD_DIR: The path to the collectd config directory for the Smart Agent, e.g. /usr/lib/splunk-otel-collector/agent-bundle/run/collectd
    # - SPLUNK_HEC_TOKEN: The Splunk HEC authentication token
    # - SPLUNK_HEC_URL: The Splunk HEC endpoint URL, e.g. https://ingest.us0.signalfx.com/v1/log
    # - SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com
    # - SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace
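    For a local Darwin run, you can export a subset of these in the shell before starting the Collector. All values below are placeholders; substitute your own realm and tokens. SPLUNK_MEMORY_LIMIT_MIB is referenced by the memory_limiter processor in the example configuration below.
    export SPLUNK_ACCESS_TOKEN=<access-token>
    export SPLUNK_API_URL=https://api.us0.signalfx.com
    export SPLUNK_INGEST_URL=https://ingest.us0.signalfx.com
    export SPLUNK_TRACE_URL=https://ingest.us0.signalfx.com/v2/trace
    export SPLUNK_MEMORY_LIMIT_MIB=512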
  10. Add or modify an agent_config.yaml in /etc/otel/collector (or your desired path) and then start the Collector. Comment out the logging sections if you are only sending data to Splunk Observability Cloud.
    SPLUNK_API_TOKEN=token SPLUNK_ACCESS_TOKEN=token \
      SPLUNK_API_URL=https://api.us0.signalfx.com \
      SPLUNK_INGEST_URL=https://ingest.us0.signalfx.com/ \
      SPLUNK_TRACE_URL=https://ingest.us0.signalfx.com/v2/trace \
      SPLUNK_COLLECTD_DIR=/usr/local/opt/collectd \
      SPLUNK_REALM=us0 \
      ./otelcol --config=/etc/otel/collector/agent_config.yaml
  11. Click Allow on the macOS dialog that asks whether to allow incoming network connections. Otherwise, the OpenTelemetry Collector will not work properly.
  12. Navigate to the Splunk platform (for logs) or Splunk Observability Cloud (for host metrics) to help validate the Collector configuration. In Splunk Observability Cloud, navigate to Infrastructure > My Data Center > Hosts and filter to the host name of your machine. This opens a more detailed dashboard where you can learn more about the host's CPU, memory, disk, and additional metadata.
  13. In Splunk Application Performance Monitoring, you can explore local applications. Use synthetic trace data to test the local collector and view sample traces, as in the example below.
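    One way to generate a synthetic trace is to POST a span to the Zipkin receiver that the agent_config exposes on port 9411. The span and service names below are arbitrary examples:
    NOW_US=$(($(date +%s) * 1000000))
    curl -X POST http://localhost:9411/api/v2/spans \
      -H 'Content-Type: application/json' \
      -d '[{"id":"352bff9a74ca9ad2","traceId":"5af7183fb1d4cf5f2d3b1c5a8a2f14ce",
            "name":"synthetic-test-span","timestamp":'"$NOW_US"',"duration":100000,
            "kind":"SERVER","localEndpoint":{"serviceName":"darwin-collector-test"}}]'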

If you have Node.js applications automatically instrumented with Splunk tracing, you can also run those against the local Collector and see the service highlighted in Tag Spotlight, as sketched below.
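
A minimal sketch, assuming a Node.js application file named app.js instrumented with the @splunk/otel package (the file name and service name are placeholders):

    npm install @splunk/otel

    OTEL_SERVICE_NAME=my-node-service \
    OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
    node -r @splunk/otel/instrument app.js

The OTLP endpoint above matches the grpc receiver port (4317) in the example configuration below.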

Resources

Example configuration: /etc/otel/collector/agent_config.yaml (logs disabled)

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  smartagent:
    bundleDir: ${SPLUNK_BUNDLE_DIR}
    collectd:
      configDir: ${SPLUNK_COLLECTD_DIR}
  zpages:
    endpoint: 0.0.0.0:55679
  memory_ballast:
    size_in_percentage: 33
receivers:
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_binary:
        endpoint: 0.0.0.0:6832
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268
  sapm:
    endpoint: 0.0.0.0:7276
  zipkin:
    endpoint: 0.0.0.0:9411
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus/internal:
    config:
      scrape_configs:
      - job_name: otel-collector
        scrape_interval: 10s
        static_configs:
        - targets:
          - 0.0.0.0:8888
        metric_relabel_configs:
        - source_labels:
          - __name__
          regex: .*grpc_io.*
          action: drop
  signalfx:
    endpoint: 0.0.0.0:9943
  hostmetrics:
    collection_interval: 10s
    scrapers:
#      cpu:
#      disk:
      filesystem:
      memory:
      network:
      load:
      paging:
      processes:
processors:
  batch:
  memory_limiter:
    check_interval: 2s
    limit_mib: ${SPLUNK_MEMORY_LIMIT_MIB}
  resourcedetection:
    detectors:
    - system
    override: false
  resourcedetection/internal:
    detectors:
    - system
    override: true
  resource/add_environment:
    attributes:
    - action: insert
      value: production
      key: deployment.environment
exporters:
  sapm:
    access_token: ${SPLUNK_ACCESS_TOKEN}
    endpoint: ${SPLUNK_TRACE_URL}
  signalfx:
    access_token: ${SPLUNK_API_TOKEN}
    api_url: ${SPLUNK_API_URL}
    ingest_url: ${SPLUNK_INGEST_URL}
    sync_host_metadata: true
    correlation:
#  logging:
#    loglevel: debug
service:
  extensions:
  - health_check
  - zpages
  - memory_ballast
  pipelines:
    metrics:
      receivers:
      - otlp
      - signalfx
      - hostmetrics
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      exporters:
      - signalfx
#      - logging
    metrics/internal:
      receivers:
      - prometheus/internal
      processors:
      - memory_limiter
      - batch
      - resourcedetection/internal
      exporters:
      - signalfx
    traces:
      receivers:
      - jaeger
      - sapm
      - zipkin
      - otlp
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      - resource/add_environment
      exporters:
      - sapm
      - signalfx
#      - logging
#    logs:
#      receivers:
#      - otlp
#      processors:
#      - memory_limiter
#      - batch
#      - resourcedetection
#      - resource/add_environment
#      exporters:
#      - logging
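
To confirm the Collector came up with this configuration, you can probe the health_check and zpages extensions on the ports defined above:

    curl http://localhost:13133                  # health_check extension; reports collector status
    open http://localhost:55679/debug/tracez     # zpages live trace viewer, opened in the default browser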