
Getting Kubernetes log data Into Splunk Cloud Platform with OpenTelemetry

 

Splunk Cloud Platform offers many ways to get logs into the platform. For applications running in Kubernetes, we recommend using native OpenTelemetry logging capabilities, which are included as part of the Splunk Distribution of the OpenTelemetry Collector.

This article uses the Astronomy Shop OpenTelemetry Demo example configured in Setting up the OpenTelemetry Demo in Kubernetes. If you don't have an OpenTelemetry Demo application set up in your environment, follow that article first to set one up.

Create a custom values.yaml file

We’ll enable native OpenTelemetry logging by creating a custom YAML file, named values.yaml, with the following content:

splunkPlatform:
  token: <your HEC token> 
  endpoint: https://<your HEC endpoint>:<HEC port>/services/collector
  index: astronomyshop
  insecureSkipVerify: false
splunkObservability:
  accessToken: <your access token> 
  realm: <your realm e.g. us1, eu0, etc.>
clusterName: minikube
logsEngine: otel

You’ll need to add your HEC token and HEC endpoint for Splunk Cloud Platform, as well as your access token and realm for Splunk Observability Cloud.

If you’re using a trial instance of Splunk Cloud Platform, set the insecureSkipVerify attribute in the example above to true. This is required because a trial instance uses a self-signed certificate; skipping certificate verification is not intended for production use.
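
In that case, only the last attribute of the splunkPlatform section changes; the rest of the file stays as shown above (a minimal sketch reusing the same placeholders):

splunkPlatform:
  token: <your HEC token>
  endpoint: https://<your HEC endpoint>:<HEC port>/services/collector
  index: astronomyshop
  insecureSkipVerify: true    # trial instances use a self-signed certificate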

This tells the Splunk Distribution of the OpenTelemetry Collector running in our Kubernetes cluster to collect logs using native OpenTelemetry capabilities and send them to our Splunk Cloud Platform instance via the HEC endpoint.

Get the Helm release name

Next, run the helm list command to get the name of your Helm release. The output should look something like this:

NAME                              NAMESPACE  REVISION  UPDATED                               STATUS    CHART                         APP VERSION
splunk-otel-collector-1692055334  default    1         2023-08-14 16:22:14.836667 -0700 PDT  deployed  splunk-otel-collector-0.82.0  0.82.0

In our example, the Helm release name is splunk-otel-collector-1692055334.
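
If you'd rather not copy the release name by hand, you can capture it in a shell variable for the commands that follow (a convenience sketch; it assumes only one splunk-otel-collector release is installed in the current namespace):

# Store the Helm release name for use in later commands
RELEASE_NAME=$(helm list -q | grep splunk-otel-collector)
echo "$RELEASE_NAME"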

Update the Helm release

Then, run the following command to update the Helm release using our custom values.yaml file.

helm upgrade <helm release name> -f values.yaml splunk-otel-collector-chart/splunk-otel-collector
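
With the release name from our example, the command would look something like this (it assumes the splunk-otel-collector-chart Helm repository was already added when the demo application was installed):

helm upgrade splunk-otel-collector-1692055334 -f values.yaml splunk-otel-collector-chart/splunk-otel-collector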

Verify log data is flowing

After a minute or so, we should see logs flowing into Splunk Cloud Platform.
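
A simple search against the index defined in values.yaml should return events from the demo services (assuming the astronomyshop index exists in your Splunk Cloud Platform instance):

index=astronomyshop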


We can also run a count by sourcetype to ensure that we're capturing logs from all services in the demo application.
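
A search along these lines should show one sourcetype per demo service (again assuming the astronomyshop index from values.yaml):

index=astronomyshop | stats count by sourcetype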



With just a few changes, we were able to update our Helm release and configure native OpenTelemetry logging. The Splunk Distribution of the OpenTelemetry Collector did the heavy lifting and ensured that all of the application logs from our Astronomy Shop demo are sent to Splunk Cloud Platform.

Cleanup

If you want, you can clean up the application deployment by running the following commands.

kubectl delete --namespace otel-demo -f ./splunk/opentelemetry-demo.yaml
helm uninstall <helm release name> 
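
To confirm the cleanup worked, you can check that no demo pods remain in the otel-demo namespace:

kubectl get pods --namespace otel-demo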

Next steps

In this article, we showed how native OpenTelemetry logging capabilities included with the Splunk Distribution of the OpenTelemetry Collector can be used to effortlessly bring logs into Splunk Cloud Platform.

You might be interested in how to do the same in Docker. If so, see Getting Docker log data Into Splunk Cloud Platform with OpenTelemetry.

You might also be interested in configuring Splunk Log Observer Connect to bring the logs into Splunk Observability Cloud, and then using correlated log, trace, and metric data in Splunk Observability Cloud to rapidly troubleshoot application issues.