
Splunk Lantern

Getting Started with Splunk Log Observer

Getting Data In

To use Splunk Log Observer, you must first get data in.

Step 1: Collect Infrastructure Data with an OpenTelemetry Collector

Observability Cloud supports integrations for Kubernetes, Linux, and Windows. Integrations for these data sources help you deploy a Splunk OpenTelemetry Collector to export metrics from hosts and containers to Observability Cloud.
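As a rough illustration of what a deployed collector's configuration might look like, here is a minimal sketch of an OpenTelemetry Collector pipeline that scrapes host metrics and tails a log file, exporting both to Splunk Observability Cloud. The realm (`us0`), the log path, and the use of the `signalfx` and `splunk_hec` exporters are assumptions for this sketch; use the configuration generated by your integration's guided setup rather than copying this directly.

```yaml
# Sketch only: collect host metrics and one application log file,
# then export to Splunk Observability Cloud. Realm, token variable,
# and the log path are placeholders.
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
  filelog:
    include: [/var/log/myapp/*.log]   # hypothetical application log path

exporters:
  signalfx:                           # metrics exporter
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: us0
  splunk_hec:                         # log exporter
    token: ${SPLUNK_ACCESS_TOKEN}
    endpoint: https://ingest.us0.signalfx.com/v1/log

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [signalfx]
    logs:
      receivers: [filelog]
      exporters: [splunk_hec]
```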

Step 2: Verify successful data ingestion

Verify successful ingestion by filtering or aggregating the available log data. These basic functions let you drill deeper into the ingested logs to determine whether the data arrived as expected.

To do this, select the Add Filter button at the top of the search header in the Log Observer UI, then add a filter on a value you know should be present in the ingested log data.
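One practical way to have a known value to filter on is to send a test event that carries a unique marker, then add a filter on that marker in the Log Observer UI. The sketch below shows the idea; the ingest URL, realm, header name, and field names are assumptions for illustration, so check your organization's ingest endpoint and access token before using anything like this.

```python
import json
import time
import urllib.request

# Hypothetical ingest endpoint for this sketch -- substitute your realm.
INGEST_URL = "https://ingest.us0.signalfx.com/v1/log"


def build_test_event(marker: str) -> dict:
    """Build a log event carrying a marker value we can filter on later."""
    return {
        "timestamp": int(time.time() * 1000),  # epoch milliseconds
        "message": f"log-observer ingestion check {marker}",
        "attributes": {"test_marker": marker},
    }


def send_test_event(marker: str, token: str) -> None:
    """POST the event; afterwards, filter on test_marker in Log Observer."""
    body = json.dumps([build_test_event(marker)]).encode()
    req = urllib.request.Request(
        INGEST_URL,
        data=body,
        headers={"Content-Type": "application/json", "X-SF-Token": token},
    )
    urllib.request.urlopen(req)  # network call; requires a valid token

# Usage (requires a real token and endpoint):
# send_test_event("ingest-check-001", token="YOUR_ACCESS_TOKEN")
```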

Once you’re satisfied with how the data is ingested and presented in Log Observer, you have completed the Getting Data In step.

Completing Foundational Training

The next step is to complete the relevant training through Splunk Education.

Step 1: Register for Splunk EDU Course: Using the Splunk Log Observer

This course is designed for developers responsible for debugging their own applications and for SREs responsible for troubleshooting performance issues. Splunk Log Observer is built primarily for DevOps teams working on applications built with modern tech stacks (containerized microservices). The course describes how to work with log data using the no-code user interface. You will learn to create, save, and share search filters; investigate the shape of your log data; analyze logs with aggregation functions and group-by rules; and create rules to manipulate incoming data and generate synthetic metrics from log data.
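To give a feel for what an aggregation function with a group-by rule produces, the sketch below performs the same kind of count-by-group calculation over a hypothetical in-memory list of parsed log records. In Log Observer you build this in the no-code UI; the code here is purely illustrative.

```python
from collections import Counter

# Hypothetical parsed log records, as if extracted from ingested log lines.
logs = [
    {"service": "checkout", "severity": "ERROR"},
    {"service": "checkout", "severity": "INFO"},
    {"service": "payments", "severity": "ERROR"},
    {"service": "checkout", "severity": "ERROR"},
]

# Aggregation function COUNT, grouped by the "service" field,
# filtered to records where severity == ERROR.
errors_by_service = Counter(
    r["service"] for r in logs if r["severity"] == "ERROR"
)
print(errors_by_service)  # Counter({'checkout': 2, 'payments': 1})
```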

Step 2: Attend and Complete Foundational Training

Mark your calendar for your scheduled training session and make sure you attend. The training covers how to get the most out of Log Observer, so completing it is important.

Monitoring and Troubleshooting

Now that your logs are being ingested successfully, let's shift focus to monitoring. Specific features help you achieve quick and easy exploration, monitoring, and troubleshooting of logs.

Explore the Live Tail Feature

Live Tail is a powerful way to explore, monitor, and troubleshoot logs in a real-time stream, meaning the logs appear in the UI as they happen and are continuously updated as more logs are ingested. You can slow down or speed up the stream to a pace that works best for your purposes. Beyond that, you can filter the log stream and apply keywords to highlight them in the log lines. This is helpful for locating the log lines you need in the stream, or simply for staying on top of errors, affected services, and other relevant information embedded in the logs.
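Conceptually, Live Tail's keyword highlighting scans each incoming log line and marks matched keywords so they stand out in the stream. The sketch below mimics that behavior over a small hypothetical stream; the function name and the `>>...<<` marker format are illustrative, not part of the product.

```python
import re

def highlight(line: str, keywords: list[str]) -> str:
    """Wrap each keyword occurrence in >>...<< so it stands out."""
    for kw in keywords:
        line = re.sub(re.escape(kw), lambda m: f">>{m.group(0)}<<", line)
    return line

# A hypothetical slice of a real-time log stream.
stream = [
    "2024-05-01T12:00:00Z INFO checkout started",
    "2024-05-01T12:00:01Z ERROR payment gateway timeout",
]
for line in stream:
    print(highlight(line, ["ERROR", "timeout"]))
# The second line prints as:
# 2024-05-01T12:00:01Z >>ERROR<< payment gateway >>timeout<<
```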
