Extracting service insights from APM

 

Review the Splunk Application Performance Monitoring homepage

Now that you have your OpenTelemetry (OTel) Collector in place and your applications instrumented, you should see data populating the out-of-the-box visualizations in Splunk Application Performance Monitoring. Let's get a feel for these highly valuable components of the product.
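If you're still wiring things up, here is a minimal sketch of what instrumentation can look like in Python: manually creating spans and exporting them over OTLP to a local Collector. The service name ("payments"), environment ("dev"), and endpoint are placeholder assumptions for illustration; many teams instead rely on the Splunk distribution's zero-code instrumentation.

```python
# Minimal sketch: manual OpenTelemetry instrumentation in Python, exporting
# spans over OTLP gRPC to a Collector on localhost. Service name, environment,
# and endpoint below are assumptions, not required values.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify this service; APM groups traces by service.name and environment.
resource = Resource.create({
    "service.name": "payments",            # assumption: your service's name
    "deployment.environment": "dev",       # assumption: your environment
})

provider = TracerProvider(resource=resource)
# Batch spans and ship them to a Collector on the default OTLP gRPC port.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("payment.amount", 42.50)  # appears as a span tag in APM
```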

The Splunk Application Performance Monitoring homepage provides a high-density view at the service/workflow level with historical context. Out of the box, this page shows you Top Services by Error Rate, Top Services by Latency (P90), Top Business Workflows by Error Rate, and Top Business Workflows by Duration (P90). For more details, choose a viewing preference: Service Map, Tag Spotlight, or Trace Search.

Learn more about monitoring applications with pre-built dashboards in Splunk APM.

Review the Service Map

The Service Map is a visual representation of your various services and their dependencies. Splunk Application Performance Monitoring automatically discovers your instrumented services and their interactions to present dynamic, real-time service maps of your application’s architecture. Use the service map to make sense of your complex network of services and quickly see, at a glance, where issues may be occurring.

Review Tag Spotlight

Use Tag Spotlight to analyze the performance of your services and discover trends in indexed span tags that contribute to high latency or error rates. You can break down every indexed span tag for a particular service to view its metrics. When you select specific span tag values or a specific time range, you can view representative traces to learn more about an outlying incident.
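As an illustration, a custom span tag is simply a span attribute set at instrumentation time; once you index that tag in your APM settings, Tag Spotlight can break down request rate, error rate, and latency by its values. The tag name "tenant" and the handler below are hypothetical:

```python
# Hypothetical example: attaching a custom span tag ("tenant") so that, once
# indexed in APM, Tag Spotlight can slice performance by its values.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def handle_request(tenant_id: str):
    with tracer.start_as_current_span("handle-request") as span:
        # Each distinct tenant_id becomes a value you can compare in Tag Spotlight.
        span.set_attribute("tenant", tenant_id)
        # ... request handling logic ...
```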

Create service and business workflow detectors

You can dynamically monitor error rate and latency in the services you are tracing with Splunk Application Performance Monitoring, as well as in your business workflows. Let’s walk through configuring a Splunk Application Performance Monitoring service/business workflow detector.

So, what can you configure within a detector? Detectors contain rules that specify:

  • When the detector will be triggered, based on conditions related to the detector’s signal/metric
  • The severity of the alert to be generated by the detector
  • Where notifications should be sent

From there, set up your detector parameters (a scripted equivalent follows this list):

  • Type. Choose what type of detector to create: APM Metric or Infrastructure/Custom Metric.
  • Alert Signal. Define the service metric or business workflow you are trying to alert on: Error Rate or Latency. Here you also define the specific environment and the specific service/endpoint.
  • Alert Condition. Define the condition on the signal/metric that should trigger an alert: Static Threshold or Sudden Change.
  • Alert Settings. These settings depend on which condition you selected and are configured at this step.
  • Alert Message. Define the severity of the alert and customize its message. You can also link to helpful documentation to be delivered with the alert.
  • Alert Recipients. Define who will receive the alert and the delivery method: email, Splunk On-Call, Slack, PagerDuty, Webhook, etc.
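You can configure all of this in the UI, but the same parameters also map onto the detector REST API. Below is a hedged sketch that creates a static-threshold error-rate detector in Splunk Observability Cloud using Python. The realm, token, SignalFlow metric and filters ('service.request.count', 'sf_service', 'sf_error'), threshold, duration, and recipient address are all assumptions for illustration, not a drop-in configuration:

```python
# Sketch: creating a static-threshold error-rate detector via the
# Splunk Observability Cloud REST API. All concrete values below
# (realm, token, metric, filters, thresholds, email) are assumptions.
import requests

REALM = "us1"                        # assumption: your Observability Cloud realm
SFX_TOKEN = "YOUR_ORG_ACCESS_TOKEN"  # assumption: an org token with write access

# SignalFlow program: the alert signal (error rate) plus the alert condition
# (static threshold sustained for 5 minutes).
program_text = """
errors = data('service.request.count', filter=filter('sf_error', 'true') and filter('sf_service', 'checkout')).sum()
total = data('service.request.count', filter=filter('sf_service', 'checkout')).sum()
error_rate = (errors / total * 100).publish(label='error_rate')
detect(when(error_rate > 10, lasting='5m')).publish('checkout_high_error_rate')
"""

detector = {
    "name": "checkout service - high error rate",
    "programText": program_text,
    "rules": [
        {
            "detectLabel": "checkout_high_error_rate",  # must match the publish() label
            "severity": "Critical",                      # the alert message's severity
            "description": "Error rate above 10% for 5 minutes",
            "notifications": [{"type": "Email", "email": "oncall@example.com"}],
        }
    ],
}

resp = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/detector",
    headers={"X-SF-TOKEN": SFX_TOKEN, "Content-Type": "application/json"},
    json=detector,
)
resp.raise_for_status()
print("Created detector:", resp.json()["id"])
```

Note how the pieces line up with the steps above: the programText carries the alert signal and condition, the rule carries the severity and message, and the notifications list carries the recipients.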

Learn more about detectors in Splunk APM and observability at large.