Splunk Lantern

Troubleshooting database performance

 

The applications you monitor in Splunk Infrastructure Monitoring use databases. Those databases are monitored by detectors that trigger alerts when database behavior negatively impacts application performance, availability, or reliability. As a CloudOps engineer, site reliability engineer, service developer, or database administrator, when you receive an alert you need to quickly navigate to Splunk Application Performance Monitoring to determine which services are contributing to high infrastructure resource usage or a performance issue.

As part of a DevOps process improvement strategy, you might also be interested in answering the following questions:

  • How can we correlate our database performance monitoring with application performance monitoring, so that when a database incident occurs, its impact on services can be quickly visualized in the context of a service map and the transactions being affected?
  • How can we break down operational silos and improve collaboration between the infrastructure and application teams as part of continuous process improvement?

Solution

You can resolve this scenario with Splunk Application Performance Monitoring’s Database Query Performance capability.

Your company’s leadership wants a proactive operational workflow: one that starts with a database instance issue, such as high CPU consumption or a rising cache miss rate on a database server, and quickly correlates the applications being impacted and the potential source of the issue, such as a bad database query.

Your company has adopted Redis Enterprise clusters for global caching services in support of its microservice architecture and application development framework. Operations leadership wants to ensure that its tier-one application portfolio, including your company's flagship sales portal application, has no observability blind spots related to the new content caching database architecture.

Overall, your improvement goals are:

  • Improve the ability of a CloudOps engineer, SRE, or DBA to quickly identify and correlate database performance with application transaction performance and impact.
  • Improve DevOps process collaboration between database operations teams and application SREs, reducing the number of war rooms required.
  • Improve incident urgency and prioritization quality based on application importance and impact radius - in short, knowing what to work on first.
  • Improve MTTD (mean time to detect) and MTTR (mean time to restore).

Process

Your company’s sales portal consumes a microservice called cartservice, and a new version targeting Redis database caching optimizations has just been deployed. The service development team introduced the new version as part of their DevOps CI/CD pipeline, using a canary deployment methodology.

A Splunk detector has been deployed that alerts when there is a sudden increase in the Redis database instance’s CPU utilization (%). You are the CloudOps engineer who receives the notification.

Here is how the detector is set up: it monitors the Redis CPU Utilization (%) signal history and alerts when the CPU % rises more than 20% above the statistical mean, or norm.

[Screenshot: detector configuration for the Redis CPU Utilization (%) signal]

The detector triggers an alert, in this case when the Redis database instance CPU utilization (%) increases more than 20% above the statistical mean.

[Screenshot: the triggered alert in Splunk Infrastructure Monitoring]
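The alert condition can be sketched in code. This is a minimal illustration assuming a simple rolling-mean baseline; in practice the detector uses Splunk Observability Cloud's built-in analytics, and the function name and sample values here are hypothetical:

```python
def cpu_alert(history, current, threshold_pct=20.0):
    """Fire when the current CPU utilization (%) is more than
    threshold_pct percent above the mean of the historical window."""
    baseline = sum(history) / len(history)
    return current > baseline * (1 + threshold_pct / 100.0)

# Example: a steady ~40% CPU baseline, so the alert threshold is 48%.
history = [38.0, 41.0, 40.0, 39.5, 41.5]  # recent CPU utilization (%) samples
print(cpu_alert(history, 45.0))  # 45% is below 48% -> False
print(cpu_alert(history, 55.0))  # 55% is above 48% -> True
```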

  1. In Splunk Infrastructure Monitoring, navigate to the Redis database impacted to review the dashboard.
  2. Here you can see a spike in CPU Utilization (%), as well as spikes in Operations/sec and Network Bytes/sec.

    [Screenshot: Redis dashboard showing spikes in CPU Utilization (%), Operations/sec, and Network Bytes/sec]

  3. At the bottom of the UI, click the Map for redis tab to open the Service Map, where you can see the services that are consuming the Redis database instance’s resources.

  4. In the Service Map shown below, you can see that the cartservice making database calls to the Redis database is experiencing latency of 2.49s. Click the inferred redis database node in the Service Map to access Database Query Performance information.

    [Screenshot: Service Map showing cartservice calling the redis database]

  5. Latency appears normal for the Redis commands. To drill down deeper, open Database Query Performance by using the expand icon < > at the top-right of this section.

    [Screenshot: Database Query Performance section]

  6. Here you can see that the number of SCAN command requests, 96.5k, is unusually high compared to the other commands, as is the Total Time of 22.3min. From experience, you know that SCANs are not normally used in production services.

    [Screenshot: Database Query Performance showing SCAN request volume and Total Time]

  7. To drill down deeper, open Tag Spotlight: Request Latency by using the expand icon < > at the top-right of this section.

    [Screenshot: Tag Spotlight: Request Latency view]

  8. Looking at the Operation pane in Tag Spotlight you can see the high number of SCAN requests. Double-click SCAN to filter for only SCAN spans.

    [Screenshot: Operation pane in Tag Spotlight]

  9. On the chart showing the filtered SCAN spans, click a high peak.

    [Screenshot: chart of filtered SCAN spans]

  10. This brings up a number of traces. Click a trace to examine it further.

    [Screenshot: list of example traces]

  11. Click the trace UI’s Span Performance tab. Here you can see 128 SCAN spans in a single trace, and one SCAN is taking over a second to complete, which is high. The SCANs are also consuming 89.7% of the total workload. Given this information, you conclude that the SCANs are the probable cause of the spikes in the Redis database instance's CPU Utilization (%), Operations/sec, and Network Bytes/sec, and that they are impacting the database instance's resource consumption and potentially its performance.

    [Screenshot: Span Performance tab for the selected trace]

  12. At this point, you can notify the cartservice development team so they can roll back the cartservice version. The team can then identify what led up to this problem - for example, whether the SCAN was introduced for testing and should not have been part of the new build. In a case like this, the team can remove the SCAN and redeploy, allowing the database instance and cartservice latency metrics to trend back to normal.
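The per-command rollup that Database Query Performance displays, request counts and total time per command, can be approximated from raw span data. This is a hedged sketch; the span fields and durations below are hypothetical and not taken from the trace in this walkthrough:

```python
from collections import defaultdict

def rollup_by_command(spans):
    """Aggregate request count and total time per database command,
    similar to the Requests and Total Time columns in the UI."""
    stats = defaultdict(lambda: {"requests": 0, "total_ms": 0.0})
    for span in spans:
        entry = stats[span["command"]]
        entry["requests"] += 1
        entry["total_ms"] += span["duration_ms"]
    # Sort by total time so the most expensive command surfaces first.
    return sorted(stats.items(), key=lambda kv: kv[1]["total_ms"], reverse=True)

# Hypothetical spans: two fast GETs, one SET, and two slow SCANs.
spans = [
    {"command": "GET", "duration_ms": 0.4},
    {"command": "GET", "duration_ms": 0.6},
    {"command": "SCAN", "duration_ms": 1200.0},
    {"command": "SCAN", "duration_ms": 950.0},
    {"command": "SET", "duration_ms": 0.5},
]
for command, entry in rollup_by_command(spans):
    print(command, entry["requests"], round(entry["total_ms"], 1))
# SCAN dominates total time, mirroring how it stood out in step 6.
```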

Quickly identifying both the services that impacted the Redis database instance's performance and the root cause helps accelerate MTTI (mean time to identify) and MTTR (mean time to restore), with minimal customer impact.

Next steps

You might also be interested in Troubleshooting a service latency issue related to a database query.

To fully unlock the power of Splunk, we strongly recommend our comprehensive Splunk training. At this stage in your journey, we recommend you explore the Splunk Observability Training Courses.

Splunk OnDemand Services: Use these credit-based services for direct access to Splunk technical consultants with a variety of technical services from a pre-defined catalog. Most customers have OnDemand Services per their license support plan. Engage the ODS team at OnDemand-Inquires@splunk.com if you require assistance.