Optimizing search

 

Slow searches can be caused by inefficient search practices, but they can also be caused by poor data quality. Inefficiencies such as incorrect event breaks and timestamp errors in the data can cause indexers to work overtime, both when indexing data and when finding search results. Resolve these issues to improve performance.

Use the Monitoring Console to look for performance issues

Splunk Enterprise

The Monitoring Console comes with preconfigured health checks in addition to platform alerts. You can modify existing health checks or create new ones. You can interpret results in the following dashboards to identify ways to optimize and troubleshoot your deployment.

  • Search activity dashboards. The Search Activity: Instance and Search Activity: Deployment dashboards show search activity across your deployment with detailed information broken down by instance.
  • Scheduler activity dashboards. The Scheduler activity: Deployment dashboard shows information about past executions of scheduled searches and their success rates. If you have a search head cluster, the Search head clustering: Scheduler delegation dashboard shows how the captain orchestrates scheduler jobs.
  • Indexing performance dashboards. The Indexing performance: Deployment and Indexing performance: Instance dashboards show indexing performance across the deployment.

Splunk Cloud Platform

The Splunk Cloud Platform Monitoring Console (CMC) dashboards enable you to monitor Splunk Cloud Platform deployment health and to enable platform alerts. You can modify existing alerts or create new ones. You can interpret results in these dashboards to identify ways to optimize and troubleshoot your deployment.

  • Search usage statistics. This dashboard shows search activity across your deployment with detailed information broken down by instance.
  • Scheduler activity. This dashboard shows information about scheduled search jobs (reports), and you can configure the priority of scheduled reports.
  • Forwarders: Instance and Forwarders: Deployment. These dashboards show information about forwarder connections and status. Read about how to troubleshoot forwarder/receiver connections in Forwarding Data.

Improve your searches

  • Select an index in the first line of your search. The computational effort of a search is greatest at the beginning, so searching across all indexes (index=*) slows down a search significantly.
  • Use the TERM directive. Major breakers, such as a comma or quotation mark, split your search terms, increasing the number of false positives. For example, searching for average=0.9* searches for 0 and 9*, while searching for TERM(average=0.9*) searches for the literal term average=0.9*. If you aren't sure what terms exist in your logs, you can use the walklex command (available in version 7.3 and higher) to inspect them. You can use the TERM directive when searching raw data or when using the tstats command, as shown in the example searches after this list.

    If you never use the TERM directive, you can turn off the minor breakers in your segmenters.conf file by moving all the minor breakers to the major breakers field in the [search] section of this configuration file. Doing so reduces bucket size but increases CPU usage.

  • Use the tstats command. The tstats command performs statistical queries on indexed fields, so it's much faster than searching raw data. The limitation is that because it requires indexed fields, you can't use it with fields that are only extracted at search time. However, if you are on version 8.0 or higher, you can use the PREFIX directive instead of the TERM directive to aggregate on raw terms that have not been indexed as fields while using the tstats command. PREFIX matches a common string that precedes a certain value type.
  • Avoid using the table command in the middle of searches; instead, place it at the end. table is a reporting command that causes data to be pushed to the search head, which then performs the work. It's usually more efficient to distribute the search load among the indexers, which can take advantage of MapReduce.
  • Test your search string performance. The Search Performance Evaluator dashboard allows you to evaluate your search strings on key metrics, such as run duration (faster is better), the percentage of buckets eliminated from a search (bigger is better), and the percentage of events dropped by schema on the fly (lower is better).
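
Here is a minimal sketch of these techniques in practice. The index, source type, and field names (web, access_combined, status, host) are illustrative assumptions, not values from your environment:

index=web TERM(average=0.9*)

| tstats count WHERE index=web BY PREFIX(status=)

index=web sourcetype=access_combined | stats count BY host | table host, count

The first search matches the literal term average=0.9* in raw data, the second uses PREFIX to aggregate on the raw token that follows status= without requiring an indexed field, and the third applies table only after stats has already reduced the data on the indexers.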

Detect and resolve data imbalances

Run the following search to detect a data imbalance. Specify a time window of 15 minutes or less before running the search.

| tstats count WHERE index=_internal BY splunk_server

This counts the distribution of events across indexers. There are two important things you should look for in the results:

  • All your indexers should be listed. If an indexer is missing from the list, there is either a problem distributing the search to that peer (you’d see a warning message) or no events are being sent to that peer.
  • Events should be distributed evenly, or close to evenly, across the peers.
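
To make an imbalance easier to spot, you can extend the search to compute each indexer's share of the total event count. This is a minimal sketch of that idea:

| tstats count WHERE index=_internal BY splunk_server | eventstats sum(count) AS total | eval pct=round(100*count/total,1) | sort - pct

In a balanced deployment, the pct values should be roughly equal across all peers.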

If there is an imbalance of events across the peers, you should correct it as soon as possible. The Splunk universal forwarder (UF) provides a built-in load balancing mechanism that is enabled by default. However, it may require adjustments for some data sources. The UF is designed to stream data sources to indexers as quickly as possible. Due to its lightweight nature, the UF does not see event boundaries in your log files and data streams. To ensure that events aren’t chopped in half when switching between indexers, the UF waits until it has read to the end of a log file or until a data stream has gone quiet before streaming data to a new indexer. This can create issues if the UF is reading from a very large log file or a very chatty data stream. There are two choices for resolving situations where the UF becomes “sticky” to one or more indexers due to the conditions discussed above.

  • The first is a parameter called forceTimebasedAutoLB. This parameter is convenient because it is available in older versions of the Splunk platform and it applies to all sources/sourcetypes handled by a UF. When enabled, this setting causes the UF to switch indexers whenever autoLBFrequency or autoLBVolume is reached, even if it is still reading from a log or receiving a data stream. This can cause issues if any single event is larger than 64KB when reading a log file, or 8KB for an incoming TCP/UDP stream. If any event exceeds those sizes, you run the risk of the event being truncated or lost.
  • The second parameter is called event_breaker. This parameter is enabled on a per-source-type basis on each UF. The advantage it has over forceTimebasedAutoLB is that there are no event size limitations. However, this parameter requires you to manage additional configurations on each UF and is only available on forwarders running Splunk 6.5 or newer. Both options are sketched in the configuration example after this list.
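
The following is a minimal configuration sketch of both options, set on the forwarder. The output group name, server addresses, and source type stanza are assumptions for illustration.

To force time-based switching for all data (the first option), in outputs.conf:

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
autoLBFrequency = 30
forceTimebasedAutoLB = true

To enable event breaking for a specific source type (the second option), in props.conf:

[my_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

The EVENT_BREAKER regular expression here assumes newline-delimited events; adjust it to match the actual event boundary of your source type.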

Adjust the search mode

The Splunk Search and Reporting app has multiple modes that searches can be run under. These different search modes impact the resource utilization, search runtime, and data transferred to fulfill your search.

  • Fast mode. Searches run in fast mode return only fields that are explicitly mentioned in the search or are indexed fields. This mode has the lowest impact on system resources, but it also limits your ability to visually explore your data in the UI.
  • Smart mode. When running searches in smart mode, the Splunk platform attempts to decide which fields are necessary to fulfill your search. For example, if you use a transforming command like stats in smart mode, the Splunk platform only returns the summary data and not the raw event.
  • Verbose mode. Sometimes it is necessary to see all of the fields and raw data in your search, even if you’ve used a transforming command like timechart. For example, if you’re trying to troubleshoot your search to determine why a graph looks a particular way, you might want to see the raw events. Under smart mode, the raw events won’t be available under the Events tab. You have to switch to verbose mode to see the raw events and the summary data. This mode is extremely inefficient because it causes the indexers to send all events matching your search criteria to the search head. You should only enable verbose mode temporarily to troubleshoot searches.

Target your search to a narrow dataset

Reducing the scope of your search to a narrower set of results is often the quickest and easiest way to improve performance and reclaim system capacity.

  • Limit the timeframe of your search to 15 minutes or less.
  • Reduce the amount of data the Splunk platform needs to search through by specifying specific index names in your searches. Typically, you want to store like data that is commonly searched together in the same index.
    • For example, let’s say you have 5 different firewall vendors sending data to the Splunk platform. Even though the data format and source types are different, you probably write searches that target all firewall data at the same time. Keeping the source types in the same index prevents the Splunk platform from needing to look in different places to match search terms.
    • For another example, let's say your firewall data has 5 billion unique events and is stored in the ‘main’ index. You decide to add the error logs from your WordPress site, which amount to 100,000 unique events, to the same index. Any time you want to run a search for your WordPress logs, the Splunk platform has to sort through 5 billion firewall events to find the ones you care about. If you moved your WordPress logs to a different index, you could speed up searches by reducing the amount of data the Splunk platform has to sort through.
  • Add more unique terms to your search. When running a search, the Splunk platform consults the TSIDX to locate all events that contain the terms provided in your search. For example, consider the following search: index=firewall status=ERROR. The Splunk platform would consult the TSIDX files for the ‘firewall’ index and locate all events that contain the term ‘error’. It is highly likely that many events contain the term ‘error’ and the Splunk platform will need to sort through a lot of data to locate all of those events. You could speed up this search by always specifying terms that are unique to the events you want to target. This search would perform better: index=firewall status=ERROR type=cisco model=asa datacenter=newyork
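
Combining these techniques, a narrowly scoped search might look like the following sketch. The index and field values are carried over from the example above and are assumptions:

index=firewall status=ERROR type=cisco model=asa datacenter=newyork earliest=-15m

The earliest=-15m term applies the 15-minute time window inline, so the scope stays narrow even if the time range picker is set more broadly.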

Manage capacity

  • Use horizontal scaling in Splunk Enterprise or Splunk Cloud Platform to increase concurrency and data ingest rates. The Splunk platform decreases search runtime by dividing up the processing across multiple servers. By doing this, each server performs less overall work, which decreases individual search runtime and increases the number of searches that can be executed in the same span of time. For example, if a search takes 60 seconds to complete on a single server, you can divide that work across 6 servers and complete the same search in 10 seconds.
  • Change scheduler limits. A Splunk Cloud Platform administrator can define what percentage of the total search capacity the scheduler is allowed to consume with scheduled search jobs. By default, the scheduler is allowed to consume 50 percent of the total capacity. This ensures that there is reserved capacity for interactive users to create ad-hoc searches. If you have a high number of scheduled searches, you may choose to raise the scheduler limits.
  • Identify SVC utilization changes. The Splunk App for Chargeback can be used to monitor SVC consumption by business unit, department, or an individual user. An unexpected increase in SVC consumption could indicate adoption of inefficient searches or dashboards.
  • Peek at pipelines. Review the indexing performance dashboards to identify any issues or load in a particular pipeline.
  • Knowledge objects can impact the work required and system resources needed to fulfill a search. For example, if you’ve configured an automatic lookup and scoped it for all users globally, the Splunk platform needs to enrich all events in every search with that lookup whether the user needs the additional fields or not. You can avoid unnecessary resource consumption by only installing apps and technology add-ons (TAs) in production that are necessary. After installing, ensure that the app or TA is scoped so that it only targets the appropriate users and searches. See the Knowledge Manager manual for Splunk Enterprise or Splunk Cloud Platform.
  • Preload expensive datasets using loadjob. The loadjob command uses the results of a previous search. If you run a lengthy search in one browser tab and keep it open, the results remain available on the search head for some time. Eventually, the search job will time out, but while it is available, you can run other searches based on that initial data using the search job id (sid), as shown in the sketch after this list.
  • In Splunk Enterprise versions 7.2.x and higher, using the zstd compression algorithm in the indexes.conf file, rather than gzip, makes buckets smaller, thereby increasing search speed.
  • Finally, you can also update the tsidxWritingLevel to 3 in the indexes.conf file in Splunk Enterprise version 7.3.x and higher. Doing so takes advantage of newer tsidx file formats for metrics and log events that decrease storage cost and increase speed.
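
The compression and tsidx settings from the last two items map to an indexes.conf stanza like the following sketch, where the index name is an assumption:

[firewall]
journalCompression = zstd
tsidxWritingLevel = 3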
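
For the loadjob approach, the sid below is a placeholder; copy the real sid from the Job Inspector or the Jobs page. The status field is assumed to exist in the cached results:

| loadjob 1612345678.1234 | stats count BY status

Because loadjob reads the cached results of the finished job, this follow-on search returns without rescanning the indexes.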

What else?

  • Improve your source types. Review the data quality dashboards to identify and resolve data quality issues in Splunk Enterprise or Splunk Cloud Platform.
  • Use tokens to build high-performance dashboards. Searches saved in dashboards can use tokens to allow users to switch between commands. When the token is in a child search (in Splunk Enterprise or Splunk Cloud Platform), only the child search is updated as the token input changes. The base search, which can contain the index and other costly functionality, only needs to run once, which speeds up the search overall. A minimal sketch of this pattern follows this list.
  • Secure your Splunk Enterprise deployment. Review the safeguards for risky commands in the Splunk Enterprise Securing Splunk Enterprise Manual.
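
Here is a minimal Simple XML sketch of the base/child search pattern described above. The index, field names, and token values are assumptions for illustration:

<form>
  <fieldset>
    <input type="dropdown" token="span">
      <label>Span</label>
      <choice value="1m">1 minute</choice>
      <choice value="1h">1 hour</choice>
    </input>
  </fieldset>
  <search id="base">
    <query>index=firewall status=ERROR | fields _time, host</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base">
          <query>timechart span=$span$ count BY host</query>
        </search>
      </chart>
    </panel>
  </row>
</form>

The base search runs once; changing the span token reruns only the child timechart, not the full index scan.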

Next steps

These additional Splunk resources might help you understand and implement these recommendations: