Indexing and search architecture

 

Splunk Validated Architectures (SVAs) are proven reference architectures for stable, efficient, and repeatable Splunk deployments. SVAs are broken into three major content areas: 

  • Indexing and search architecture
  • Data collection architecture
  • Design principles and best practices

SVAs are designed to provide the best possible results while minimizing total cost of ownership. Additionally, the entire foundation of your Splunk deployment is based on a repeatable architecture that allows you to scale as your needs evolve over time.

Cloud architecture

In a Splunk Cloud Platform deployment, decisions regarding your indexing and search topologies are handled by a Splunk team that aligns with SVA best practices. The Splunk Cloud Platform team will build and operate your dedicated (single-tenant) AWS environment to meet Splunk's compliance requirements and service SLAs. This nearly eliminates customer effort on indexing and search components.

Your Splunk Cloud Platform deployment has one of two possible experience designations: Classic or Victoria. For information on the differing capabilities of each Splunk Cloud Platform Experience, see Differences between Classic Experience and Victoria Experience.


For more information about cloud architecture, see the Splunk Cloud documentation.

Enterprise architecture (on-premises or cloud)

Splunk Enterprise architectures allow customers to deploy and manage the Splunk platform entirely within their own environments, whether on-premises or in the cloud. SVA guidance can be critical for long-term reliability, scalability, and supportability.

The SVA outlines multiple architectural topologies for the indexing and search tiers, each suited to different business needs. It is important to consider the purpose of each topology as it pertains to ingest volume, search volume, high availability, and overall complexity.

| Deployment type | Site count | Search tier high availability | Indexing tier high availability | Ingest volume | Search volume | Maturity |
|---|---|---|---|---|---|---|
| Single Server | 1 | no | no | < 300 GB/day | limited | standard |
| Non-Clustered Indexers + Standalone Search Head | 1 | no | no | scalable | limited | standard |
| Non-Clustered Indexers + Search Head Cluster | 1 | yes | no | scalable | scalable | intermediate |
| Clustered Indexers + Standalone Search Head | 1 | no | yes | scalable | limited | intermediate |
| Clustered Indexers + Search Head Cluster | 1 | yes | yes | scalable | scalable | intermediate |
| Multi-Site Clustered Indexers + Multiple Standalone Search Heads | > 1 | yes (across sites; no artifact syncing) | yes | scalable | limited | advanced |
| Multi-Site Clustered Indexers + Multiple Search Head Clusters | > 1 | yes (within site; across sites without artifact syncing) | yes | scalable | scalable | advanced |
| Multi-Site Clustered Indexers + Single Search Head Cluster | > 1 | yes | yes | scalable | scalable | advanced |

Multi-clustered indexers and single search head cluster topology example


For more information on Splunk Enterprise architecture, see the Splunk Enterprise documentation.

Splunk indexer storage options

Splunk indexers can be configured with one of two storage options. Choosing a storage option requires understanding its implications: changing the architecture later is not straightforward and requires a migration process that involves both infrastructure changes and data movement.

The search, indexer, and storage architecture for Splunk Cloud Platform is designed and managed by Splunk.  

Classic indexer architecture using file system storage

In a standard installation, indexers store data across the entire data lifecycle on a server-accessible file system. This can be direct-attached storage (DAS) only or a combination of DAS and network-attached storage.


This architecture tightly couples indexer compute and storage, and is able to provide a consistent search performance profile across all data at rest with few external dependencies. In clustered indexer topologies, the Splunk platform maintains multiple copies of the data across the configured retention period. This requires a potentially significant amount of storage, especially when requirements for long-term data retention exist. This architecture is recommended when you have requirements for either:

  • Short-term data retention (<= 3 months)
  • Long-term retention and performance-critical search use cases that frequently access older historic data
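As an illustration, hot/warm and cold tiers in a classic deployment are typically defined as volumes in indexes.conf on the indexers. The following is a minimal sketch; the paths, sizes, index name, and retention period are assumptions, not recommended values.

# indexes.conf (illustrative only)
[volume:hot_warm]
# Fast local (DAS/SSD) storage for hot and warm buckets
path = /opt/splunk/fast_storage
maxVolumeDataSizeMB = 2000000

[volume:cold]
# Larger, slower storage (DAS or NAS) for cold buckets
path = /mnt/splunk_cold
maxVolumeDataSizeMB = 10000000

[firewall]
homePath = volume:hot_warm/firewall/db
coldPath = volume:cold/firewall/colddb
thawedPath = $SPLUNK_DB/firewall/thaweddb
# Roughly 90 days, matching a short-term retention requirement
frozenTimePeriodInSecs = 7776000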

SmartStore indexer architecture using object storage

Splunk SmartStore architecture was created primarily to decouple compute and storage on the indexing tier, enabling a more elastic indexing tier deployment. SmartStore uses a fast, SSD-based cache on each indexer node to keep recent data locally available for search. When data rolls to the warm lifecycle stage, it is uploaded to an S3 API-compliant object store for persistence, but remains in the local cache until it is evicted based on cache manager policy.
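The following is a minimal indexes.conf and server.conf sketch of a SmartStore configuration; the bucket name, endpoint, and cache size are assumptions and must match your own object store and indexer hardware.

# indexes.conf (illustrative only)
[volume:remote_store]
storageType = remote
path = s3://example-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[default]
# Store warm buckets for all indexes in the object store
remotePath = volume:remote_store/$_index_name

# server.conf (illustrative only)
[cachemanager]
# Size of the local SSD cache, in MB, that keeps recently searched data on the indexer
max_cache_size = 500000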


Search tier recommended best practices 

Keep the search tier close (in network terms) to the indexing tier

Any network delay between the search and indexing tiers has a direct impact on search performance.

Exploit search head clustering when scaling the search tier

A search head cluster replicates user artifacts across the cluster and allows intelligent scheduling of search workload across all members. It also provides high availability. One exception: premium apps such as Splunk Enterprise Security and Splunk ITSI may benefit from a dedicated search head or search head cluster.
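A minimal server.conf sketch for one search head cluster member follows; the URIs, label, and shared key are assumptions, and members are normally initialized with the splunk init shcluster-config command rather than by hand-editing this file.

# server.conf on each cluster member (illustrative only)
[shclustering]
disabled = 0
mgmt_uri = https://sh1.example.com:8089
replication_factor = 3
shcluster_label = shcluster1
pass4SymmKey = <shared_secret>
conf_deploy_fetch_url = https://deployer.example.com:8089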

Forward all search head internal logs to the indexing tier

All indexed data should be stored on the indexing tier only. This removes the need to provide high-performing storage on the search head tier and simplifies management. This also applies to any other Splunk components.
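A common way to implement this is an outputs.conf on each search head similar to the sketch below; the output group name and indexer hosts are assumptions.

# outputs.conf on each search head (illustrative only)
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997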

Consider using LDAP authentication whenever possible 

Having centrally managed user identities for authentication is a general enterprise best practice. This simplifies the management of your Splunk deployment and increases security.
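A minimal authentication.conf sketch is shown below; all directory details, attribute names, and group names are assumptions that must match your LDAP schema.

# authentication.conf (illustrative only)
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
bindDN = cn=splunk-svc,ou=service,dc=example,dc=com
bindDNpassword = <set_once_then_encrypted>
userBaseDN = ou=users,dc=example,dc=com
groupBaseDN = ou=groups,dc=example,dc=com
userNameAttribute = uid
realNameAttribute = cn
groupNameAttribute = cn
groupMemberAttribute = member

# Map LDAP groups to Splunk roles
[roleMap_corp_ldap]
admin = splunk_admins
user = splunk_users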

Ensure enough cores to cover concurrent search needs

Every search requires a CPU core to execute. If no cores are available to run a search, the search is queued, resulting in delays for the user. This also applies to the indexing tier, because indexers execute the distributed portion of each search.
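As a rough sizing reference, the default limit on concurrent historical searches is derived from CPU count by settings in limits.conf: max_searches_per_cpu x number_of_cpus + base_max_searches. With the default values shown below, a 16-core search head allows about 1 x 16 + 6 = 22 concurrent historical searches; in most cases, add cores rather than raising these values.

# limits.conf (Splunk default values shown for illustration)
[search]
max_searches_per_cpu = 1
base_max_searches = 6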

Utilize scheduled search time windows 

Scheduled searches often run at specific points in time (for example, on the hour; 5, 15, or 30 minutes after the hour; or at midnight). Providing a time window in which a search is allowed to run helps avoid search concurrency hotspots.
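For example, a schedule window can be set per search in savedsearches.conf; the search name and schedule below are assumptions.

# savedsearches.conf (illustrative only)
[Hourly error summary]
cron_schedule = 0 * * * *
# Let the scheduler start this search any time within 30 minutes of the top of the hour
schedule_window = 30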

Limit the number of distinct search heads/clusters on the same indexing tier

Search workload can be governed automatically only within a given search head environment. Independent search head clusters have the potential to create more concurrent search workload than the indexer (search peer) tier can handle. The same applies to standalone search heads, so plan their number carefully.

When building search head clusters, use an odd number of nodes (3, 5, 7, and so on)

Search head cluster captain election uses a majority-based protocol. An odd number of nodes ensures that a network failure can never split the cluster into two halves of equal size, so one side always retains a majority and can elect a captain.

Splunk Cloud Platform search tier best practices are implemented by Splunk.

Indexing tier recommended best practices

Enable parallel pipelines to take advantage of available resources 

Parallelization features let you exploit available system resources that would otherwise sit idle. Verify that I/O performance is adequate before enabling ingest parallelization features.
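A sketch of the corresponding setting in server.conf on each indexer follows; the value of 2 is an assumption and should only be used when CPU and I/O headroom exist.

# server.conf on each indexer (illustrative only)
[general]
parallelIngestionPipelines = 2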

Consider using SSDs for hot/warm volumes and summaries

SSDs remove I/O limitations that are often the cause of unsatisfactory search performance.

Keep the indexing tier close (in network terms) to the search tier

Keeping network latency as low as possible has a positive effect on user experience when searching.

Use index replication when historical data/report high availability (HA) is needed

Index replication ensures multiple copies of every event in the cluster to protect against search peer failure. Adjust the number of copies (the replication factor) to match your SLAs.
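As a sketch, the replication and search factors are set in server.conf on the cluster manager, and each peer points back to it; the hostnames, factors, and shared key are assumptions, and older Splunk Enterprise versions use "master" in place of "manager" in these setting names.

# server.conf on the cluster manager (illustrative only)
[clustering]
mode = manager
replication_factor = 3
search_factor = 2
pass4SymmKey = <shared_secret>

# server.conf on each indexer (cluster peer)
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <shared_secret>

[replication_port://9887]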

Ensure good data onboarding hygiene 

Ensure line breaking, timestamp extraction, timezone, source, source type, and host are all properly and explicitly defined for each data source. Explicitly configuring data sources instead of relying on auto-detection capabilities has been proven to have significant benefits to data ingest capacity and indexing latency, especially in high-volume deployments.
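A props.conf sketch for one hypothetical source type illustrates these explicit settings; the source type name and timestamp format are assumptions.

# props.conf (illustrative only)
[acme:app:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
TZ = UTC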

Consider configuring batch mode search parallelization setting on indexers with excess processing power

Exploiting search parallelization features can have a significant impact on search performance for certain types of searches, and allows you to utilize system resources that may otherwise be unused.
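The relevant setting lives in limits.conf on the indexers; the value of 2 below is an assumption and should only be raised on indexers with spare CPU capacity.

# limits.conf on each indexer (illustrative only)
[search]
batch_search_max_pipeline = 2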

Monitor for balanced data distribution across indexer nodes

Even event/data distribution across the index tier is a critical contributing factor for search performance and proper data retention policy enforcement.
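One quick way to check distribution is an SPL search such as the sketch below, run from the monitoring console or any search head; the 24-hour window and what counts as acceptable deviation are assumptions.

| tstats count where index=* earliest=-24h@h latest=@h by splunk_server
| eventstats avg(count) AS avg_events
| eval deviation_pct = round(((count - avg_events) / avg_events) * 100, 1)
| sort - deviation_pct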

Disable web UI on indexers in distributed/clustered deployments

There is no reasonable need to access the web UI directly on indexers.
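Disabling the web interface is a one-line change in web.conf on each indexer, typically distributed from the cluster manager or a deployment server:

# web.conf on each indexer
[settings]
startwebserver = 0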

Use Splunk pre-built Technology Add-Ons for well-known data sources

Rather than building your own configuration to onboard well-understood data sources, use Splunk-provided technology add-ons (TAs); they provide faster time to value and ensure an optimal implementation.

Monitor critical indexer metrics

The Splunk platform includes a monitoring console that reports key performance metrics on how your indexing tier is performing, including CPU and memory utilization as well as detailed metrics for internal Splunk components (processes, pipelines, queues, and searches).

Splunk Cloud Platform indexing tier best practices are implemented by Splunk.

Next steps

These resources might help you understand and implement this guidance: