Splunk Lantern

Data management overview


The data lifecycle is the center point of a Splunk implementation. Even before data reaches Splunk software, best practices can help you manage and structure it efficiently to optimize its searchability and value.

Best practices in the data functional area also help you design effective use cases that are tightly aligned to your data, so you can use Splunk software to answer the questions you know to ask and to surface answers to questions you didn't know you had.

Follow these best practices according to the standard, intermediate, or advanced goals you have set.



Intake: Processes to bring requests for new use cases or data sources to your team's attention, and to track and prioritize them among other requests.

Standard: Accept ad-hoc requests (for example, email, chat, or voice).

Intermediate: Establish a process and templates for requests that include the following:

  • Data source host names and IP addresses
  • Path
  • Location
  • Access information
  • Retention requirements
  • A brief description of what the data represents
  • Estimated data volume (see Data onboarding workflow)

Advanced: Establish cost chargeback estimates for the budget owner (see Showing the value of your Splunk deployment).


Definition: Guidelines to determine where to place line breaks and timestamps on incoming data, and to identify the intended use case or value of the data.

Standard: Establish a process for defining baseline data that uses learned source types and little or no source data optimization.

Intermediate: Establish a process for defining technical data that includes the following:

  • Defined source types (discover an existing add-on or create a new one)
  • Target index(es)
  • Data sensitivity searches, including personally identifiable information
  • End-user needs and outcomes
  • Field and value extractions
  • Generated knowledge objects, dashboards, and alerts (see Data onboarding workflow)
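The line-breaking, timestamp, and field-extraction decisions listed above are typically captured in a props.conf stanza for the source type. This is a minimal sketch, assuming a hypothetical `acme:app:log` source type whose events begin with an ISO-8601 timestamp; the attribute names are real Splunk settings, but every value here is illustrative:

```ini
# props.conf -- sketch for a hypothetical "acme:app:log" source type
[acme:app:log]
# Use explicit line breaking rather than line merging
SHOULD_LINEMERGE = false
# Break events at a newline followed by a date (YYYY-MM-DD)
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
# The timestamp is at the start of each event
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
# Guard against runaway events
TRUNCATE = 10000
# Simple inline field extraction for a key=value pair
EXTRACT-status = status=(?<status>\d{3})
```

Explicit settings like these avoid the indexing overhead of learned source types and make the onboarding decisions reviewable in version control.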

Advanced: Everything outlined in intermediate, plus establish a process for defining value-oriented data that includes the following:

  • Normalize fields with a common information model
  • Define tags
  • Develop a corporate information model
  • Consider license and storage impact
  • Identify business priority
  • Reuse data and knowledge objects for other use cases (see Data onboarding workflow)
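Normalizing fields and defining tags usually comes down to a pair of small configuration files that map your events onto a common information model. A hedged sketch, assuming a hypothetical `acme:app:log` source type whose login events are being aligned with CIM-style authentication tagging; the stanza and field names are illustrative:

```ini
# eventtypes.conf -- group the relevant events (names are hypothetical)
[acme_authentication]
search = sourcetype=acme:app:log action=login OR action=logout

# tags.conf -- tag the eventtype so data models can pick it up
[eventtype=acme_authentication]
authentication = enabled
```

Once the events are tagged, the same data can serve other use cases without re-onboarding, which is the reuse goal described above.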


Implementation: Processes to carry out the Splunk configuration and to engage the requester in implementation.

Standard: Deploy technical add-ons that support getting data in.

Create a deployment server class that includes the search, indexing, and forwarding/data collection tiers (see Define server class).
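A deployment server class like the one described above is defined in serverclass.conf on the deployment server. This is a minimal sketch with hypothetical class, host, and app names; `whitelist.0`, `restartSplunkd`, and `stateOnClient` are real settings, everything else is illustrative:

```ini
# serverclass.conf -- sketch; class, host pattern, and app are hypothetical
[serverClass:acme_web_servers]
whitelist.0 = web-*.example.com

# Map the app containing the inputs/props for this data source to the class
[serverClass:acme_web_servers:app:acme_ta_web]
restartSplunkd = true
stateOnClient = enabled
```

Keeping each data source's configuration in its own deployed app makes the retirement steps later in this lifecycle a matter of disabling one server class.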

Intermediate: Everything outlined in standard, plus the following:

Establish initial use case requirements 

Apply naming conventions to knowledge objects (see Naming conventions)

Utilize the Add-on Builder for custom components (see Add-on Builder)

Advanced: Everything outlined in intermediate, plus create a lab environment for developing system and test automation (see Setting up a lab environment).


Validation: Processes to verify that the use case meets the requester's expectations and needs, and to enable the requester to communicate feedback.

Standard: Validate reactively, where the requester validates the work after it is completed (see Data onboarding workflow).

Intermediate: Validate proactively, where the requester validates the work at regular intervals during development (see Data onboarding workflow).

Advanced: Validate demonstratively, where the requester validates the work in real time during a hands-on demonstration of the use case (knowledge objects, data, and so on) (see Data onboarding workflow).


Communication: Structures to inform the requester that the related work is completed and to enable others to learn about it.

Standard: Communicate to the requesting individuals that the work is completed (see Data onboarding workflow).

Intermediate: Everything outlined in standard, plus announce to the community that the work is completed (see Data onboarding workflow).

Advanced: Everything outlined in intermediate, plus the following:

Share final showback calculations with the requester (see Showing the value of your Splunk deployment)

Track and communicate the business value of use cases to executive stakeholders (see Engaging with your executive sponsor)


Retirement: Processes to maintain Splunk knowledge objects, and to remove them and their data sources when a use case is no longer needed.

Standard: Monitor the ongoing need for use cases individually.

Stop indexing, generating, or forwarding the data when it is no longer needed (see Showing the value of your Splunk deployment)
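Stopping collection is usually a matter of disabling the input on the forwarders. A minimal sketch, assuming a hypothetical monitored log file; the path, index, and source type names are illustrative:

```ini
# inputs.conf -- disable collection for data that is no longer needed
# (path, index, and sourcetype are hypothetical)
[monitor:///var/log/acme/app.log]
index = acme_app
sourcetype = acme:app:log
disabled = 1
```

Disabling rather than deleting the stanza preserves a record of what was collected, which helps if the use case is ever revived.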

Intermediate: Everything outlined in standard, plus the following:

Establish a process for regularly evaluating the need for use cases

Establish a process for users to request that a use case be retired (see Data onboarding workflow)

Establish a process for retiring use cases that includes disabling knowledge objects, purging unnecessary data, and disabling the associated server class(es) on the search, indexing, and forwarding/data collection tiers.
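Disabling the knowledge objects themselves can be done in the owning app's local configuration on the search tier. A sketch, assuming a hypothetical saved search created for the retired use case:

```ini
# savedsearches.conf (in the app's local directory)
# Stanza name is hypothetical; disabled is a real setting
[Acme - Failed Login Spike]
disabled = 1
```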

Advanced: Everything outlined in intermediate, plus remove the use case from the showback system (see Showing the value of your Splunk deployment).

Next steps

For more about data ingestion, see GDI - Getting data in and Collecting logs in Splunk.