
Data management overview

 

The data lifecycle is the centerpiece of a Splunk implementation. Before your data even reaches Splunk software, best practices can help you manage and structure it efficiently to optimize its searchability and value.

Best practices in the data functional area also help you design effective use cases that are tightly aligned to your data, so you can use the Splunk platform both to answer the questions you know to ask and to reveal answers to questions you didn't know you had.

Follow these best practices according to the foundational, standard, intermediate, or advanced goals you have set.

Each activity below lists best practices at four maturity levels: Foundational, Standard, Intermediate, and Advanced.

REQUEST DATA

Processes to bring requests for new use cases or data sources to your team's attention and to track and prioritize them among other requests.

Foundational:

Accept ad-hoc requests, for example by email, chat, or voice (see Data onboarding workflow)

Standard:

Establish a process and templates for requests that include the following (see Data onboarding workflow); a sample intake template follows this list:
  • Data source host names and IP addresses
  • Path
  • Location
  • Access information
  • Retention requirements
  • A brief description of what the data represents
  • Estimated data volume
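
A minimal intake template might look like the following. The field labels and values are illustrative placeholders, not a format the Splunk platform requires:

    Requester:           jdoe@example.com
    Data source hosts:   apphost01.example.com (10.0.1.25)
    Path:                /var/log/myapp/app.log
    Location:            us-east data center, production network
    Access information:  readable by the Splunk service account
    Retention:           90 days
    Description:         Application audit events for the payments service
    Estimated volume:    ~2 GB/day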

Intermediate:

Everything outlined in Standard

Establish cost chargeback estimates for the budget owner (see Showing the value of your Splunk deployment)
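
One common way to ground a chargeback estimate is to measure per-index license usage from the internal logs. This is a widely used community pattern rather than a search from the referenced article; the 30-day window is an assumption, and the index names returned depend on your environment:

    index=_internal source=*license_usage.log* type=Usage earliest=-30d@d latest=@d
    | eval GB = b / 1024 / 1024 / 1024
    | timechart span=1d sum(GB) AS daily_GB by idx

Multiply each index's average daily gigabytes by your per-gigabyte cost to produce the estimate for its budget owner.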

Advanced:

Same as Intermediate

DEFINE THE DATA

Guidelines for determining how incoming data should be line-broken and timestamped, and for identifying the intended use case or value of the data.

 

Foundational:

Learn about data onboarding, retention, and tagging best practices (see Enhancing data management and governance)

Standard:

Establish a process for defining technical data that includes the following (see Data onboarding workflow); a configuration sketch follows this list:

  • Defined source types
  • Target index(es)
  • Data sensitivity searches
  • Field and value extractions
  • Generated knowledge objects, dashboards, and alerts
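
As a sketch of what defined source types and target indexes look like in configuration: the source type and index are typically assigned in inputs.conf on the forwarder, and search-time field extractions live in props.conf. Every name, path, and regex below is a hypothetical example:

    # inputs.conf (forwarder): assign the source type and target index
    [monitor:///var/log/myapp/app.log]
    index = app_prod
    sourcetype = myapp:app

    # props.conf (search head): a search-time field extraction
    [myapp:app]
    EXTRACT-status = status=(?<status>\d{3})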

Apply the "Great 8" configurations to your data sources (see Improving data onboarding with props.conf configurations)
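
The "Great 8" are the props.conf settings that control line breaking, event breaking, and timestamp recognition. Below is a minimal sketch for a hypothetical source type; the regexes, time format, and lookahead must be adjusted to match your actual data:

    # props.conf: the "Great 8" for a hypothetical source type
    [myapp:app]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    EVENT_BREAKER_ENABLE = true
    EVENT_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 23
    TRUNCATE = 10000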

Intermediate:

Establish a process for defining baseline data that uses learned source types and little or no source data optimization (see Data onboarding workflow)

Advanced:

Everything outlined in Intermediate

Establish a process for defining value-oriented data

IMPLEMENT THE USE CASE

Processes to carry out the Splunk configuration and to engage the requester in implementation.

Foundational:

Deploy technical add-ons that support getting data in (see Data onboarding workflow)

Create a deployment server class that covers the search, index, and forwarding/data collection tiers (see Define server class)
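
In serverclass.conf on the deployment server, a server class maps a set of clients to the apps they should receive. The class name, host pattern, and add-on below are placeholders for illustration:

    # serverclass.conf: a hypothetical class for Linux forwarders
    [serverClass:myapp_forwarders]
    whitelist.0 = apphost*.example.com

    # deploy the *nix technical add-on to members of that class
    [serverClass:myapp_forwarders:app:Splunk_TA_nix]
    stateOnClient = enabled
    restartSplunkd = true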

Standard:

Everything outlined in Foundational

Establish initial use case requirements (see Data onboarding workflow)

Apply naming conventions to knowledge objects (see Naming conventions)
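
For example, a convention that encodes the owning group, use case, and object type into the name makes objects easy to find and audit. The scheme below is one illustrative possibility, not the specific convention in the referenced article:

    ops_payments_alert_error_rate_high     (an alert)
    ops_payments_dash_traffic_overview     (a dashboard)
    ops_payments_lookup_host_inventory     (a lookup)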

Intermediate:

Everything outlined in Standard

Create a lab environment for system development and test automation (see Setting up a lab environment)

Advanced:

Everything outlined in Intermediate

Use the Add-on Builder to build custom components (see Add-on Builder)

COMMUNICATE USE CASE CHANGES

Structures to inform the requester that the related work is completed and to enable others to learn about it.

Foundational:

Communicate to the requesting individuals that the work is completed (see Data onboarding workflow)

Standard:

Everything outlined in Foundational

Announce to the community that the work is completed (see Data onboarding workflow)

Intermediate:

Everything outlined in Standard

Share final showback calculations with the requester (see Showing the value of your Splunk deployment)

Advanced:

Everything outlined in Intermediate

Track and communicate the business value of use cases to executive stakeholders (see Engaging with your executive sponsor)

MAINTAIN AND RETIRE USE CASES

Processes to maintain Splunk knowledge objects and to remove them and their data sources when a use case is no longer needed.

Foundational:

Stop indexing, generating, or forwarding the data when it is no longer needed (see Showing the value of your Splunk deployment)

Learn to manage knowledge objects to maintain operational efficiency (see Cleaning up knowledge objects)

Learn to maintain smaller, pre-processed datasets that are quick and efficient to search (see Using summary indexing)
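
As a sketch of the summary indexing pattern: a scheduled search pre-aggregates raw events into a small summary index, and later searches read the summary instead of the raw data. The index, source type, and field names here are hypothetical, and the summary index must already exist. Scheduled, for example, hourly over the previous hour:

    index=app_prod sourcetype=myapp:app earliest=-1h@h latest=@h
    | stats count AS events by host, status
    | collect index=summary_myapp

Reporting searches then run against index=summary_myapp and touch far fewer events than the raw data would require.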

Standard:

Everything outlined in Foundational

Monitor the ongoing need for each use case individually (see Use Case Registry)

 

Intermediate:

Everything outlined in Standard

Establish a process for users to request that a use case be retired (see Data onboarding workflow)

Establish a process for regularly evaluating the need for use cases (see Use Case Registry)

 

Advanced:

Everything outlined in Intermediate

Establish a process for retiring use cases that includes disabling knowledge objects, purging unnecessary data, and disabling server class(es) on the search, index, and forwarding/data collection tiers (see Data onboarding workflow); a configuration sketch follows this list

Remove the use case from the showback system (see Showing the value of your Splunk deployment)
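
Disabling, rather than deleting, keeps retirement reversible. Knowledge objects can be disabled in their .conf stanzas (or in Splunk Web), and the deployment server class entry disabled so clients uninstall the app. The stanza, class, and app names below are hypothetical:

    # savedsearches.conf: disable the use case's scheduled alert
    [payments_error_rate_high]
    disabled = 1

    # serverclass.conf: stop deploying the add-on to forwarders
    [serverClass:myapp_forwarders:app:Splunk_TA_nix]
    stateOnClient = disabled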

Next steps

For more about data ingestion, see GDI - Getting data in and Collecting logs in Splunk.