

Splunk Lantern

Ingest Data


Use Case Explorer for Security

Ingesting data is a foundational step in your Splunk security implementation; done correctly, it allows you to get the most value across your entire Splunk environment.

You'll use Splunk Cloud Platform or Splunk Enterprise, along with Splunk technology add-ons and Splunk universal forwarders, to configure your data inputs and get them ready for use in Splunk Enterprise Security.

Data inputs for Splunk Cloud Platform

Splunk Cloud Platform provides tools to configure many kinds of data inputs, including those that are specific to particular application needs, as well as tools to configure arbitrary data input types.

Data inputs for Splunk Enterprise

Because Splunk Enterprise is on-premises, you can either get data into the instance directly or use universal or heavy forwarders to get data in. Inputs for both platforms can be categorized as:

  • Files and directories
  • Network events
  • Windows data
  • Other sources
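As a sketch of what the first two categories look like in practice, the following hypothetical inputs.conf stanzas declare a file monitor and a TCP network listener. The index names, port, and paths are illustrative assumptions, not values from this article.

```ini
# inputs.conf -- hypothetical example stanzas

# Files and directories: continuously monitor a Linux auth log
[monitor:///var/log/secure]
index = os_linux
sourcetype = linux_secure

# Network events: listen for syslog traffic on a TCP port
[tcp://9514]
index = network
sourcetype = syslog
```

On Splunk Enterprise these stanzas would typically live on a universal forwarder; on Splunk Cloud Platform, equivalent inputs are usually configured through the UI or pushed to forwarders via an app.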

Working with data in Splunk Enterprise Security

Splunk Enterprise Security works most effectively when you send all your security data into a Splunk deployment to be indexed. You should then use data models to map your data to common fields with the same name so that they can be used and identified properly.

You'll use the Splunk Common Information Model (CIM) to normalize your data to match a common standard. The CIM is a "shared semantic model focused on extracting value from data." For example, when you search for an IP address, different data sources may use different field names such as ipaddr, ip_addr, ip_address, or ip. The CIM normalizes different data sources to use the same field name for consistency across all sources. This normalization is especially important when ingesting data from multiple sources: if field names are not standardized (and timestamps are not kept consistent by a time synchronization mechanism), correlating events across those sources becomes unreliable.
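One common way to perform this kind of normalization is with field aliases in props.conf, which map a vendor's field names onto the CIM name without rewriting the raw events. The sourcetype and original field names below are hypothetical; src is the CIM field for a source address.

```ini
# props.conf -- hypothetical aliases mapping vendor field names to the CIM
[vendor:firewall:log]
FIELDALIAS-normalize_src = ipaddr AS src ip_addr AS src ip_address AS src
```

Once data is CIM-compliant, a single search can span every source. For example, this search counts traffic by source address across all data mapped to the CIM Network_Traffic data model, regardless of which product produced it:

```
| tstats summariesonly=true count from datamodel=Network_Traffic by All_Traffic.src
```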

The volume, type, and number of data sources influence the overall Splunk platform architecture, the number and placement of forwarders, estimated load, and impact on network resources. The Splunk platform can index any kind of IT streaming, machine, and historical data, such as Microsoft Windows event logs, web server logs, live application logs, network feeds, metrics, change monitoring, message queues, and archive files. Getting Data In (GDI) is the process that you'll follow to ingest machine data into Splunk.

When data is being ingested into your deployment, data enrichment ensures that you can perform effective threat detection, investigation, and response. Using enriched data makes dealing with security threats easier and more efficient.
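A typical enrichment pattern is to join events against a lookup of asset or identity data at search time. The sketch below assumes a hypothetical lookup table named asset_lookup with ip, owner, and priority columns; the index and sourcetype are also illustrative.

```
index=os_linux sourcetype=linux_secure
| lookup asset_lookup ip AS src OUTPUT owner priority
| where priority="critical"
```

With the asset owner and priority attached to each event, an analyst triaging an incident can immediately see whose system is involved and how urgent it is, instead of looking that context up by hand.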

Explore data ingestion focal areas and find your use cases

If you're at the Ingest Data stage of your journey, explore the following focal areas to find use cases you can apply.

  • Data availability & retention
    Develop centralized visibility for essential data, identify and onboard relevant data source types, and learn best practices for data retention.
  • Enrichment
    Centralize normalized asset and identity data for search and enrichment. Gather information automatically from external tools to enrich incident handling.
  • Normalization
    Learn how to normalize log data, making it ready for correlation and use in Splunk Enterprise Security.