Data center storage is provisioned in two general ways: built into servers and shared over various network storage protocols, or consolidated in a dedicated storage array that serves multiple applications through either a dedicated storage area network (SAN) or an Ethernet LAN file-sharing protocol. The activity of internal, server-based storage is typically recorded in system logs. Storage arrays, by contrast, have internal controllers or storage processors that run a storage-optimized OS and log extensive operating, error, and usage data. Because many organizations run several such arrays, these logs are often consolidated by a storage management system that reports on aggregate activity and capacity. In the Common Information Model (CIM), storage data typically maps to the Inventory data model and the Performance data model.
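Once a storage add-on has mapped its events to the CIM Performance data model, searches can work against data-model fields rather than vendor-specific raw events. The following is a minimal sketch, assuming the add-on populates the Performance model's Storage dataset; the exact dataset and field prefixes (shown here as `All_Performance.*`) vary by CIM version and add-on, so verify them with your data model's documentation before use.

```
| datamodel Performance Storage search
| stats avg(All_Performance.storage_used_percent) AS avg_pct_used
        BY All_Performance.dest
```

Searching through the data model this way means the same query works across arrays from different vendors, as long as each add-on performs the CIM mapping.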
Shared storage logs record overall system health, error conditions, and usage. Collectively, this information can alert operations teams to problems, capacity shortfalls, and performance bottlenecks. It is also used to understand access patterns to files and directories, which in turn provide insight into the performance of applications that depend on the storage.
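As an illustration of the capacity use case, a search like the following could flag volumes nearing their limits. This is a hypothetical sketch: the index name, sourcetype, and field names (`storage_used_percent`, `volume`) are assumptions that depend on which array and add-on produced the events.

```
index=storage sourcetype="array:perf"
| stats latest(storage_used_percent) AS pct_used BY host, volume
| where pct_used > 85
| sort - pct_used
```

A search of this shape is a natural candidate for a scheduled alert, so the operations team is notified before a volume fills completely.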
When your Splunk deployment is ingesting storage data, you can apply it to IT Ops use cases such as capacity planning, performance monitoring, and troubleshooting.
Guidance for onboarding data can be found in the Splunk Documentation: Getting Data In (Splunk Enterprise) or Getting Data In (Splunk Cloud). In addition, these Splunk Add-Ons and Apps are helpful for working with storage data.
Looking for more information on data types? Download the Splunk Essential Guide to Machine Data.