*nix operating system logs are a data source that reports on state changes in UNIX and Linux variant operating systems, including changes to applications, service state, and hardware events. Data collected from these elements is written to plain text log files hosted within the operating system. Operations and development teams use these events to troubleshoot and mitigate errors. Security and audit events are written to the same place, but because they serve different use cases, they are covered in the *nix security logs data source article. In the Common Information Model, *nix operating system logs can be mapped to any of the following data models, depending on the field: Endpoint, Inventory, Updates, Change, Performance, Network Sessions.
Data visibility
The *nix operating system logs contain important events relating to applications, system services, and the operating system. The events describe errors, warnings, and other information about activity taking place on each system. This information is used to monitor and troubleshoot each system.
Data application
When your Splunk deployment is ingesting *nix operating system logs, you can use the data to support objectives such as monitoring and troubleshooting each system.
Configuration
The following sections provide information on configuring Splunk software to ingest this data source. To configure the operating system itself, we recommend consulting official Unix or Linux documentation.
Data ingestion
If your deployment is not already ingesting *nix operating system logs, follow the Getting Data In guidance for Splunk Enterprise or the Onboarding and Forwarding Your Data guidance for Splunk Cloud.
The supported input types are monitored log files, syslog, and scripted inputs.
In addition, you need the Splunk Add-on for Unix and Linux. The add-on can be downloaded from Splunkbase, and its documentation is available on the Splunk documentation site. Read and follow the documentation carefully to understand all the essential information you need to work with this data source, including how to install the add-on, configure Unix or Linux, and configure Splunk.
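As a minimal sketch, a monitor input in inputs.conf on a forwarder might look like the following. The index name os_nix is an assumption; substitute the index you created for this data, and prefer the inputs that ship with the add-on where they cover your needs.
# Monitor the main syslog file; "os_nix" is a placeholder index name
[monitor:///var/log/messages]
sourcetype = syslog
index = os_nix
disabled = 0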
Sizing estimate
The best way to estimate sizing is to send the data to Splunk and use the monitoring console to get ingest sizing by index or sourcetype. Data ingest will vary widely, but an estimated baseline is 250MB per day per item.
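As an alternative to the monitoring console, a search against the license usage log can summarize daily ingest volume. This is a sketch; the st=syslog* filter is an assumption and should be adjusted to match the sourcetypes you actually ingest:
index=_internal source=*license_usage.log type=Usage st=syslog*
| timechart span=1d sum(b) AS bytes
| eval MB=round(bytes/1024/1024,2)
| fields - bytes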
Validation
The first step in validating the logs is to run a search and confirm that the index is receiving data in the proper time frame and that the source types and sources are as expected. Further validation is done by inspecting the events and making sure the expected fields are present.
A search similar to the following is a good starting point:
index=* earliest=-15m@m
| stats count by sourcetype, source, index
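To check that the expected fields are being extracted, a fieldsummary search scoped to the relevant index can help. The index name os_nix is a placeholder for whichever index you use:
index=os_nix earliest=-15m@m
| fieldsummary
| table field count distinct_count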