*nix security logs are a data source that records login attempts (successful and failed), privilege elevation, and other security events as defined by the system’s audit policy. Security data is collected and written to plain-text log files on the operating system. These logs are one of the primary tools security analysts use to detect and investigate unauthorized activity and to troubleshoot access problems. In the Common Information Model, *nix security logs can be mapped to any of the following data models, depending on the field: Endpoint, Network Sessions, Inventory, Updates, Change, Performance.
Data visibility
The *nix security logs contain important events relating to applications, system services, and the operating system. The events describe errors, warnings, or details about activity taking place on each system. This information is used to monitor and troubleshoot each system.
Data application
When your Splunk deployment is ingesting *nix security logs, you can use the data to achieve the following objectives:
- Securing a work-from-home organization
- Investigating a ransomware attack
- Managing *nix system user account behavior
Configuration
The following sections provide information on configuring Splunk software to ingest this data source. To configure the operating system itself, we recommend that you rely on official Unix and Linux documentation.
Data ingestion
If your deployment is not already ingesting *nix security data, follow the Getting Data In guidance for Splunk Enterprise or the Onboarding and Forwarding Your Data guidance for Splunk Cloud.
This data source includes many source type identifiers; the full list is available in the Splunk Add-on for Unix and Linux documentation.
The supported input types are monitored OS log files, syslog, and scripted inputs.
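For example, if you monitor the default security log files directly with a universal forwarder, an inputs.conf similar to the following might be used. This is a minimal sketch: the paths cover common distributions (/var/log/secure on Red Hat-based systems, /var/log/auth.log on Debian-based systems), and the index name foo is illustrative.

# $SPLUNK_HOME/etc/system/local/inputs.conf (or the local directory of an app)
# Monitor the *nix security logs and route them to an illustrative index named foo
[monitor:///var/log/secure]
sourcetype = linux_secure
index = foo
disabled = 0

[monitor:///var/log/auth.log]
sourcetype = linux_secure
index = foo
disabled = 0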
In addition, you will need the Splunk Add-on for Unix and Linux. The add-on can be downloaded here and the add-on documentation can be accessed here. Read and follow the documentation carefully to understand all the essential information you need to work with this data source, including how to install the add-on, configure Unix and Linux, and configure Splunk.
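The add-on ships with its inputs disabled by default. As an illustrative sketch (verify the stanza names against the default/inputs.conf of the version you install), you might enable the add-on's /var/log file monitor by overriding it in the add-on's local directory; the index name foo is again illustrative:

# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
# Enable the add-on's default /var/log monitor input
[monitor:///var/log]
disabled = 0
index = foo

Only the settings you override need to appear in the local file; everything else is inherited from the add-on's defaults.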
Sizing estimate
The best way to estimate sizing is to send the data to Splunk and use the monitoring console to get ingest sizing by index or source type. Data ingest will vary widely, but an estimated baseline is 250 MB per day for each monitored system.
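Once data is flowing, you can also get a concrete figure from the license usage log rather than relying on the baseline. The following search is a minimal sketch that reports yesterday's ingest in megabytes per source type for a single index; the index name foo is illustrative:

index=_internal source=*license_usage.log* type=Usage idx=foo earliest=-1d@d latest=@d
| stats sum(b) AS bytes BY st
| eval MB = round(bytes/1024/1024, 2)
| fields st MB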
Validation
The first step in validating the logs is to run a search and confirm that the index is receiving data in the expected time frame and that the source types and sources are as expected. Further validation is done by inspecting the events and confirming that the needed fields are present.
A search similar to the following is a good starting point. You can limit the search to the index you configured by replacing index=* with the name of your index, for example, index=foo.
| tstats values(sourcetype) WHERE index=* BY index
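Once events are confirmed, a quick way to check field coverage is the fieldsummary command. The following sketch, again assuming an illustrative index named foo and the linux_secure source type provided by the Splunk Add-on for Unix and Linux, lists the fields extracted over the last 24 hours so you can confirm that expected fields such as user, src, and action are present:

index=foo sourcetype=linux_secure earliest=-24h
| fieldsummary
| fields field count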