SIEM replacement - LogRhythm considerations
This article is designed to augment the Conducting a SIEM use case development workshop guidance. It outlines the technical components that need to be taken into consideration when replacing LogRhythm with Splunk Enterprise Security. If you need hands-on assistance with this process, contact Splunk Professional Services experts.
Sizing and licensing
LogRhythm is sized and licensed based on several factors in an organization's environment, including the volume of data ingested, the number of devices or endpoints monitored, and the specific features or LogRhythm modules required. Sizing and licensing in the Splunk platform are significantly different. Both the Splunk platform and LogRhythm consider data volume and performance needs when sizing deployments, but the Splunk platform focuses more on daily data volume (GB/day) while LogRhythm often emphasizes events per second (EPS). Sizing in the Splunk platform is also heavily influenced by the need for a scalable and distributed architecture, while sizing in LogRhythm focuses on the number of log sources.
When planning the replacement, use the following formula to determine the minimum bandwidth required for each TCP connection:
Average event size (bytes) * Events per second / Number of data sources * 8 / 1024 / 1024 = Mbits per second per TCP connection
For example, to achieve 40,000 events per second with an average event size of 410 bytes across 27 data sources, each TCP connection requires approximately 4.6 Mbits per second:
410 * 40,000 / 27 * 8 / 1024 / 1024 = 4.6 Mbits per second
The Splunk platform uses a data volume-based licensing model (GB/day), with additional licenses for specific features or apps. LogRhythm offers both volume-based (GB/day) and EPS-based licensing, with modular options for different SIEM components. Both platforms offer subscription and perpetual licensing, with enterprise agreements available for large-scale deployments.
SIEM components
The following sections provide guidance on each of the key technical components that you need to address when replacing LogRhythm with Splunk Enterprise Security. This article assumes that your workshop participants include technical SMEs who understand the functionality of your current SIEM and can use this guidance about the Splunk platform and Splunk Enterprise Security (ES) to scope the necessary changes accurately.
SIEM databases
The Splunk platform uses a non-relational data store, and its architecture provides a more flexible and scalable solution with proprietary indexing and distributed search capabilities. The Splunk platform is highly scalable and can handle massive amounts of data without the careful planning and schema design that a SQL database requires. Its distributed search capabilities allow for scalability across multiple servers or clusters. As you work through the SIEM Use Case Development Workshop, be sure to identify all applicable data sources, even if they are in external locations that are not currently being sent to LogRhythm databases.
Platform manager
In the Splunk platform, the components that are most comparable to the LogRhythm SIEM Platform Manager are the search head and deployment server. The search head, more specifically the Splunk Enterprise console and the ES application, serves a similar role to the LogRhythm Platform Manager in terms of providing a centralized interface for managing the SIEM environment, user roles, and access controls, and handling correlation searches, reports, and dashboards. The deployment server in Splunk Enterprise complements this by managing the distribution of apps and configurations across a distributed environment, analogous to some of the centralized control functions of the LogRhythm Platform Manager.
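For example, app distribution from a deployment server is driven by serverclass.conf. The following is a minimal sketch that assumes a hypothetical server class pushing the Splunk Add-on for Unix and Linux to forwarders whose host names begin with linux-; the class name and whitelist pattern are illustrative:
[serverClass:linux_hosts]
# Match forwarders by host name pattern (illustrative)
whitelist.0 = linux-*
[serverClass:linux_hosts:app:Splunk_TA_nix]
# Restart the forwarder after the add-on is deployed
restartSplunkd = true
stateOnClient = enabled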
Data processor
Typically, the Splunk platform does not store normalized data, as it performs most data normalization at search time. As you discuss Splunk architecture within the context of the SIEM Use Case Development Workshop, ensure you make sizing and architecture recommendations appropriate for the expected data volume, as well as data redundancy and clustering requirements.
The Splunk Common Information Model (CIM) applies a common taxonomy to log data, making it possible to standardize event classification across various data sources, similar to the machine data intelligence fabric in LogRhythm. As you plan for data processing and onboarding within the context of the Splunk platform and Splunk Enterprise Security, ensure you document the technology add-ons (TAs) that must be installed on the Enterprise Security search head to achieve CIM compliance.
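As an illustration, once the appropriate add-ons map a data source to the CIM Authentication data model, a single search returns normalized results regardless of the underlying vendor format. This is a sketch that assumes the Authentication data model is accelerated:
| tstats summariesonly=true count from datamodel=Authentication
    where Authentication.action="failure"
    by Authentication.src, Authentication.user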
Data indexer
The LogRhythm Data Indexer is equivalent to Splunk indexers, but typically the Splunk platform does not store normalized data, as it performs most data normalization at search time. As you discuss Splunk architecture within the context of the SIEM Use Case Development Workshop, ensure you make sizing and architecture recommendations appropriate for the expected data volume, as well as data redundancy and clustering requirements. Splunk indexer clusters can provide redundancy and load balancing, similar to using multiple data indexers in LogRhythm.
Web console
Splunk Enterprise Security (ES) is equivalent to the LogRhythm web console, the user interface that security analysts use to interact with the system. It provides access to dashboards, reports, and real-time views to monitor and investigate security alarms. Be sure you understand your standard operating procedures for incident response and triage, and determine what, if any, customizations might be needed in the ES application. These might take the form of custom workflow actions, investigation workbench customization, incident review settings, and more. Document anything that might need to be customized or extended within ES.
System monitor agent
In the Splunk platform, the component comparable to LogRhythm’s System Monitor Agent is the Splunk universal forwarder. Be sure you discuss both traditional log events and network/flow data. They are collected separately in a LogRhythm implementation, so it's important to understand how the Splunk platform captures each type of data to ensure nothing is missed. You might need to collect some data via a universal forwarder or syslog, other data via Splunk Stream or the HTTP Event Collector (HEC), or some other method. This should all be captured and documented in the architecture plan.
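For example, a universal forwarder collects a traditional log file through an inputs.conf monitor stanza such as the following sketch (the file path, index, and sourcetype values are illustrative), while network or flow data is typically captured separately through Splunk Stream or HEC:
[monitor:///var/log/secure]
# Send authentication events to an illustrative index and sourcetype
index = os_linux
sourcetype = linux_secure
disabled = false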
AI engine
In Splunk Enterprise Security (ES), the component comparable to the LogRhythm SIEM AI Engine is primarily found in the combination of the ES Correlation Search capabilities and the integration of the Splunk Machine Learning Toolkit (MLTK). These features together provide advanced analytics and threat detection mechanisms similar to those offered by LogRhythm's AI Engine. Additionally, the Splunk Adaptive Response Framework performs automated actions based on the outputs of correlation searches and other detection mechanisms. This can include blocking a threat, isolating a system, or enriching data with additional context, similar to some of the automated capabilities of the LogRhythm AI Engine.
A “lift and shift” approach of migrating LogRhythm detections directly to ES correlation searches is unlikely to be successful, because the detection logic differs and the current alerts might not produce the fidelity your goals require. As you perform the use case planning portion of the SIEM Use Case Development Workshop, ensure you capture all business requirements and security monitoring objectives, and take a requirements-based approach to building ES use cases.
Throughout this phase, it is also important to draw a distinction between two types of correlation searches: risk rules (narrowly defined detections that capture single indicator events and write risk to the risk index) and risk notables (correlations across the risk index that, when triggered, represent a potential security incident). These two search types are among the key components of ES that make up the risk-based alerting (RBA) concept.
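To make the distinction concrete, a risk notable is typically a correlation search over the Risk data model that aggregates the scores written by individual risk rules and alerts only when the combined behavior crosses a threshold. The following is a simplified sketch; the score threshold and rule count are illustrative, not recommendations:
| tstats summariesonly=true sum(All_Risk.calculated_risk_score) as risk_score
    dc(All_Risk.search_name) as rule_count
    values(All_Risk.search_name) as contributing_rules
    from datamodel=Risk.All_Risk
    by All_Risk.risk_object, All_Risk.risk_object_type
| where risk_score > 100 AND rule_count >= 3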
Assets and identities
Both Splunk Enterprise Security and LogRhythm integrate with asset and identity data to enhance security monitoring and provide contextual analysis of security events. As you work through the SIEM Use Case Development Workshop, ensure you document all sources of relevant asset and identity data. Start with the main source of truth, and add any additional enrichment sources (vulnerability data, configuration management database (CMDB) data, and so on) that might be relevant and provide contextual value. These can be configured directly in Splunk Enterprise Security and are merged into a single KV Store collection for assets and another for identities.
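For example, an asset enrichment source is usually supplied as a lookup whose columns follow the Enterprise Security asset field names; a minimal sketch with illustrative values:
ip,nt_host,dns,owner,priority,category,bunit
10.1.2.3,WEB01,web01.example.com,it-ops,high,web_server,corporate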
Alarm and response manager (ARM)
The LogRhythm Alarm and Response Manager (ARM) is comparable to a combination of features within Splunk Enterprise Security (ES), particularly the Notable Event framework and the Adaptive Response framework. Be sure you understand your standard operating procedures for incident response and triage, and determine what, if any, customizations might be needed in Splunk Enterprise Security. These might take the form of custom workflow actions, investigation workbench customization, incident review settings, and more. Document anything that might need to be customized or extended within ES.
SmartResponse
LogRhythm's SmartResponse is a feature within the SIEM platform designed to automate responses to security threats and alarms. This is similar to Splunk SOAR and the Adaptive Response framework in ES, which allow automated actions based on detected security events. Again, it is important to fully understand your standard operating procedures for incident response and triage, and determine what, if any, customizations might be needed in Splunk Enterprise Security. These might take the form of custom workflow actions, investigation workbench customization, incident review settings, and the configuration of adaptive response actions. Document anything that might need to be customized or extended within ES.
Architecture: Clustering and data redundancy
Both LogRhythm and the Splunk platform provide clustering and data redundancy capabilities, designed to ensure system reliability, data integrity, and continuous operation. The Splunk platform utilizes a distributed architecture in which data indexing and searching can be scaled across multiple indexer and search head nodes. Additionally, it supports indexer clustering for data redundancy and search head clustering for high availability and load balancing. It might be necessary to consider the following in architecture discussions.
Load balancing
The Splunk platform can distribute incoming data across multiple indexer nodes to balance the load. Use Splunk universal forwarders to distribute data to multiple indexers, configuring them to round-robin data among the available indexers or to use more sophisticated load balancing strategies based on data volume or event count.
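A minimal outputs.conf sketch on a forwarder that automatically load balances across two indexers follows; the host names and frequency value are illustrative:
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
# Switch between the listed indexers every 30 seconds
server = idx1.example.com:9997, idx2.example.com:9997
autoLBFrequency = 30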
Data replication
The Splunk platform supports index replication through indexer clustering, where each piece of data is replicated across multiple indexers to prevent data loss. Search head clustering provides fault tolerance for search management.
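As a sketch, index replication is governed by the cluster manager's server.conf; the factors shown below are common starting points, not recommendations for your environment, and older Splunk versions use mode = master in place of mode = manager:
[clustering]
mode = manager
# Keep three copies of each bucket, two of them searchable
replication_factor = 3
search_factor = 2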
Other possible components
The following components are additional considerations that might be applicable and that align with the Splunk unified security roadmap.
LogRhythm NetMon
LogRhythm NetMon functionality falls under the Splunk Enterprise platform and Splunk Stream. If this tool and capability are applicable to you, document how you use the tool, because that usage will shape requirements for Enterprise Security, Mission Control, and SOAR functionality.
LogRhythm UEBA
If you use LogRhythm’s user and entity behavioral detection capabilities, be sure to discuss these within the context of the SIEM Use Case Development Workshop. It is important to identify and document your business requirements for behavioral detections and determine how to meet those requirements within Splunk Enterprise Security. It is possible that Splunk User Behavior Analytics (UBA) will be required as part of your Splunk architecture, but many of these requirements can be addressed using the Splunk Machine Learning Toolkit and standard deviation methodology within Splunk ES.
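For example, a simple standard deviation approach in SPL flags users whose daily authentication volume deviates sharply from their own baseline. This is a sketch; the index, sourcetype, and threshold are placeholders to adapt to your environment:
index=security sourcetype=authentication action=success
| bin _time span=1d
| stats count as daily_logins by _time, user
| eventstats avg(daily_logins) as avg_logins, stdev(daily_logins) as stdev_logins by user
| where daily_logins > avg_logins + (3 * stdev_logins)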
Threat Intelligence Service
The LogRhythm Threat Intelligence Service (TIS) is similar to the built-in Threat Intelligence framework in Splunk ES, which ingests, normalizes, and applies threat data from various external sources. These sources can include commercial feeds, open-source feeds, and custom threat intelligence sources.
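For reference, threat matches produced by the ES Threat Intelligence framework can be reviewed through the Threat Intelligence data model; the following is a sketch that assumes the framework is populated with at least one feed:
| tstats summariesonly=true count from datamodel=Threat_Intelligence.Threat_Activity
    by Threat_Activity.threat_match_field, Threat_Activity.threat_match_value, Threat_Activity.threat_collection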
Data migration planning
Migrating from LogRhythm SIEM to Splunk Enterprise Security involves careful planning, understanding of the differences between the two systems, and careful execution to ensure that security monitoring and response capabilities are maintained throughout the transition.
- Assessment and planning
  - Fully understand the scope and migration requirements, including the features utilized, data types and volumes, and any timelines or constraints
  - Identify the applicable data sources to be migrated and determine how they map to the Splunk platform
- Data extraction
  - Use available APIs, DB connectors, or export utilities to extract data, depending on the data type to be extracted
  - It is possible that custom-developed or third-party tools will be required to extract certain data from LogRhythm in a format compatible with Splunk ES
- Data transformation
  - Convert extracted data into a format suitable for ingestion into the Splunk platform (log files, CSV, or other supported input methods such as HEC or UF)
  - Consider any data transformation requirements, such as timestamping or line breaking (see the props.conf sketch after this list)
  - Translate LogRhythm rules, alerts, and dashboards into the Splunk Search Processing Language (SPL) and reporting framework
- Data ingestion
  - Ingest data into the Splunk platform using the appropriate ingestion methods (UF, HEC, DBX, and so on)
  - Apply full getting-data-in processes to all ingested data
  - Install and configure all appropriate search-time and index-time field extraction configurations (preferably with Splunk-supported TAs)
- Verification and validation
  - Verify that data appears as expected in the Splunk platform and retains the expected integrity and structure
  - Perform any required troubleshooting or adjustments if issues arise during the migration process
  - Engage end users to test the functionality and usability of Splunk ES to ensure it meets their needs. Collect feedback and make necessary adjustments
- Migration and cutover
  - Plan for any possible incremental migration and cutover. For example, you might want to run a dual feed until your planned cutover date to Splunk ES
  - Ensure this plan has sign-off from all project stakeholders and technical leads
- Post-migration activities
  - Optimize searches, dashboards, and alerts based on the actual data and usage patterns in Splunk ES
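As referenced in the data transformation step above, timestamp recognition and line breaking are typically controlled per sourcetype in props.conf. The following is a sketch for a hypothetical exported source; the sourcetype name, time format, and limits are illustrative:
[logrhythm:export]
# Treat each line as one event
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Locate and parse the event timestamp (illustrative format)
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
TRUNCATE = 10000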
Additional resources
The following resources might help you plan your Splunk platform architecture for a SIEM replacement: