Corporate computers that sit idle at night can be put to work performing units of work on behalf of central servers. For example, each machine can act as a Java Message Service (JMS) client that activates during the company's off hours and receives a message encapsulating the data for some calculation. In this scenario, each client receives an object from a queue, calls an interface method that performs the calculation and encapsulates the results in the same object, and then places that object into a reply queue. The application server then receives the message via message-driven beans (MDBs) and stores the results in a backend store, such as a database.
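This work-queue/reply-queue pattern can be sketched in a few lines of Python, with in-process queues standing in for the JMS destinations. The task structure and the calculation itself are hypothetical placeholders, not part of any real JMS API:

```python
import queue

# In-process queues standing in for the JMS work queue and reply queue.
work_queue = queue.Queue()
reply_queue = queue.Queue()

def calculate(task):
    """Hypothetical unit of work: sum the numbers in the payload and
    store the result back in the same task object."""
    task["result"] = sum(task["payload"])
    return task

def client_worker():
    """One idle-machine client: drain tasks, compute, post each reply."""
    while True:
        try:
            task = work_queue.get_nowait()
        except queue.Empty:
            break
        reply_queue.put(calculate(task))

# The server enqueues units of work; a client drains them and replies.
work_queue.put({"id": 1, "payload": [1, 2, 3]})
work_queue.put({"id": 2, "payload": [4, 5]})
client_worker()
```

In a real deployment the two `queue.Queue` objects would be JMS queues reached through a broker connection, and many clients would run this loop concurrently.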
If you don't use JMS, the same concepts can be applied to other queuing systems, such as Kafka, ZeroMQ, and RabbitMQ.
Regardless of the system, extracting time series data and transforming it for loading into a database can be time-consuming. After the data is in a database, you might have to go through additional steps before you can work with it.
You can save time by streaming text-based time series data directly into the Splunk platform without any need for an extract, transform, and load (ETL) process. Further, with the Splunk platform as the receiver for the output queue, you are ready to conduct further analytics on the data using SPL.
There are two ways to integrate the results into the Splunk platform:
- Have the message-driven beans write their results to a rotated file that is monitored by the Splunk platform (or, preferably, universal forwarders).
- Have the JMS clients receive the results from the reply queue and send them to standard output to be picked up by the Splunk platform (or, again, preferably universal forwarders).
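The second option can be sketched as a client that emits each result as a timestamped key=value line on standard output, a format the Splunk platform extracts fields from automatically. The field names here are hypothetical examples:

```python
import sys
import time

def emit_result(result):
    """Write one result as a timestamped key=value event line, a format
    the Splunk platform can parse without custom field extractions."""
    fields = " ".join(f"{k}={v}" for k, v in result.items())
    sys.stdout.write(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {fields}\n")

# A reply pulled from the reply queue would be emitted like this:
emit_result({"task_id": 42, "result": 6})
```

A universal forwarder configured with a scripted input would capture these lines and forward them to the indexers.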
After you integrate the results into the Splunk platform, you can conduct further analysis on the data. For example, if your goal is to perform arbitrary matrix multiplication (such as that used in weather forecasting applications), you could use the multikv command to treat each column in a matrix as a multivalue field and then:
- simply view the values.
- calculate key statistics on the columns.
- apply a visualization to show key statistics, such as the average, for each column.
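As an illustration of the matrix example, a worker could compute a product and render it as a header row plus aligned data rows, the tabular layout that multikv splits into per-column fields. The function and column names below are hypothetical:

```python
def matmul(a, b):
    """Plain matrix multiplication: one worker's unit of calculation."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def to_table(matrix, names):
    """Render the result as a header row plus data rows, so that
    multikv-style parsing treats each column as its own field."""
    lines = ["  ".join(names)]
    for row in matrix:
        lines.append("  ".join(str(v) for v in row))
    return "\n".join(lines)

result = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(to_table(result, ["c1", "c2"]))
```

Once events in this shape are indexed, a search such as `... | multikv | stats avg(c1) avg(c2)` would compute per-column statistics.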
Exactly what analysis you conduct depends on your industry and needs. Additional common applications for this method of receiving time series data include:
- executing banking applications
- performing scientific analysis
- performing linear optimization (such as that used in industrial planning)
- building mathematical models
Bringing queued time series data directly into the Splunk platform is also beneficial because it allows you to turn the data into reports without needing to write code.
These additional Splunk resources might help you understand and implement this product tip:
- Splunk Add-On: JMS Messaging Modular Input
- Use Case: Using Kafka to monitor at scale
- Splunk Docs: About reports (Splunk Enterprise or Splunk Cloud Platform)
- Splunk Docs: Dashboard Studio - Add and format visualizations (Splunk Enterprise or Splunk Cloud Platform)
- Splunk Docs: Calculating statistics (Splunk Enterprise or Splunk Cloud Platform)
- Splunk Docs: multikv (Splunk Enterprise or Splunk Cloud Platform)
Splunk OnDemand Services: Use these credit-based services for direct access to Splunk technical consultants who deliver a variety of technical services from a predefined catalog. Most customers have OnDemand Services per their license support plan. Engage the ODS team at OnDemand-Inquires@splunk.