Splunk Lantern

Querying payment rail logs in natural language using Splunk DSDL's LLM-powered chat UI

Financial institutions generate massive log volumes from core banking systems, ATM networks, and wire transfer applications. Payment rails (for example, Zelle, FedNow, ACH, and wire) are essential data sources for daily transactions. Investigating system health, analyzing transaction success rates, and catching failures quickly typically requires specialized SPL knowledge, which limits the ability of junior analysts to obtain rapid insights.

The Splunk App for Data Science and Deep Learning (DSDL) with an LLM-powered chat UI changes this: analysts can now query data in natural language and receive actionable results instantly. This solution allows analysts to "talk" to the data (for example, "Show me all Zelle transfers that have failed in the last 30 days below the average baseline"), accelerating incident response and democratizing data access.

Prerequisites

  • Splunk Enterprise or Splunk Cloud Platform
  • Splunk App for Data Science and Deep Learning (DSDL) installed and configured
  • A configured compute environment (Docker or Kubernetes) linked to DSDL
  • Access to an LLM API (such as OpenAI, Anthropic, or a local model)

How to use Splunk software for this use case

Starting with DSDL 5.2.1, users can query their logs conversationally through a built-in chat interface. You can pair this workflow with the LLM of your choice, whether an on-premises or a SaaS model.

Step 1: Configure your LLM

In this example, we'll use OpenAI GPT 5.4 Nano. Configure it in Configuration > Setup > Setup LLM-RAG. Leave the OpenAI URL field empty.

(Screenshot: Setup LLM-RAG configuration page)

Step 2: Start the Agentic AI container

Navigate to Containers (Configuration > Containers) and start the Agentic AI container (5.2.3). This container provides the backend infrastructure that powers the interactive chat UI.

(Screenshot: Containers page with the Agentic AI container started)

Step 3: Load your logs into the chat UI

With your setup configured and running, navigate to LLM Chat (Assistants > Interactive Log Analysis > LLM Chat). You'll see two key components: a search bar and a chat interface. Load your logs by using SPL in the search bar to specify the index, source type, and time range.
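A search along the following lines loads the logs into the chat UI. The index name, source type, and time range here are illustrative assumptions; substitute the values that match your environment:

```spl
index=payments sourcetype=payment_rail:transactions earliest=-30d@d latest=now
```

Keeping the time range tight (for example, the last 30 days) limits how much log text the LLM has to process per question.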

Let's begin by analyzing payment rail logs (for example, Zelle, FedNow, ACH, and others). Each log entry is quite verbose. With these logs loaded, we can now leverage an LLM to parse and analyze them.

(Screenshot: LLM Chat with payment rail logs loaded)

Step 4: Query the LLM for data insights

We can now query the LLM for data insights. In this example, we'll ask it to summarize the logs, report on rejected log statuses, and generate a review-ready report.
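A prompt along these lines works well for this step. The wording is illustrative, not a required syntax; the chat UI accepts free-form natural language:

```text
Summarize these payment rail logs. List every transaction with a
rejected status, including the rail used, amount, rejection reason,
and account details, and format the output as a review-ready report.
```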

(Screenshot: chat response summarizing rejected transactions)

The summary report shows whether our payment rails exceed average rejection thresholds and provides specifics on each rejected payment: rail used, amount, rejection reason, and account information.
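To spot-check the LLM's summary against raw counts, you can run a conventional SPL aggregation over the same data. This is a sketch; the index, source type, and field names (`rail`, `status`, and the `REJECTED` value) are assumptions that depend on how your logs are parsed:

```spl
index=payments sourcetype=payment_rail:transactions earliest=-30d@d
| stats count(eval(status="REJECTED")) AS rejected, count AS total BY rail
| eval rejection_rate=round(rejected/total*100, 2)
```

If the LLM's reported rejection figures diverge from this baseline, that is a signal to re-check the loaded search results before acting on the summary.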

Step 5: Visualize the data

For better data visualization, ask the LLM to generate a table.

(Screenshot: chat response containing the markdown table)

The LLM creates a markdown table. Open it in a markdown viewer to see the data and analyze the results.

(Screenshot: markdown table rendered in a viewer)

The table displays our summarized data, including the single failed transaction with comprehensive details: failure reason, transaction amount, source and destination financial institutions, and transaction IDs.
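The generated table follows ordinary markdown conventions. The column set and values below are a hypothetical sketch of that layout, not the actual output:

```markdown
| Transaction ID | Rail  | Amount  | Status   | Failure Reason     | Source FI | Destination FI |
|----------------|-------|---------|----------|--------------------|-----------|----------------|
| TXN-000123     | Zelle | $250.00 | REJECTED | Insufficient funds | Bank A    | Bank B         |
```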

In just a few minutes, we were able to query the data, generate a report focused on transaction status with relevant metrics, and create a table for quick review - all without writing complex SPL or having to manually review every log entry.

Next steps

After you've validated this with a few logs, you can expand to other use cases such as:

In addition, these resources might help you understand and implement this guidance: