Querying payment rail logs in natural language using Splunk DSDL's LLM-powered chat UI
Financial institutions generate massive log volumes from core banking systems, ATM networks, and wire transfer applications. Payment rails (for example, Zelle, FedNow, ACH, and wire) are essential data sources for daily transactions. Investigating system health, analyzing transaction success rates, and catching failures quickly typically requires specialized SPL knowledge, which limits the ability of junior analysts to obtain rapid insights.
The Splunk App for Data Science and Deep Learning (DSDL) with an LLM-powered chat UI changes this: analysts can now query data in natural language and receive actionable results instantly. This solution allows analysts to "talk" to the data (for example, "Show me all Zelle transfers that have failed in the last 30 days below the average baseline"), accelerating incident response and democratizing data access.
Prerequisites
- Splunk Enterprise or Splunk Cloud Platform
- Splunk App for Data Science and Deep Learning (DSDL) installed and configured
- A configured compute environment (Docker or Kubernetes) linked to DSDL
- Access to an LLM API (such as OpenAI, Anthropic, or a local model)
How to use Splunk software for this use case
Starting with DSDL 5.2.3, users can query their logs conversationally through a built-in chat interface. You can configure any LLM of your choice to work with this process, whether an on-premises or a SaaS model.
Step 1: Configure your LLM
In this example, we'll use OpenAI GPT-4.1 Nano. Configure it in Configuration > Setup > Setup LLM-RAG. Leave the OpenAI URL field empty.
Step 2: Start the Agentic AI container
Navigate to Containers (Configuration > Containers) and start the Agentic AI container (5.2.3). This container provides the backend infrastructure that powers the interactive chat UI.

Step 3: Load your logs into the chat UI
With your setup configured and running, navigate to LLM Chat (Assistants > Interactive Log Analysis > LLM Chat). You'll see two key components: a search bar and a chat interface. Load your logs by using SPL in the search bar to specify the index, source type, and time range.
Let's begin by analyzing payment rail logs (for example, Zelle, FedNow, ACH, and others). Each log entry is quite verbose. With these logs loaded, we can now leverage an LLM to parse and analyze them.
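A search along these lines could be used to load the logs; the index name, source type, and time range here are illustrative assumptions, so substitute the values that match your environment:

```
index=payment_rails sourcetype=payment_transaction_logs earliest=-30d
```

Only the events returned by this search are made available to the chat session, so scoping the search tightly keeps the LLM focused on the relevant logs.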

Step 4: Query the LLM for data insights
We can now query the LLM for data insights. In this example, we'll ask it to summarize the logs, report on rejected log statuses, and generate a review-ready report.

The summary report shows whether our payment rails exceed average rejection thresholds and provides specifics on each rejected payment: rail used, amount, rejection reason, and account information.
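For comparison, producing a similar rejection-rate breakdown without the chat UI would require hand-written SPL along these lines (the index, source type, and field names are assumptions for illustration, not taken from the actual logs):

```
index=payment_rails sourcetype=payment_transaction_logs earliest=-30d
| stats count(eval(status="REJECTED")) as rejected count as total by rail
| eval rejection_rate=round(100*rejected/total, 2)
| eventstats avg(rejection_rate) as avg_rate
| where rejection_rate > avg_rate
```

This is the kind of multi-step aggregation a junior analyst would otherwise need to construct manually; the chat UI lets them express the same intent in plain language.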
Step 5: Visualize the data
For better data visualization, ask the LLM to generate a table.

The LLM creates a markdown table. Open it in a markdown viewer to see the data and analyze the results.

The table displays our summarized data, including the single failed transaction with comprehensive details: failure reason, transaction amount, source and destination financial institutions, and transaction IDs.
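As an illustration only, with entirely invented placeholder values, the markdown table returned by the LLM might resemble the following:

```
| Rail  | Transaction ID | Status   | Amount  | Source FI | Destination FI | Failure reason     |
|-------|----------------|----------|---------|-----------|----------------|--------------------|
| Zelle | TXN-1042       | REJECTED | $250.00 | Bank A    | Bank B         | Insufficient funds |
```

Because the output is plain markdown, it can be pasted directly into a wiki page, ticket, or report for review.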
In just a few minutes, we queried the data, generated a report focused on transaction status with relevant metrics, and created a table for quick review, all without writing complex SPL or manually reviewing every log entry.
Next steps
After you've validated this with a few logs, you can expand to other use cases such as:
- Automating alert investigations by integrating LLMs with the Splunk platform and Confluence: Learn how to unify disparate tools, transforming multi-step manual IT investigations into automated, conversational workflows.
- Leveraging LLM reasoning and ML capabilities for Jira alert investigations: Learn how LLMs can correlate security alerts, create enriched datasets, and apply ML models, improving model impact through better data association and enabling anomaly detection across varied data streams.
In addition, these resources might help you understand and implement this guidance:
- Splunk Lantern Article: Leveraging generative AI capability in security operations with the AITK
- Splunk Lantern Article: Creating, monitoring, and optimizing LLM retrieval augmented generation patterns
- Splunk Lantern Article: Using the Cisco Time Series Model 1.0 on DSDL 5.2.3
- Splunk Blog: Faster insights with third-party LLM services in Splunk search
- Splunk Blog: Talk to your logs: LLM-powered chat UI in DSDL 5.2.3
- Splunk .conf25 session: Integrating GenAI with Splunk to drive digital transformation


