Using comparative testing to drive app performance

 

You are a software developer responsible for the performance of your app. You already know how to use Splunk Synthetic Monitoring to incorporate performance testing into the software development lifecycle, helping you to catch problems before they surface in the hands of end users. You're now looking to optimize your app's performance to take it from good to great.

There are many types of changes you could make to improve front-end performance, such as evaluating third-party vendors, cleaning up first-party code, or adjusting image optimization levels, but without evidence it's not clear which changes will produce the best outcomes. You need a process that uses evidence from comparative testing to assure both you and your stakeholders that the changes you choose will deliver the outcomes you're looking for.

How to use Splunk software for this use case

The process you'll follow begins with measuring your current performance, then researching potential solutions, creating a hypothesis for the change and its outcomes, testing it with Splunk Synthetic Monitoring, analyzing the results, and finally communicating your findings in a report.

Measure your performance

In this stage, you'll assess your performance today and identify what could improve experiences with your app for you, your end users, or your colleagues.

  • Measure performance. Depending on your area of focus, you could run uptime tests for expected endpoint responses, API tests for availability and performance of APIs, or browser tests for performance of browser apps.
  • Pull KPIs into custom views. This helps you visualize trends over time and anomalies in your app health. If you don’t yet have KPIs and are focused on the front-end user experience, a good place to start is Google’s core web vitals, which quantify major aspects of the digital user experience. One way to pull these KPIs programmatically is shown in the sketch after this list.
  • What do the metrics say? How do they compare to web standards and expectations?
  • Why are performance results important? What is your end-user feedback? How does performance impact your colleagues and your business? Documenting this will bolster the evidence you are gathering and help validate changes alongside the objective performance metrics. Examples of feedback to document include:
    • Complaints that your registration forms are down
    • One-star app ratings due to bugs with the login process
    • Comments from colleagues that your internal portal is unresponsive, slowing down or preventing your team from doing what they need to
  • Map recent and upcoming projects affecting the app. Investigate internally to map activities such as major code deployments, content campaigns, infrastructure changes, or vendor changes. Events such as these could directly correlate to changes in KPIs that you have noticed in the past or want to change in the future.
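
If you want to pull these KPIs programmatically as well as into charts, the sketch below shows one way to retrieve a core web vital from the Splunk Observability Cloud API with Python. Treat it as a minimal sketch: the realm, token, metric name, and the /v2/timeserieswindow endpoint are assumptions to verify against your own org and the current API reference.

```python
"""Pull a core web vital KPI from Splunk Observability Cloud.

Minimal sketch: realm, token, metric name, and endpoint are assumptions --
substitute the values your org actually uses.
"""
import time

import requests

REALM = "us1"                      # assumption: your Observability Cloud realm
API_TOKEN = "YOUR_ORG_API_TOKEN"   # assumption: an org or session API token
METRIC = "synthetics.lcp.time.ms"  # assumption: the LCP metric emitted by your browser test

now_ms = int(time.time() * 1000)
resp = requests.get(
    f"https://api.{REALM}.signalfx.com/v2/timeserieswindow",
    headers={"X-SF-TOKEN": API_TOKEN},
    params={
        "query": f'sf_metric:"{METRIC}"',
        "startMs": now_ms - 7 * 24 * 60 * 60 * 1000,  # last seven days
        "endMs": now_ms,
        "resolution": 3_600_000,                      # one-hour rollups
    },
    timeout=30,
)
resp.raise_for_status()

# The response keys each time series by an internal ID and returns
# [timestamp, value] pairs for the requested window.
for tsid, points in resp.json().get("data", {}).items():
    values = [value for _, value in points]
    if values:
        print(f"{tsid}: average LCP over the window = {sum(values) / len(values):.0f} ms")
```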

Research solutions

In this stage, you'll research possible solutions for the issues you identified in the previous step.

  • Research online. Use resources like web.dev to better understand how to address certain metrics, and read case studies to learn how other organizations have improved their user experience.
  • Consider running a single comparative synthetics test on another site that you want to emulate. What do you learn from the resources loaded and resulting metrics? Do they have significantly fewer JavaScript calls or much smaller content size? These can be clues to how you could make your own improvements.
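
To make that comparison concrete, the sketch below contrasts two HAR exports (for example, saved from a comparative synthetics run or from browser developer tools) by request count and transferred bytes per content type. The file names are placeholders for exports you have saved locally.

```python
"""Compare resource weight between your page and a page you want to emulate.

Minimal sketch: reads two HAR exports and contrasts request counts and bytes
per MIME type. File names are placeholders.
"""
import json
from collections import Counter


def summarize(har_path: str) -> dict:
    """Count requests and response bytes per MIME type in a HAR file."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    requests_by_type, bytes_by_type = Counter(), Counter()
    for entry in entries:
        content = entry["response"]["content"]
        mime = content.get("mimeType", "other").split(";")[0]
        requests_by_type[mime] += 1
        bytes_by_type[mime] += max(content.get("size", 0), 0)  # size can be -1 when unknown
    return {"requests": requests_by_type, "bytes": bytes_by_type}


ours = summarize("our_site.har")           # placeholder file name
theirs = summarize("site_to_emulate.har")  # placeholder file name

for mime in ("application/javascript", "text/javascript", "image/jpeg", "image/webp"):
    print(
        f"{mime}: us {ours['requests'][mime]} requests / {ours['bytes'][mime] / 1024:.0f} KB, "
        f"them {theirs['requests'][mime]} requests / {theirs['bytes'][mime] / 1024:.0f} KB"
    )
```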

Create a hypothesis

In this stage, you'll hypothesize the impacts of the change that you want to put in place, detailing what specific outcomes could be achieved.

  • Keep the focus as narrow as possible. There are enough variables to track already, and the more changes you introduce at once, the more difficult it becomes to analyze the results and determine the success of any single change.
  • Be specific. Write a specific hypothesis and detail its expected outcome. Examples could include:
    • Switching our CDN provider to vendor Y will significantly improve our average visually complete time across all device types at the highest connection speed.
    • Changing our image compression by 20% will move our sitewide Largest Contentful Paint time into Google’s “good” range for our mobile site with LTE connection.
    • Implementing lazy loading will reduce interactive time on our desktop landing page by at least 1000ms.
  • Define what will mark the end of the exercise. This is important to avoid scope creep. It could be a strict time limit, such as concluding after two weeks of testing, or a more flexible limit, such as a maximum of three sprints.
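
One way to keep each hypothesis specific and the exercise bounded is to record it as structured data with a measurable target and a hard stop date. The sketch below is purely illustrative; every field value is hypothetical.

```python
"""Record each hypothesis as a specific, testable statement.

Minimal sketch: all field values are hypothetical examples.
"""
from dataclasses import dataclass
from datetime import date


@dataclass
class Hypothesis:
    change: str     # the single change being introduced
    metric: str     # the KPI the change should move
    scenario: str   # device, page, and connection speed under test
    target: str     # the specific, measurable outcome
    ends_on: date   # hard stop to avoid scope creep


lazy_loading = Hypothesis(
    change="Implement lazy loading for below-the-fold images",
    metric="Time to Interactive",
    scenario="Desktop landing page at cable connection speed",
    target="Reduce interactive time by at least 1000 ms",
    ends_on=date(2025, 7, 1),  # for example, after two weeks of testing
)
print(lazy_loading)
```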

Test with Splunk Synthetic Monitoring

In this stage, you'll use Splunk Synthetic Monitoring to test your hypothesis.

  • Make it easy to evaluate the changes. Consider using a cookie or another custom header to clearly delineate the different app scenarios being tested.
  • Create dedicated tests for this exercise. Use a naming convention to make it clear what each test is covering.
  • Add labels. If creating a multi-step test, add a readable label to each step and group steps into transactions for easier analysis.
  • Make comparative tests identical to each other in every way other than the change you are introducing. Configuration options to evaluate include geolocation, frequency, connection speed and device type, any steps within the test, and any other custom headers.
  • Understand your variables. Are you truly comparing apples to apples, or are you comparing Granny Smith apples grown in New Zealand winter to Gala apples grown in Texas summer? The more variables, the more difficult it is to analyze and draw conclusions.
  • Understand environment differences. Factors such as backend resources and content parity can affect your test outcomes, so you should test both scenarios in the same environment if possible.
  • Create and visualize events aligned to app-impacting changes. This places your data in context so it’s easier to analyze the results compared to code releases, content changes, outages, etc. Events can be managed via GUI or via API so you can automate them into your processes as much as possible.
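
For the last point in the list above, the sketch below sends a custom event marking an app-impacting change through the Splunk Observability Cloud events ingest API. The realm, token, event type, and dimensions are assumptions; verify the endpoint and payload shape against the ingest API reference for your realm.

```python
"""Send a custom event marking an app-impacting change.

Minimal sketch: realm, token, event type, and dimensions are assumptions --
check the ingest API reference for your realm before relying on this.
"""
import time

import requests

REALM = "us1"                       # assumption: your Observability Cloud realm
INGEST_TOKEN = "YOUR_INGEST_TOKEN"  # assumption: an org ingest access token

event = {
    "category": "USER_DEFINED",
    "eventType": "cdn-vendor-swap",  # illustrative name for the change under test
    "dimensions": {"app": "storefront", "experiment": "cdn-comparison"},
    "properties": {"description": "Switched landing page traffic to CDN vendor Y"},
    "timestamp": int(time.time() * 1000),
}

resp = requests.post(
    f"https://ingest.{REALM}.signalfx.com/v2/event",
    headers={"X-SF-Token": INGEST_TOKEN, "Content-Type": "application/json"},
    json=[event],  # the endpoint accepts a list of events
    timeout=30,
)
resp.raise_for_status()
print("Event sent:", resp.status_code)
```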

The screenshot below shows an example of a test in Splunk Synthetic Monitoring created with names and descriptions that will be easy to analyze in the next part of the process.

Chart with Largest Contentful Paint over time, focused on an event that might impact the metric

Analyze outcomes

In this stage, you'll examine what happened in your tests, and explore why those things happened.

  • Leverage visualizations in context with events over time. You can also use views of tests running in parallel to better understand the results.
  • How does the data support your hypothesis? Are you seeing what you expected?
  • What else happened? Did unexpected variables come into play? Did the change potentially introduce other user experience implications (good or bad)?
  • Are your success criteria fulfilled? Are there any caveats or suggestions for additional experimentation?
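
To make that analysis repeatable, the sketch below compares Largest Contentful Paint samples from a baseline test and its variant, then checks the variant against the hypothesis threshold. The sample values are illustrative; the 2,500 ms cutoff is Google's "good" threshold for LCP.

```python
"""Compare a baseline test against its variant and check the success criterion.

Minimal sketch: the sample values are illustrative -- in practice, export them
from the two parallel synthetic tests.
"""
from statistics import mean, quantiles

# Largest Contentful Paint samples (ms) from the two parallel tests.
baseline_lcp = [3100, 2950, 3240, 3005, 3180, 3090]
variant_lcp = [2310, 2280, 2410, 2355, 2490, 2375]


def p75(samples):
    """75th percentile, the cut Google uses for core web vitals."""
    return quantiles(samples, n=4)[2]


delta_ms = mean(baseline_lcp) - mean(variant_lcp)
print(f"Mean LCP improved by {delta_ms:.0f} ms")
print(f"p75 LCP: baseline {p75(baseline_lcp):.0f} ms -> variant {p75(variant_lcp):.0f} ms")

# Success criterion from the hypothesis: variant p75 LCP within Google's
# "good" range for LCP (2500 ms or less).
print("Hypothesis supported:", p75(variant_lcp) <= 2500)
```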

The screenshot below shows an example of two tests in Splunk Synthetic Monitoring running in parallel to show the difference between one configuration and another.


Conclude and communicate

In this stage, you'll summarize your process and make recommendations for where to go from here.

  • Write up your findings. Clearly summarize the experiment, your findings, and your conclusions. Remember to include “why this is important”, or how your conclusion impacts your organization.
  • Include visualizations with context. For example, “This is the interactive time on our mobile landing page with and without 500KB of JavaScript”, or “This is the visually complete time on our desktop product page with CDN A versus CDN B”.
  • Propose next steps based on your conclusions. Clearly outline urgency, responsibility, and level of effort as relevant.

Next steps

After your changes are pushed to production, make sure to continue monitoring your performance KPIs. You can use charts to map KPI changes to your app changes. Don't forget to celebrate your project's success, and keep searching for new opportunities to improve user experience.

If you’re getting more advanced in your observability practice, consider adding Splunk Real User Monitoring and Splunk Application Performance Monitoring.

  • Splunk Real User Monitoring - What are your end users actually experiencing in your browser or mobile application?
  • Splunk Application Performance Monitoring - How are your services performing related to critical business workflows?
  • You can also use custom tags and events to capture specific data about your users and their workflows, and to more easily compare metrics between your app scenarios.

These resources might help you understand and implement this guidance: