Struggling with Your Website Speed Optimization Efforts? It May Be the Way You’re Testing.

Has a client ever come to your agency with concerns about their website’s page speed performance? Were they worried about the negative impact that slow page speeds might have on their Google rankings, conversion rates, and the overall experience their site provides users? Have you wondered the same thing about your agency’s own website?

It’s a common problem that may have a surprising underlying cause. Sure, website speed and performance optimization efforts may actually have come up short. However, it’s also possible that page speed tests and performance monitoring are simply being completed under inconsistent conditions.

Without consistent website page speed testing conditions, it’s difficult to accurately quantify any positive or negative impact of optimization efforts. If your agency or your clients are worried about the negative effects of slow page speeds, make sure the right problem is being addressed. Otherwise, your development team may wind up wasting resources on efforts you cannot accurately evaluate.

Why Consistent Conditions Are Critical for Ongoing Website Performance Monitoring

Every tool has its use. But using a tool outside of its intended purpose won’t yield great results. Sometimes, you may be testing website performance with tools you don’t fully understand.

When that happens, it’s possible to overlook inconsistencies in the way a tool functions and measures site performance. And, if the tool isn’t specifically designed for long-term monitoring, you may wind up comparing data from different timeframes that really don’t correlate. This is dangerous because it can send your team on a wild goose chase.

Certain tools are perfectly fine for one-off performance tests. Maybe a change was made to the website and you want to do a quick check on Google’s UX-centric performance metrics. That’s all well and good, but if you decide to run another test a few weeks later and compare the data, you may not be comparing apples to apples, and that’s when you can lead your team astray.

The same device, using the same connection, at the same time of day, tested with the same tool, may yield wildly different results. This is the scenario that sends SEO teams and developers spiraling. What’s wrong? Why is performance slipping? The answer may be that it isn’t. Instead, certain testing conditions beyond your control aren’t consistent.

On the other hand, if the only difference between two tests is the code being tested (that is, all other factors such as network conditions, testing location, and device resources are equal), your team can be confident about the potential impact of the steps they take in light of those test results. This is why consistency is key: it minimizes the variables and maximizes confidence.

What Conditions May Influence Website Performance Monitoring Tools and Site Speed Results?

When you test a site using the same tool and the same device at the same time of day from the same location, differing results don’t necessarily mean something is wrong with the site. It can mean you chose the wrong tool for the job.

Certain website performance monitoring tools are built with long-term, comparative monitoring in mind. Using these tools for that purpose ensures consistency in those testing elements that may otherwise be beyond your control.

When choosing a website performance testing tool, look for one offering consistent conditions such as:

  • Consistent Testing Locations - The physical distance your request has to travel before reaching the server that handles it can affect metrics like “time to first byte” (TTFB). If you’re using a tool that doesn’t test consistently from the same location, you have no idea how far last week’s request traveled compared to today’s. Any comparison of those time to first byte metrics is worthless as a result.
  • Consistent Network Speed - Network conditions aren’t always optimal in the real world, yet some tools test under optimal conditions every time so they can return results quickly, then simulate slower conditions by nudging the metrics down a bit to emulate “real-world” performance. Others actually throttle the network during the test, which yields more accurate results and the consistency necessary for ongoing monitoring (see the sketch after this list).
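
To make that distinction concrete, here is a minimal sketch of what pinned-down throttling can look like when a test is driven by Google Lighthouse (the auditing engine discussed later in this article). The settings keys follow Lighthouse’s documented configuration format; the specific numbers are illustrative assumptions, not recommendations.

```ts
// Illustrative Lighthouse settings that hold network and CPU conditions constant
// between runs. The values below are example assumptions, not recommendations.
const consistentSettings = {
  // 'devtools' applies real throttling during the test; 'simulate' (the default)
  // runs on a fast connection and mathematically adjusts the metrics afterward.
  throttlingMethod: 'devtools' as const,
  throttling: {
    requestLatencyMs: 150,            // added round-trip latency, in milliseconds
    downloadThroughputKbps: 1638.4,   // roughly a slow 4G connection
    uploadThroughputKbps: 675,
    cpuSlowdownMultiplier: 4,         // emulate a slower device CPU
  },
  onlyCategories: ['performance'],
};

export default consistentSettings;
```

Because those values never change between runs, a shift in the resulting metrics points at the site itself rather than at whatever network the test happened to run over.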

If you or your clients don’t understand which conditions your website monitoring tool holds constant and which it lets vary, you can’t rely on that tool to provide reliable data over a stretch of time. There’s only so much you can control on your end. Take the time to determine what it is you need to accomplish, and choose a tool appropriate for that need.

Accurate Ongoing Performance Monitoring Is Impossible with Inconsistent Testing Conditions

Few will argue that site speed doesn’t matter. At the most basic level, it’s clear that slower websites have higher bounce rates and earn fewer conversions, and attention spans in the digital sphere aren’t getting any longer. Dig deeper into how user experience affects Google rankings through the recent Core Web Vitals update, and it only reinforces that website speed optimization is worth investing in, provided that optimization is actually happening.

Let’s take a look at a real-world example of how two website performance tests may play out with very different results. We’ll even use two real, popular tools to highlight the difference.

One-Off vs. Long-Term Monitoring Tools: A Real-World Example of the Difference

Google’s PageSpeed Insights (PSI) and other tools like it allow users to simply plug in a website URL and receive an analysis from Google’s servers in seconds. What many users don’t realize is that this test is subject to the inconsistencies mentioned above.
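
For that kind of one-off snapshot, PSI can also be queried programmatically. The sketch below is a rough example assuming Node 18+ (for the built-in fetch); it calls the public PageSpeed Insights v5 endpoint, and https://example.com stands in for the page you actually want to test.

```ts
// One-off PageSpeed Insights check. The target URL is a placeholder.
const target = 'https://example.com';
const endpoint =
  'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' +
  `?url=${encodeURIComponent(target)}&strategy=mobile`;

const response = await fetch(endpoint);
const data = await response.json();

// Lab score from the Lighthouse run PSI performed on its own servers (0-1 scale).
const score = data?.lighthouseResult?.categories?.performance?.score;
console.log('Performance score for this single run:', score != null ? score * 100 : 'n/a');
```

Each call runs from whatever server happens to handle the request at that moment, which is exactly the kind of variable this section is concerned with.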

PSI is perfectly fine for generating a snapshot of how that site performed at that moment, on that day, under those specific conditions. But how will today’s performance stack up against an evaluation three months down the road, after an intense period of website performance optimization?

The development team may do good work and make effective improvements. Unfortunately, the PSI test three months down the road may send a request to a server three times farther away than the previous test did. As a result, the new performance numbers look subpar in comparison.

Now the development team is demoralized. Product managers are upset about perceived time and money wasted. Everyone still wants better results, and another round of optimization ensues. Who knows what variables might influence those results even further down the road, though? It becomes a cycle of unpredictability, where everyone winds up comparing apples to oranges.

Tools designed for ongoing monitoring, on the other hand, provide the consistency necessary for gathering comparable data. At first glance, long-term monitoring tools like Calibre or SpeedCurve look to share a lot of DNA with PSI. All of them use the Google Lighthouse speed and quality auditing tool, for instance.

However, these tools satisfy the criteria we outlined above. They put controls in place to ensure consistent testing conditions. That allows for meaningful data comparisons and quantifiable results of website performance optimization efforts over time.
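
If your team wants to approximate that control without a third-party service, Lighthouse itself can be run programmatically with the conditions pinned in code. Below is a rough sketch, assuming the lighthouse and chrome-launcher npm packages and a machine and network connection that you keep stable yourself:

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Launch headless Chrome and run a performance-only audit with applied throttling,
// so successive runs from the same machine and network stay comparable.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse('https://example.com', {
  port: chrome.port,
  output: 'json',
  onlyCategories: ['performance'],
  throttlingMethod: 'devtools', // apply real throttling using Lighthouse's default constants
});

if (result) {
  const lhr = result.lhr;
  console.log('Performance score:', (lhr.categories.performance.score ?? 0) * 100);
  // 'server-response-time' is Lighthouse's time-to-first-byte style audit.
  console.log('Server response time (ms):', lhr.audits['server-response-time'].numericValue);
}

await chrome.kill();
```

Calibre and SpeedCurve effectively run this kind of harness for you, and they also control the pieces you can’t control from a laptop, such as the physical test location.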

Find a Website Performance Monitoring Tool That Works for You

Whichever performance monitoring tool you or your clients choose, make sure it does what you need to get accurate testing data. Do the research so internal teams aren’t led astray, chasing their own tails and wondering why optimization efforts appear to yield inconsistent results.