Performance impact of geography

Sim

Just thought I'd share some interesting data from an (unexpected!) server move last week.

I've been hosting my forums in Linode's Singapore DC for quite a few years now - primarily because it was the closest (network-wise) to Australia, where most of my PropertyChat audience is.

However, ZooChat now has a very international audience, with around 40% of users coming from the US and 25% from the UK, with Australia coming in only 3rd with 5% of the users (overall, North America is 43%, Europe 36%, Asia 10%, Oceania 7%).

So Singapore is a long way from the majority of our users.

I had been planning on relocating the server for quite some time - but there were higher priorities, so it never got done. However, a tricky server issue last week which took my main server offline forced me to build a new server and I figured that was as good a time as any to move the site. I chose Linode's Newark DC on the east coast of the US on the basis that it should be close enough to most US users and much closer to Europe than the US west coast.

Things went well (it took quite a few hours to transfer the image galleries from Singapore to Newark!), and then yesterday I discovered some interesting data when reconfiguring StatusCake to monitor the new servers - I had forgotten I'd set up StatusCake page speed monitoring a while back, and you can see the impact the server move had!

(Note that the site uses Cloudflare - not sure how much impact that has, if any)

From the UK:

[StatusCake page speed chart: UK]

... I estimate roughly a 2x page speed boost for my UK audience.


From the US:

[StatusCake page speed chart: US]

... I estimate as much as a 3x page speed boost for my US audience!


Naturally there was always going to be a penalty from moving the server further away from Australia - but given the US-centric nature of the internet, most countries should have pretty decent connectivity to there, so I'm hoping the impact is not as bad as the gains for most other users.

From Australia:

[StatusCake page speed chart: Australia]

I estimate approximately a 40-50% page speed penalty from Australia - not as bad as I had feared it would be!

It's interesting to see how much more stable the page speed performance is on the new server! The site is now on its own server, so it doesn't have to contend for resources with PropertyChat like it did previously.

So, I'm going to call that mission accomplished - the performance boost (for the majority of my audience) from relocating the server to the east coast of the US is significant and worthwhile.
 
You do understand that in most instances, you're literally talking about a blink of an eye, right? I'm in LA. The difference between a server here in LA and one almost anywhere in the world (e.g. Africa) is less than 200ms. I'm right around 200ms from Johannesburg. Unless you're running a game server or super sensitive voice server or such, where latency is a big deal, 200ms is nothing.

There are plenty of people who disagree with me, and they are more than welcome to, but all the graphs in the world and speed penalty tables and everything else are a complete waste of time.
 
Unless you're running a game server or super sensitive voice server or such, where latency is a big deal, 200ms is nothing.

But it's not 200ms. Take a look at the network tab in Chrome's inspect mode - there are dozens and dozens of different requests made to the server to load a single page. Some of them block others - not everything is done in parallel - and (in your example) every single request adds 200ms to the load time.

So total page load time can easily be more than 1s slower with higher latency. This is very noticeable to end users.
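
To make that concrete, here's a rough back-of-envelope sketch (TypeScript, with purely illustrative numbers - not measurements from my site) of how RTT compounds across the connection handshake and dependent request rounds:

```typescript
// Toy model of how round-trip time (RTT) compounds during a page load.
// Assumptions (mine, for illustration): one TCP + TLS handshake (~2 RTTs),
// plus a few dependent request "rounds" (HTML -> CSS/JS -> fonts/images),
// each costing at least 1 RTT.

function estimateLoadMs(
  rttMs: number,
  dependentRounds = 4, // assumed depth of the request waterfall
  serverTimeMs = 150   // assumed backend processing time
): number {
  const handshakeRtts = 2; // TCP + TLS handshake before the first request
  const networkMs = (handshakeRtts + dependentRounds) * rttMs;
  return networkMs + serverTimeMs; // ignores transfer time and rendering
}

const scenarios = [
  ["nearby server (~20ms RTT)", 20],
  ["cross-continent server (~200ms RTT)", 200],
] as const;

for (const [label, rtt] of scenarios) {
  console.log(`${label}: ~${estimateLoadMs(rtt)} ms`);
}
// nearby server (~20ms RTT): ~270 ms
// cross-continent server (~200ms RTT): ~1350 ms
```

Even in this toy model, which ignores transfer time entirely, the extra RTT alone adds roughly a second of wait - and real pages often have deeper waterfalls than four rounds.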

Indeed, the charts above quite clearly show that the total page download time is as much as 4 seconds quicker for my US audience ... and 4 seconds is a very long time.

Indeed, in my experience, for any interactive site (or application) where users are clicking on lots of things, sometimes quite quickly, anything more than a few hundred ms is very noticeable - the site just feels sluggish in response. This is very different to a purely content-based site where users will click, read the content and then move on. When engaged with an interactive site, the time the user has to wait between interactions is very important.

Of course, whether the users care or not is another matter completely - and it may well be that people who are used to slow internet and high latency wouldn't notice if you moved the server. But it absolutely does matter to many people - and more importantly - it matters to Google.

You just have to take a look at Google Search Console's crawl stats charts. Since I made the change, the "Time spent downloading a page" figure has dropped by at least 60% and, correspondingly, the number of pages crawled per day has tripled. Having more pages indexed (or re-indexed) is always a good thing - especially for a commercial site which relies on search engine rankings and traffic.
 
(Note that the site uses Cloudflare - not sure how much impact that has, if any)
It can help a lot. With XenForo, a cache hit rate of nearly 50% for HTML/avatar/CSS requests (with a page rule so CSS is cached) is quite doable. This helps massively, as cache validation requests hit Cloudflare's CDN rather than your backend, cutting out a significant chunk of request latency.

Since XenForo isn't a single-page application, ensuring all the additional new-page requests hit a local cache really helps the user experience.
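
As a rough illustration, here's roughly what the CSS part looks like done with a Cloudflare Worker instead of a page rule - the css.php path is XenForo's stylesheet endpoint, but the TTL and the Worker approach itself are just a sketch, not my actual configuration:

```typescript
// Sketch only: force edge caching of XenForo's css.php responses (normally
// bypassed because of the query string), which is roughly what the page rule
// above achieves. Assumes the Cloudflare Workers runtime and its types
// (@cloudflare/workers-types); the TTL is an illustrative value.

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // XenForo serves compiled stylesheets via /css.php?css=...&s=...&l=...
    if (url.pathname === "/css.php" && request.method === "GET") {
      return fetch(request, {
        cf: {
          cacheEverything: true, // cache despite the query string
          cacheTtl: 86400,       // keep it at the edge for a day
        },
      } as RequestInit);
    }

    // Everything else follows normal Cloudflare caching rules.
    return fetch(request);
  },
};
```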
 
It can help a lot.

Sure - I've no doubt it helps when compared to a non-Cloudflare fronted site - which is one of the reasons I use it!

However, given the almost ubiquitous nature of Cloudflare's CDN POPs, I'm not sure what impact (if any) moving a site that already uses Cloudflare to a new geography would have - you would presumably already be getting all of the benefit for any cached data, regardless of the geographical location of the origin.

I expect Argo might have some impact on overall latency - but given the image-heavy nature of ZooChat (it uses 6x as much bandwidth as PropertyChat while having only 75% of the visitors!), I found that Argo's bandwidth-based pricing was simply too expensive to justify for ZooChat - which is a pity, because our highly distributed audience would benefit greatly from using it. If I could somehow exclude the gallery images from Argo routing I'm sure it would be much more justifiable.
 
But it's not 200ms. Take a look at the network tab in Chrome's inspect mode - there are dozens and dozens of different requests made to the server to load a single page. Some of them block others - not everything is done in parallel - and (in your example) every single request adds 200ms to the load time.

It doesn't quite work that way, but that's okay. Instead of using charts and Chrome Inspect, use some real-world examples. I recently tested a 3 MB website with 100 requests - large by any standard. The page load difference is less than 1.5 seconds to pretty much anywhere in the world; generally, it's less than a second.

Nobody cares. Seriously... nobody cares. On a normal-size website with a normal number of requests, as I mentioned, you're literally talking about less latency than the blink of an eye.

To quote the great @Manster54 "It's a lot like a dog chasing his tail. Nothing gained, but apparently amusing to the dog."

Keep chasin'! ;)
 
If I could somehow exclude the gallery images from Argo routing I'm sure it would be much more justifiable.
Not sure that's possible given that gallery images need to be streamed by XenForo for security purposes. They can certainly be stored in alternative locations but the full images themselves are required to be served by the frontend. At least that's my understanding.
 
If I could somehow exclude the gallery images from Argo routing I'm sure it would be much more justifiable.
Not sure that's possible given that gallery images need to be streamed by XenForo for security purposes. They can certainly be stored in alternative locations but the full images themselves are required to be served by the frontend. At least that's my understanding.
You can use Cloudflare Workers that pattern-match attachment URLs, and then inject edge-based access control and redirection. It would require a tiny add-on to expose a 'permission check for attachment' endpoint. Workers can modify the request to fetch from a non-Argo-enabled subdomain, and I think fetch from the local cache for that request.
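
Something along these lines, as a sketch only - the /api/attachment-check endpoint and the media.example.com origin host are hypothetical (the add-on to back them would still need to be written), and the attachment URL pattern is assumed:

```typescript
// Sketch of the idea above (Cloudflare Workers runtime types assumed).
// Intercept attachment URLs, ask a hypothetical add-on endpoint whether the
// attachment is publicly viewable, and if so serve it from the edge cache /
// a non-Argo subdomain instead of streaming it through XenForo.

const ATTACHMENT_URL = /^\/attachments\/.+\.(\d+)\/?$/; // assumed URL shape

export default {
  async fetch(
    request: Request,
    env: unknown,
    ctx: { waitUntil(promise: Promise<unknown>): void }
  ): Promise<Response> {
    const url = new URL(request.url);
    const match = url.pathname.match(ATTACHMENT_URL);
    if (!match || request.method !== "GET") {
      return fetch(request); // not an attachment: pass through as usual
    }

    // Hypothetical 'permission check for attachment' endpoint from the add-on.
    const check = await fetch(`${url.origin}/api/attachment-check?id=${match[1]}`);
    if (!check.ok) {
      // Not publicly viewable (or check failed): let XenForo stream it.
      return fetch(request);
    }

    // Public attachment: serve from the edge cache, filling the cache from a
    // subdomain that isn't Argo-routed so its bandwidth isn't billed.
    const cache = caches.default;
    let response = await cache.match(request);
    if (!response) {
      const originUrl = new URL(url.toString());
      originUrl.hostname = "media.example.com"; // assumed non-Argo host
      response = await fetch(originUrl.toString(), { headers: request.headers });
      ctx.waitUntil(cache.put(request, response.clone()));
    }
    return response;
  },
};
```

The URL pattern and cache key would need tuning to match how attachment URLs are actually built, but the shape of it is a pattern match, a permission call, then a cached fetch.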
 
You can use Cloudflare Workers that pattern-match attachment URLs, and then inject edge-based access control and redirection. It would require a tiny add-on to expose a 'permission check for attachment' endpoint. Workers can modify the request to fetch from a non-Argo-enabled subdomain, and I think fetch from the local cache for that request.
That's a good idea. A fast permission check for public view, then fetch from cache. Most gallery images are public view, so it would offload a lot of bandwidth to Cloudflare.
 