[DigitalPoint] App for Cloudflare®

[DigitalPoint] App for Cloudflare® 1.9.1.1

The cache purge when a new post happens isn't a new thing (no more API calls are being made than before); it only moved from running on the front-end to the backend (before, the end user had to wait for the API call to run; now they don't). Cloudflare allows 1,200 API calls per rolling 5-minute window. How many posts are you getting?
 
Not even 150 in a day. You might remember I was getting hit by this issue a few months ago, and I even tried getting support after upgrading to Pro. I ended up disabling the addon eventually. I re-enabled it a week ago and it has been fine since then. I only got around five errors a few hours ago, pointing to the purge cache process. I'll keep an eye on this and will disable guest caching if the error keeps recurring. My account is cursed somehow 😂
 
Well, as I said, nothing changed with the number of API calls being made, the only change was how they were called (backend vs. frontend). So rolling back to an old version isn't going to help in your case.

I suppose if you flat out don't want to purge the cache when a user makes a post, you could do that by commenting out this line in the DigitalPoint\Cloudflare\Job\PurgeCache.php file:
PHP:
// $cloudflareRepo->purgeCache(['files' => $this->data['urls']]);

That doesn't really solve the underlying issue of your Cloudflare account not being able to make API calls in a normal way, though... purging the cache is just one of many API calls the plugin makes for everything, so hiding/silently failing API calls really isn't the solution. Other, more important things are going to randomly fail as well (like IP blocking with the firewall or changing settings).
 
After installing the latest version and enabling guest caching, now seeing hundreds of "ErrorException: Cloudflare: 971: Please wait and consider throttling your request speed".
 
After installing the latest version and enabling guest caching, now seeing hundreds of "ErrorException: Cloudflare: 971: Please wait and consider throttling your request speed".
There's an API call triggered to purge the cache with each new post if you have guest page caching enabled, so if you get a crazy number of posts, it could cause you to hit Cloudflare's API limits (1,200 API calls per rolling 5-minute window). That would have been the same with previous versions with guest page caching enabled, though.

API limits are per Cloudflare account, not per zone, so if you have a lot of sites making API calls, the limit is shared between them all.
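The "rolling 5-minute window" mechanic described above can be sketched as a toy rate limiter. This is a hypothetical Python illustration (the class and method names are made up, not the app's or Cloudflare's actual code); the constants match the limits mentioned in this thread:

```python
import collections
import time


class RollingWindowLimiter:
    """Toy model of a Cloudflare-style limit: N calls per rolling window."""

    def __init__(self, max_calls=1200, window_seconds=300):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = collections.deque()  # timestamps of recent calls

    def allow(self, now=None):
        """Return True if a call is allowed right now, recording it if so."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen out of the rolling window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

Because the window rolls rather than resetting on a fixed boundary, a burst of posts keeps blocking further calls until enough old timestamps age out, which matches the "wait and consider throttling" behavior of error 971.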
 
There is one way you could get a lot of cache purge API calls that wouldn't be that out of the ordinary to happen even if you weren't getting 100+ new posts per minute. In addition to cache purging URLs related to the thread, if someone (for example a moderator) was deleting a ton of posts at once, that could trigger API limits because the cache purge is triggered when a post is deleted as well.

I did end up making a new option, Purge cache when post is created or deleted, that at least allows a site to choose whether to do the cache purge that normally happens when guest page caching is enabled. So a site that gets a very large number of new (or deleted) posts in a short period of time can disable the cache purge (and just be mindful of how long they choose to cache guest pages).
 
The API call to purge the cache by URL is already rate limited a little differently than other API calls: it's limited by the number of URLs you are purging rather than by the number of API calls.

The single-file purge rate limit for the Free plan is 1,000 URLs/min.

The system already groups URLs into a single API call for each post (there are potentially up to 4 URLs that get purged with each purge action).

For the sake of the math, let's say each post created or deleted needs to purge the maximum (4) URLs. Currently those 4 URLs are purged with a single API call, so you start to hit the URL purge limit when you get to 250 new or deleted posts per minute.

You can purge up to 20 URLs per API call, so the only real difference batching would make is that you reach the 1,000 URLs-per-minute limit with 50 API calls rather than 250.

Since the limit is per URL (not per API call), the added complexity of a purge queue system doesn't really buy much. The API requests have already been moved to the backend, so they don't affect the user at all (they are done with the XenForo job manager system). Beyond the added complexity, a purge queue would actually make URLs slower to purge: you would need to add some wait time before purging so additional URLs could be queued up and run together.

Either way, the limit is 1,000 URLs/min, regardless of whether that's done in 50 API calls or 250.
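To sanity-check the arithmetic above, here's a hypothetical Python sketch (the constants come from this post; the helper name is made up, not part of the app):

```python
import math

PURGE_LIMIT_PER_MIN = 1000  # Free plan single-file purge limit: URLs per minute
URLS_PER_POST = 4           # up to 4 URLs purged per created/deleted post
MAX_URLS_PER_CALL = 20      # Cloudflare accepts up to 20 URLs per purge call


def purge_batches(urls):
    """Split a URL list into purge-API-sized batches (hypothetical helper)."""
    return [urls[i:i + MAX_URLS_PER_CALL]
            for i in range(0, len(urls), MAX_URLS_PER_CALL)]


# 250 posts/min * 4 URLs each hits the 1,000 URLs/min limit...
posts_per_minute_at_limit = PURGE_LIMIT_PER_MIN // URLS_PER_POST  # 250
# ...whether that's 250 one-post calls or 50 fully-batched calls:
batched_calls_at_limit = math.ceil(PURGE_LIMIT_PER_MIN / MAX_URLS_PER_CALL)  # 50
```

Either way the same 1,000 URLs go out per minute, which is why batching doesn't change when a site hits the limit.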

For very high traffic sites getting hundreds of posts per minute, it probably makes sense not to purge the cache in realtime. Realistically, are unregistered (guest) users really sitting there reloading a thread continuously to see if there are new posts, without registering? Guest page caching is more about making the site extra fast for guests coming in via something like Google search... so how important is it, really, that such a user sees a post made in the last couple of minutes on the last page of the thread? Again, it only applies to unregistered users anyway...
 
I understand. This happened at 6 AM, so we'll see when it happens again.

Regarding the Cloudflare connectivity issues we are experiencing: there are 6 lost packets out of 10k, and that's on IPv6 only. We see no loss over IPv6 when pinging Google.

Linode support thinks there is some rate limiting in place.

After 10 days, Cloudflare support still hasn't replied...
 
Not specific to Cloudflare, but I've definitely seen more general networking issues with IPv6 vs. IPv4, so it doesn't surprise me if you are seeing packet loss somewhere along the route with IPv6 but not IPv4. I've seen things like hardware switches that mostly (but not completely) support IPv6: they work most of the time, but when something starts adjusting packet frame sizes, the hardware doesn't know what to do. It's getting better, but IPv6 is newer, and not all switches fully support the IPv6 standard (network equipment needs to explicitly support IPv6 and all its intricacies). Not saying that's what's going on (because I don't know), but I have seen things like that.

As far as packet loss to Cloudflare but not Google, it's not really an apples to apples comparison unless the network traffic is taking the same route (and for that to happen, the Google data center would more or less need to be plugged into the same switch as the Cloudflare data center). Different destinations take different routes, and packet loss is dependent on all equipment the traffic passes through along the respective route.

Cloudflare rate limiting would yield an error message about rate limiting; it wouldn't result in simply dropped packets. Cloudflare handles literally trillions of requests per day; it would be problematic and a support nightmare for them (to say the least) if they simply dropped packets whenever too many requests came in.

If it's just an issue with IPv6 and not IPv4, the easiest fix is going to be using IPv4 on your server instead of IPv6. IPv6's advantages aren't as great for servers as they are for end-user devices.
 
Added /etc/gai.conf to prefer IPv4 connections over IPv6, so will see how it goes from now:
Code:
precedence ::1/128        50
precedence ::/0           40
precedence 2002::/16      30
precedence ::/96          20
precedence ::ffff:0:0/96 100
 
Preferring IPv4 over IPv6 is probably going to have the same net effect as just disabling IPv6 (I'd think anyway) because all normal things that support IPv6 are also going to allow IPv4 (basically everything supports IPv4, and only some things support IPv6).
 
digitalpoint updated [DigitalPoint] App for Cloudflare® with a new update entry:

Ability to do edge caching of media attachments

IMPORTANT for existing users: the new functionality requires 2 additional API permissions. You can go to your Cloudflare API Tokens, edit your existing token, and add the following permissions:
  • Account.Allow Request Tracer: Read
  • Account.Intel: Read
At this point, you should have a total of 18 permissions for...

Read the rest of this update entry...
 
Thanks for this! I haven't seen any new errors since my post, so I think I'm not in API hell right now. Those errors cropped up around the time I was merging a bunch of threads, so maybe that also triggered the cache purge like a post delete does, like you mentioned above.
 
Just now updated to the latest version and it broke my entire forum.

Now all I get when I go to my forum home page is this:

An exception occurred: [ErrorException] [E_WARNING] Undefined array key "cfMediaCachingSeconds" in src/addons/DigitalPoint/Cloudflare/Listener/AppPubComplete.php on line 10

  1. XF::handlePhpError() in src/addons/DigitalPoint/Cloudflare/Listener/AppPubComplete.php at line 10
  2. DigitalPoint\Cloudflare\Listener\AppPubComplete::run() in src/XF/Extension.php at line 69
  3. XF\Extension->fire() in src/XF/App.php at line 2990
  4. XF\App->fire() in src/XF/Pub/App.php at line 478
  5. XF\Pub\App->complete() in src/XF/App.php at line 2486
  6. XF\App->run() in src/XF.php at line 524
  7. XF::runApp() in index.php at line 20
Are you sure you updated it (vs. just copying the files)?

If you did update it, you may want to rebuild the master data for the site. For whatever reason the new option didn't get "installed" somehow.
 
About whois... are you using the Cloudflare API for it? I can't find a whois frontend anywhere on the Cloudflare website. I ask because it fails to fetch data for my domain, which somehow also fails on whois.com 😛
 
BTW, the new option to cache media attachments has the potential to make media-heavy sites significantly faster. It stores image, video, and audio attachments (including XF Media Gallery content) at the network edge (in Cloudflare data centers).

It helps especially if you are using a cloud-based filesystem for storing attachments (like R2 or S3). Normally the request flow looks like this:
  • Request goes to origin server
  • Origin server checks permissions for user making request
  • Read attachment via API from cloud (in the case of cloud-based system)
  • Pass through that attachment to the user from origin server
Being able to cache it at the network edge means it's served directly from the closest Cloudflare data center. The downside is that permission checks aren't performed on a per-request basis, so a user who knows the unique URL of a media attachment could potentially bypass them. If you have image attachments that are viewable by some users but not others, maybe it's not a great idea; but if, generally speaking, all users can view image attachments, it can make things very fast.
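The tradeoff described above can be shown with a toy Python model (all names here are hypothetical illustrations, not the app's actual code): an edge cache hit skips the permission check entirely, so a cached attachment is visible to anyone with the URL.

```python
def serve_attachment(request, edge_cache, origin):
    """Toy model: edge cache hits skip the per-request permission check."""
    url = request["url"]
    if url in edge_cache:
        # Edge hit: fast, served from the nearest data center,
        # but no permission check happens on this path.
        return edge_cache[url]
    # Origin path: check permissions, then read from (cloud) storage.
    if not origin["can_view"](request["user"]):
        return None  # would be a 403 in a real system
    body = origin["storage"].get(url)
    if body is not None:
        edge_cache[url] = body  # cache at the edge for future requests
    return body
```

Once a permitted user has warmed the cache, a request from a user who would fail the permission check is still served from the edge, which is exactly the bypass caveat above.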
 
About whois... are you using the Cloudflare API for it? I can't find a whois frontend anywhere on the Cloudflare website. I ask because it fails to fetch data for my domain, which somehow also fails on whois.com 😛
Yes, it's done with the Cloudflare API. There's nothing I can do to change the results... all the app does is pass through the results from the API call.
 