[DigitalPoint] App for Cloudflare®

[DigitalPoint] App for Cloudflare® 1.8.2

@Kirby

BTW, this is the current implementation I'm testing. It (I think) maintains backward compatibility (again, I'm not in a position to change XenForo's fundamentals myself, so I'm not trying to tackle the "should GET requests even have CSRF tokens?" question), and it hopefully solves all the problems.

JavaScript:
XF.config = new Proxy(XF.config, {
	set: function(object, property, value)
	{
		object[property] = value;

		// Only rewrite tagged links when the CSRF token itself changes
		// (otherwise any property assignment would clobber the t= param)
		if (property === 'csrf')
		{
			$('.has-csrf').each(function()
			{
				let url = new URL($(this).attr('href'), XF.config.url.fullBase);
				url.searchParams.set('t', value);
				$(this).attr('href', url.toString());
			});
		}

		return true;
	}
});

It's a little annoying to have to explicitly tag the links it should apply to by adding the has-csrf class, but there aren't a crazy number of them.

So no JavaScript needs to happen on click, it doesn't get applied to links that it may not be intended for, etc.
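To illustrate what the rewrite does to a tagged link, here's a standalone sketch (no XF or jQuery needed; the href, base URL, and token values are made up for illustration):

```javascript
// Standalone illustration of the href rewrite the proxy performs on a .has-csrf link.
const href = '/logout/?t=old-token';

// Resolve relative to the board base URL (stand-in for XF.config.url.fullBase)
const url = new URL(href, 'https://example.com/');

// Swap in the fresh token, exactly as the proxy's set trap does
url.searchParams.set('t', 'new-token');

console.log(url.toString()); // https://example.com/logout/?t=new-token
```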
 
This should work (for core links, probably not for links added by 3rd party add-ons), though I'd probably not use a proxy but an event listener for the keepAlive CrossTab event.
 
Ya, well, if a 3rd party add-on is adding CSRF tokens to GET requests, they're doing it wrong (like XenForo). I can't realistically cover "what ifs" in third-party add-ons. If they want to keep doing it "wrong" just because XenForo does it wrong, it's also not that difficult to add the has-csrf class. 🤷🏻‍♂️

And yes, ideally the CrossTab event would be preferable (even better would be if XF.KeepAlive as a whole wasn't an anonymous function, and then we could just extend XF.KeepAlive.applyChanges()), but I have to work within the bounds of what's available.

Unfortunately it can't be extended since it's an anonymous function, and along the same lines we also can't access XF.KeepAlive.crossTabEvent. And while it probably wouldn't change, in theory someone could change XF.config.url.keepAlive (don't ask me why anyone would do that, but I also don't get why people do a lot of things). So in the off chance XF.config.url.keepAlive was changed, the trigger wouldn't fire.

I'm basically forced to do it the least bad way rather than the best way because of how things are set up.
 
Can I use the same buckets, with the same data inside the buckets, for the same setup as a staging area?

Let's say I have an XF forum on domain.com and the same setup on dev.domain.com – both have the same SQL + files, but instead of having duplicate buckets, I want to use the same buckets (with the same bucket subdomain) for both the live site and the dev area.

At some point, the dev area and the live site will have different attachments inside the buckets, since the dev area won't be constantly synced with the live site's posts and new threads.

Or is there a better solution for having an identical staging copy of the live site that also includes attachments, even as the two diverge over time?
 
If both sites are writing new files, I’d say it’s best to not use the same bucket. If one is only reading, then it should be fine. Just like other components, a shared database between 2 sites, a shared file system between 2 sites probably isn’t a great idea.
 
If both sites are writing new files, I’d say it’s best to not use the same bucket. If one is only reading, then it should be fine. Just like other components, a shared database between 2 sites, a shared file system between 2 sites probably isn’t a great idea.
Is it possible for the dev website to read from the buckets but write to the internal file system?
 
No, it’s not. This applies to any file system adapter that XenForo uses, not just R2. XenForo doesn’t support reading from one file system and writing to another.
 
Not directly related to the XenForo version of the add-on, but I started working on a WordPress version (which is a much better candidate for guest page caching in most cases). The site I'm testing it on went from a 1.5% edge cache hit rate to 83.56%.

1673891240530.png

The hit rate is now roughly 55.7× (5,570% of) what it was, which seems like a decent improvement. 😂
 
I don't know if this is at all possible but would it be possible to integrate Wasabi somehow?

And an actual R2-related question: anyone know how much data can be transferred before the cost kicks in? If I'm not mistaken, you get something like 1 million free requests per month, and after that the cost starts?
 
Can you think of anything that would cause a 403 on core-compiled.js?

I upgraded to 1.5.3 this morning and when I try to update an existing advertising slot or template, I get "Oops! We ran into some problems. Please try again later. More error details may be in the browser console." pointing to the referenced file and error code above.

The only thing that has changed on my site within the last 48 hours is I upgraded the CloudFlare add-on and installed the Better Google Analytics add-on. I only have one other add-on installed "Known Bots" and it's been installed for several years.
 
I don't know if this is at all possible but would it be possible to integrate Wasabi somehow?
Not with this add-on, no. This add-on is specific to Cloudflare (and the R2 API was custom built by me just for R2... it's not a generic S3 library). If you want to use Wasabi and there's nothing out there that already does it for XenForo, you should be able to do it with a generic S3 library like this: https://xenforo.com/community/resou...or-amazon-s3-for-file-storage-in-xf-2-1.6805/

And an actual R2-related question: anyone know how much data can be transferred before the cost kicks in? If I'm not mistaken, you get something like 1 million free requests per month, and after that the cost starts?
You get 1,000,000 free writes (users uploading new attachments and avatars, basically) per month and 10,000,000 free reads (class B operations) per month. Publicly accessible files (in the data directory... things like avatars) can actually be fetched by users much, much more than 10,000,000 times per month in a real-world scenario if you set up caching properly for that data sub-domain. Specifically, the 10,000,000 free "reads" are only counted when whatever it is (like an avatar) isn't in the network edge cache (only actual backend API calls count, even if it's Cloudflare itself edge caching for you). With a properly set up cache rule, you could probably get closer to 100,000,000 or 200,000,000 image views (maybe even a lot more) per month at no cost (again, class B operations only count if the object isn't already in cache).
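To put numbers on the cache-hit point above, here's a quick sketch (my own illustration, not part of the add-on; the request count and hit rate are made-up inputs) of how many class B operations actually get billed:

```javascript
// Only edge-cache misses reach the R2 API and count as class B (read) operations.
function billedReads(totalRequests, cacheHitRate) {
	return Math.round(totalRequests * (1 - cacheHitRate));
}

// 100M image views at a 90% edge-cache hit rate leaves 10M API reads:
// exactly the monthly free allotment.
console.log(billedReads(100_000_000, 0.9));
```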

I'd say even if you have a crazy amount of traffic, you are going to have a hard time going over the free allotment for the underlying API operations (IF you set up caching properly)... at which point you are just left with storage costs (the first 10GB is free, then $0.015 per GB per month). So, for example, if you were storing 1TB of files in R2, the storage cost would be $14.85/month (990GB billable × $0.015 = $14.85), and there are no egress/bandwidth fees.
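The storage math above can be sketched as a tiny function (pricing figures are the ones quoted in this post):

```javascript
// Sketch of R2 monthly storage cost: first 10GB free, then $0.015 per GB-month.
function r2MonthlyStorageCost(storedGB) {
	const FREE_GB = 10;
	const PRICE_PER_GB = 0.015; // USD per GB per month
	return Math.max(0, storedGB - FREE_GB) * PRICE_PER_GB;
}

console.log(r2MonthlyStorageCost(1000).toFixed(2)); // 1TB stored -> "14.85"
```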


Can you think of anything that would cause a 403 on core-compiled.js?

I upgraded to 1.5.3 this morning and when I try to update an existing advertising slot or template, I get "Oops! We ran into some problems. Please try again later. More error details may be in the browser console." pointing to the referenced file and error code above.

The only thing that has changed on my site within the last 48 hours is I upgraded the CloudFlare add-on and installed the Better Google Analytics add-on. I only have one other add-on installed "Known Bots" and it's been installed for several years.
Do you have the actual error from the console?
 
Do you have the actual error from the console?

newerror.png

Weird (and not sure if helpful), but on this particular existing advert slot, if I remove both <script> references, it will then save. Curious, I then went to the Page Container template, since I know it has multiple file references, tried to edit it, and got the same error. If I edit something else within the CP, like a reaction, phrase, etc., it saves fine without issue.
 
I don't know if this is at all possible but would it be possible to integrate Wasabi somehow?
Unrelated to R2, but one thing I've always found annoying about Wasabi is their minimum storage duration for files: 90 days on the normal pay-as-you-go plan. What this means is that if you upload a 1GB file (as an example) and then immediately delete it, you still get charged for 1GB of storage for the next 3 months even though it's long gone. So in a real-world XenForo example, for every attachment a user uploads (even one deleted immediately, before the message it was part of is ever posted), you get billed for that storage space for a full 3 months.

With Cloudflare R2, it's a near-realtime average (actual storage used is calculated hourly and then averaged over the month). There is no such thing as a "minimum storage duration".


Also, Wasabi's "free" egress/bandwidth is only free as long as your monthly egress doesn't exceed your stored data. Which is crazy, because I would expect everything to be downloaded/viewed at least once a month normally.
  • If your monthly egress data transfer is greater than your active storage volume, then your storage use case is not a good fit for Wasabi’s free egress policy
For example, if you store 100 TB with Wasabi and download (egress) 100 TB or less within a monthly billing cycle, then your storage use case is a good fit for our policy. If your monthly downloads exceed 100 TB, then your use case is not a good fit.

If your use case exceeds the guidelines of our free egress policy on a regular basis, we reserve the right to limit or suspend your service.

 