[DigitalPoint] App for Cloudflare®

It can, but you would need to enable it (it doesn't happen automatically just from installing the addon). R2 can be used for the data and/or internal_data folders, keeping permission checking intact.

Thanks, I've done a little more digging into the settings now.

So if I wanted to have minimal disruption to the live forum, I can stage the data using rclone by just doing a raw copy of the relevant data (e.g. /data/ or /internal_data/attachments) to an R2 bucket before enabling the setting. This will result in XF retrieving the files on behalf of the user from the R2 bucket, up until I enable the presigned URLs option, at which point users will grab the attachments directly from Cloudflare.

Does this have an effect on SEO/search engine results? It would seem to me like, if the search engine grabs a presigned URL during indexing (i.e. an attachment that is publicly viewable), the media link will not be relevant to hand out during, say, a Google Image search. Is this a reasonable concern or is there something I'm overlooking?

Edit: this also seems to result in some slightly weird user experiences - like a user can load a page, but then be unable to open an image in a new tab because the link expired. Hm. I understand why this is, just thinking about the tradeoffs.

Really appreciate the amount that you contribute from your own development work. I bought your WordPress plugin and would happily pay for this one if it actually cost money!
 
Thanks, I've done a little more digging into the settings now.

So if I wanted to have minimal disruption to the live forum, I can stage the data using rclone by just doing a raw copy of the relevant data (e.g. /data/ or /internal_data/attachments) to an R2 bucket before enabling the setting. This will result in XF retrieving the files on behalf of the user from the R2 bucket, up until I enable the presigned URLs option, at which point users will grab the attachments directly from Cloudflare.
Presigned URLs are an option, but not a requirement. You don’t need presigned URLs at any point for R2 to work. The rest is correct though… you can use rclone to pre-populate your bucket with existing data before you enable R2 in XenForo.
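If it helps, a minimal rclone setup for R2 looks roughly like this (remote name, keys, account ID, bucket, and key prefixes are placeholders, not the addon's defaults; R2 is reachable through rclone's S3 backend):

```
# ~/.config/rclone/rclone.conf (R2 via rclone's S3 backend)
[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_R2_ACCESS_KEY_ID
secret_access_key = YOUR_R2_SECRET_ACCESS_KEY
endpoint = https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
```

```
# Pre-populate the bucket before enabling the setting in XenForo.
# Bucket name and key prefixes are examples; match whatever layout your setup expects.
rclone copy internal_data/attachments r2:my-xf-bucket/internal_data/attachments --progress
rclone copy data r2:my-xf-bucket/data --progress
```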

Does this have an effect on SEO/search engine results? It would seem to me like, if the search engine grabs a presigned URL during indexing (i.e. an attachment that is publicly viewable), the media link will not be relevant to hand out during, say, a Google Image search. Is this a reasonable concern or is there something I'm overlooking?
If you rely on attachments (the actual attachment hidden behind XenForo user permissions) for search engine traffic, I wouldn’t use the presigned URL option.

Really appreciate the amount that you contribute from your own development work. I bought your WordPress plugin and would happily pay for this one if it actually cost money!
No worries… and thanks!
 
Presigned URLs are an option, but not a requirement. You don’t need presigned URLs at any point for R2 to work.
Am I correct that without presigned URLs there is no permissions check?

Is it still slow due to using HTTP/1.1, or is it using HTTP/2 or /3 now?
 
Am I correct that without presigned URLs there is no permissions check?
There are still permission checks for attachments with or without presigned URLs.

Is it still slow due to using HTTP/1.1, or is it using HTTP/2 or /3 now?
Last I checked, presigned URLs are still limited to HTTP/1.1 (basically because presigned URLs are a part of Amazon's S3 protocol spec... and the entire Amazon S3 API only works under HTTP/1.1).
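If you want to check what a given endpoint actually negotiates, curl can report the HTTP version it ended up using (the URL here is just a placeholder):

```
# Prints the negotiated HTTP version ("1.1", "2" or "3") for a URL
curl -sI -o /dev/null -w '%{http_version}\n' \
  'https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com/my-bucket/some-object'
```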

Either way, presigned URLs have never been a requirement for this addon to function normally. It's an option, not a requirement.
 
Thank you for explaining.
Could you please explain what the point is of presigned URLs, instead of hosting from r2.domain.com/data/attachments/ and serving XF attachments from CF R2?
 
Thank you for explaining.
Could you please explain what the point is of presigned URLs, instead of hosting from r2.domain.com/data/attachments/ and serving XF attachments from CF R2?

Presigned URLs are only used for things that would be in the internal_data directory in XenForo (things that require permission checks). Nothing in the data directory (things like avatars and attachment thumbnails) has permission checking, so presigned URLs are not an option there.

At a basic level, a XenForo attachment is proxied through the /attachments/ route because that route acts as a gate to the actual attachment data that resides in, say, internal_data/attachments/0/100-1234567890abcdef1234567890abcdef.data. The underlying attachment data is never accessed directly because that would bypass permission checks. By default XenForo proxies a local file through the /attachments/ route so it can do permission checking. R2 is effectively the same thing... the data is proxied through the /attachments/ route so permission checking can be done.

Presigned URLs allow you to still have permission checking, but without needing to proxy the data through your server. Basically, presigned or not just comes down to whether you want the attachment data itself to pass through your server. Both cases keep permission checking intact. Presigned URLs might make sense for a server that is bandwidth-constrained... say a user is downloading a 1GB attachment; maybe you don't want 1GB of data going into your server from R2 and back out when you can just send them straight to the R2 server via a presigned URL.
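To illustrate the mechanics (this is a generic AWS SDK for PHP sketch, not this addon's actual code; the endpoint, bucket, key, and credentials are placeholders), a presigned URL is just a normal object URL with a short-lived signature attached, generated after your app has already done its own permission check:

```php
<?php
// Generic sketch: generate a short-lived presigned GET URL for an R2 object.
// R2 speaks the S3 API, so the standard AWS SDK for PHP works against it.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'auto', // R2 uses "auto"
    'endpoint'    => 'https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com',
    'credentials' => ['key' => 'ACCESS_KEY_ID', 'secret' => 'SECRET_ACCESS_KEY'],
]);

// XenForo-style flow: the permission check happens server-side *before* this point.
$command = $s3->getCommand('GetObject', [
    'Bucket' => 'my-xf-bucket',
    'Key'    => 'internal_data/attachments/0/100-example.data', // placeholder key
]);

// The signature expires quickly, so sharing the URL doesn't bypass permissions.
$request = $s3->createPresignedRequest($command, '+15 minutes');
echo (string) $request->getUri();
```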
 
Thank you Shawn!
It seems to me that presigned URLs are not very useful due to the slowness of HTTP/1.1.
We can still use R2 without presigned URLs, but as that hits the server, what is the point of doing this? What are the benefits?
 
HTTP/1.1 is really only noticeably slower when you hit the concurrent hostname limitations (not using multiplexing). For a “normal” site where a user is only downloading 1 attachment at a time, it’s not measurably slower. If you are using full attachments for things like serving images on a page (so the user visiting a page is downloading 20 or 30 attachments concurrently because they visited the page), yes, it will be slower. But honestly, a site is probably doing it wrong in that case because why are you spinning up the entire XenForo stack for every image on a page in order to do permissions checking on the image? It’s a terrible idea for efficiency.

Personally, I use presigned URLs because I use attachments “normally”. I don’t have situations where users are downloading a zillion full attachments at the same time. That frees up bandwidth usage on my servers. But sites are free to do whatever best suits them (that’s why it’s an option).
 
Awesome stuff with this add-on!

Bit of a feature request from our organisation: we'd like to use a token scoped to a specific R2 bucket rather than one giving access to all of our R2 buckets (some of which are private), from a least-privilege data-security perspective.

We've achieved this internally by patching the add-on, but it would be preferable to have this feature built in.
I'm curious if scoping an existing API token to a specific bucket isn't working for you for whatever reason?

If you view your API tokens from within the R2 area of the Cloudflare dashboard, any existing tokens that have R2 access can be edited. Couldn't you just scope a normal token to only allow read/write to specific bucket(s)?
 
After updating to the latest addon version, I get the server error below.

My XF: v2.3.6

How can I fix this error?

[screenshot of the server error]
 
Which setup do you mean?
Your site.

I updated the addon from an older version (I guess 4-5 versions previously).
Unfortunately I don’t remember my previous installed version.
What version of the addon is installed now (as it shows under the Add-ons section of the admin area)? If it's showing the latest version there, it's possible XenForo somehow failed to rebuild the addon when it was installed. You can try rebuilding it manually from the addon page.


Basically it looks like there is a mismatch somehow. Internally (what's installed in XenForo) is earlier than version 1.9.4 (where that option was introduced), but with PHP/addon files from after that (they are trying to use the option).
 
Question on "Guest page caching".

We have a single domain (deeperblue.com) covering both our wordpress (www.) and xenforo (forums.) subdomains.

When I enable Guest Page Caching and look at the rule, it matches on cookies but isn't limited to the forums subdomain, so it looks like it will apply to both the www and forums subdomains, which is not what we want (we run different rules for each subdomain). Is this something we can manually change, or is there something we should be considering?

I compared this to the Media Attachment Caching option, which does limit itself to the attachment directory of the subdomain.
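For what it's worth, I'm assuming we could scope it ourselves by adding a hostname condition to the rule's filter expression; something like the following (cookie names assume XenForo's default xf_ prefix, and I haven't verified this matches what the addon actually generates):

```
(http.host eq "forums.deeperblue.com" and not http.cookie contains "xf_user" and not http.cookie contains "xf_session")
```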
 
Feature suggestion: ability to move the internal_data/image_cache to R2 (and support presigned URLs) for sites that proxy images.
You technically can by adding it to your config.php file. However...

If you move your cache to a remote server, it's not really a cache anymore and sort of defeats the purpose. For example, imagine if your browser cache was in a remote data center and your browser was constantly making requests to check its (no longer local) cache. At that point, it might as well just pull the files from the origin server. The same basic principle applies to the image cache... moving your server's cache to a remote location more or less defeats the purpose of having a cache (it's not much different than fetching from the originating web server vs. a Cloudflare R2 bucket... both are remote to your web server).

Generally speaking, caches are short-lived (non-permanent) items that should be as close to the process using the cache (in this case your web server). It would be similar to putting the internal_data/temp folder in a remote data center.

There's a reason that by default, only the attachments folder within internal_data is sent to R2 (it actually was a lot more work doing it that way than just sending the entire internal_data folder... it definitely wasn't because no one thought of it... hah).

...but if you really want to do it, it's your setup/server. It can be done by adding the info to the config.php file.
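As a rough sketch of what that looks like, XenForo's stock abstracted-filesystem hook in config.php wires a Flysystem adapter to a storage key. Note the stock 'internal-data' key remaps all of internal_data, not just image_cache; the addon's own per-directory config keys may differ, and all names below are placeholders:

```php
<?php
// config.php sketch using XenForo's stock fsAdapters hook (Flysystem v1,
// bundled along with the AWS SDK in recent XF versions).
// Note: 'internal-data' remaps ALL of internal_data; a per-directory mapping
// (e.g. just image_cache) would use the addon's own config keys, not this.
$s3 = function()
{
    return new \Aws\S3\S3Client([
        'version'     => 'latest',
        'region'      => 'auto',
        'endpoint'    => 'https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com',
        'credentials' => ['key' => 'ACCESS_KEY_ID', 'secret' => 'SECRET_ACCESS_KEY'],
    ]);
};

$config['fsAdapters']['internal-data'] = function() use ($s3)
{
    return new \League\Flysystem\AwsS3v3\AwsS3Adapter($s3(), 'my-xf-bucket', 'internal_data');
};
```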
 
Because we've lost over a million historical images linked to our site over the 15 years we've been around, including some that were so popular in our early days that we still regularly get complaints and requests to "fix" them. We're tired of relying on the random external image hosts our members use for various reasons, which then change policies and delete the images over time, causing our members' comics and screenshot runs to be lost forever, often without the original author's knowledge. And don't get me started on the number of people who still think Discord is a good place to host images with the current 24-hour limit on their URLs; that's a whole other restoration project we need to get moving on before they start purging images for good there too... And frankly, I personally am just sick of seeing this all over the forum when I browse old threads:

[attachment: broken image placeholder]

Now, we can't bring back the vast majority of broken links that already exist, but we can make absolutely sure that we don't allow the number of broken images to grow any further. We've tried alternative methods to force users to use the attachment system, but got major pushback because the interface is so clunky when people are trying to insert 100+ images into their posts and need to manage them. And the media gallery is not well-liked either, so we have hundreds of members trying to get around it because it's more efficient to use something like imgbb or whatever they're used to.

Our only remaining option that we're aware of is the proxy image cache. So yeah, we're not treating it as a cache. We're treating it as the only way to preserve as much of our site as possible and safeguard against link rot. And that is an intended use case for the cache; the options menu specifically calls out how to use it to protect against link rot by setting a refresh time when images are retained indefinitely:

[attachment: screenshot of the image proxy options]

Fun note there: if you use the cache refresh time instead of disabling it too, then when your image host replaces the original image with a stock "this image can't be found" placeholder, the cache doesn't protect against that and we still lose the image. So that and the proxy lifetime are both set to zero now. We learned to set them the hard way. But I digress...

If we're not supposed to use it this way, why would they tell us to do it? Our members have been asking us for recommendations on an image host that will truly retain their work indefinitely. We don't know of one, because any time we've suggested one in the past it seems the terms always change, and then we're ultimately responsible (due to making the recommendation) when (not if) something breaks. If we're going to own that anyway, we'd better handle it and host it ourselves.

If you do have a better option, please tell us, because this is all we know how to do in a way that will work within our budget.

EDIT: Oh, we've also tried Andy's convert image and convert image all addons to automatically convert linked images to attachments, but there were too many bugs and edge cases where it actually destroyed some of our posts, and I had to restore them manually from a SQL backup; we can't rely on it. I'm grateful for his addon collection, but it seems his code in these in particular just isn't well-suited to our needs, so after several months trying to make it work (all the while link rot still occurring...) we were forced to remove that as well. Nothing we try does the job we need it to do. And, honestly, what's the difference between us loading it through an "attachments" bucket in R2 vs. a "proxy" bucket in R2? Same resources, same CDN, same fetch times, same storage and transaction pricing... Am I missing something?

Just wanted to surface this because I think it's a good use case for the R2 storage.

The folder is labeled image_cache, but in fact it's just serving data for the image proxy, which isn't exactly a fast cache and may not be ephemeral or easily rebuilt. It's not really intended for quick local lookups; it's there to reduce mixed-content errors and potentially (as @Fullmental is using it, and as my site has used it in the past) to preserve the integrity of discussions as the hotlinked images change or disappear.

edit: sorry @digitalpoint - I deleted my post because I decided to do some thread searching and found this post, so I thought I'd provide a little more context for the request.
 
Just wanted to surface this because I think it's a good use case for the R2 storage.

The folder is labeled image_cache, but in fact it's just serving data for the image proxy, which isn't a fast cache and may not be ephemeral or easily rebuilt. It's not really intended for quick local lookups; it's there to reduce mixed-content errors and potentially (as @Fullmental is using it, and as my site has used it in the past) to preserve the integrity of discussions as the hotlinked images change or disappear.

edit: sorry @digitalpoint - I deleted my post because I decided to do some thread searching and found this post, so I thought I'd provide a little more context for the request.
While it could keep a hotlinked image available in the short-term after the origin server deleted it, it's not intended to be permanent (it's a cache, not storage). If you want to keep hotlinked images permanently, I think there are addons that effectively convert hotlinked images into attachments... that will keep them permanently at that point (sounds like what you are going for... storage rather than cache).
 
@Ridemonkey

Forgot to give you this link in case you really want to do what I consider a terrible idea. 😂

 
While it could keep a hotlinked image available in the short-term after the origin server deleted it, it's not intended to be permanent (it's a cache, not storage). If you want to keep hotlinked images permanently, I think there are addons that effectively convert hotlinked images into attachments... that will keep them permanently at that point (sounds like what you are going for... storage rather than cache).

Sure, I get that. But honestly, I'd rather keep the flexibility. If I use an addon that converts hotlinked images into attachments, I destroy my ability to ever decide "hey this is too much, we'd like to just let the historical stuff go."

The image proxy is a very convenient way to preserve history where we can, but still maintaining some control.

I understand your point. But the feature permits using the proxy as a permanent image store, so it's not unreasonable to use it that way.
 