> This is awesome. Does this also work for XenForo gallery files or Resource Manager download files?

Yes it does. I've been using it since it was released and it works with the RM / XFMG / Showcase add-ons.
> Now the question is, does this addon still upload to the site's server and copy the file to S3 or using the addon the upload to site can be totally bypassed?
>
> Can this addon upload the file directly to S3 from the RM? If we could upload directly to S3 we would possibly bypass the host restrictions.

Having looked at the code, the answer is no. While it is technically possible to code what you want, it's not that straightforward: whatever upload mechanism is used would need to be assigned IAM credentials for one-time use only, which really isn't practical. Dropbox, for example, runs on S3, but they don't allow uploading directly to S3 either.
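For what it's worth, S3 itself does support browser uploads without handing out IAM credentials: the server signs a short-lived presigned URL and the client PUTs the file straight to S3. This is not something the add-on does; below is a stdlib-only sketch of SigV4 query signing (bucket, key, and credentials are placeholders; verify the canonical-request layout against AWS's Signature Version 4 documentation before relying on it):

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_put(bucket, key, access_key, secret_key,
                region="us-east-1", expires=300):
    """Build a SigV4 presigned PUT URL offline; the client can then
    PUT the file straight to S3 with no further credentials."""
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    qs = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                  for k, v in sorted(params.items()))
    # Canonical request: method, URI, query string, headers, payload hash
    canonical = "\n".join(["PUT", "/" + quote(key), qs,
                           f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                                hashlib.sha256(canonical.encode()).hexdigest()])

    def hsign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    # Derive the signing key: date -> region -> service -> "aws4_request"
    k = hsign(hsign(hsign(hsign(("AWS4" + secret_key).encode(),
                                datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{quote(key)}?{qs}&X-Amz-Signature={signature}"
```

The signing happens entirely offline, so the web server never exposes its secret key; the URL expires after `expires` seconds.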
It is not possible unfortunately, at least for now. Exactly as @Jim Boy said, you can work around it like that.

Hi xfrocks,
I have a question...
My client is hosted on a shared host which provides some other business services, due to which she cannot change her host. Now we have installed XF and the RM, and we want to upload large PDF files of up to 80 MB into the RM. The upload of large files into the RM fails because of the low values of the upload_max_filesize and post_max_size PHP settings.
One solution we are thinking of is storing the large PDF files from the Resource Manager on Amazon S3.
Now the question is: does this add-on still upload to the site's server and then copy the file to S3, or can the upload to the site be bypassed entirely when using the add-on?
Can this add-on upload the file directly to S3 from the RM? If we could upload directly to S3, we could possibly bypass the host restrictions.
I hope my question is clear...
Regards
> One enhancement I would like to see in this is the ability to add custom items to 'meta' when saving to S3, that way I can do things like add Cache-Control and Expires values.

That's a good idea. Maybe I will add a new option for S3 for that.
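For context, the 'meta' being asked for maps onto standard HTTP headers sent with the S3 PUT request. A small stdlib sketch of building them (the header names are real S3/HTTP headers; the one-year TTL is just an example value):

```python
from email.utils import formatdate
from time import time

def cache_headers(max_age_days=365):
    """Headers one could attach to an S3 PUT to make objects cacheable."""
    ttl = max_age_days * 86400
    return {
        "Cache-Control": f"public, max-age={ttl}",         # cache for a year
        "Expires": formatdate(time() + ttl, usegmt=True),  # RFC 1123 HTTP-date
    }

print(cache_headers()["Cache-Control"])  # public, max-age=31536000
```

S3 stores these with the object and replays them on every GET, so they only need to be set once at upload time.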
> Does this add-on support using cloud storage that is S3-protocol compatible?

Yes.
> I was playing around with the add-on before deploying it. I noticed that using the tool to move attachments to S3 deletes them locally, which seems unusual. Why would I want to immediately delete them locally? What if something goes wrong and I want to revert without delay?

Keep a backup? Personally I didn't bother moving our large collection of existing attachments over; people don't tend to view the older stuff much. I've written my own script for migrating, but I haven't bothered to use it.
> I also noticed that once files are uploaded to S3, they won't be uploaded again, even if they're restored locally and deleted from S3. This is concerning, as we'll likely be using a combination of local storage and S3 during the transition to prevent downtime. We'd like to synchronize twice, but with this design, we're fairly certain that won't do any good.

Why would you do that? S3 is rock solid and web servers are inherently ephemeral. If you are hosting on EC2, expect your web server to disappear at any time; if you aren't prepared for that, then you aren't using EC2 correctly.
> I'm considering migrating to local storage in /data/ first, which will be quick, then using s3cmd or a similar tool to synchronize with S3 twice while switching over (once before switching, once after switching). Is there any reason that wouldn't work? It seems like a more durable solution. On the same note, it would be convenient to be able to move data between storage options before switching where they are served from in order to avoid downtime--that is, with the built-in tool, instead of this hackish method.

I don't see why you would have any downtime in relation to attachments. Just turn it on and it works: attachments that were local will continue to be served from the local server, and new files will be stored on S3 and served from S3.
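The two-pass sync plan works because the second pass only has to copy whatever arrived while the first (slow) pass was running. A toy model with dicts standing in for local storage and the S3 bucket (hypothetical illustration, not the add-on's code):

```python
def sync(src, dst):
    """Copy keys that are missing or changed in dst; return what was copied."""
    copied = [k for k, v in src.items() if dst.get(k) != v]
    for k in copied:
        dst[k] = src[k]
    return copied

local = {"a.pdf": 1, "b.pdf": 1}
s3 = {}
sync(local, s3)           # pass 1: full copy, site still serving locally
local["c.pdf"] = 1        # uploaded by a user while pass 1 ran
second = sync(local, s3)  # pass 2 after the switch: only the stragglers
print(second)             # ['c.pdf']
```

This is the same incremental behaviour `s3cmd sync` provides: the second invocation is nearly instant because only the delta is transferred.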
> In order to migrate the attachments with the built-in tool, you have to first change where your attachments are being served from. Between the time you switch the settings from default to S3 and the time migration completes, some attachments will be unavailable, with older attachments becoming available before more recent attachments. In my testing, the add-on refused to serve local files while in S3 mode; it would only use S3, and would not fall back to local storage.

Why bother migrating existing attachments? The add-on does serve existing local files if the system wasn't set up to use S3 at the time the attachment was uploaded. The attachments are flagged in the database as being on S3 or not; if they are not on S3, they get served locally. Switching on the add-on means existing attachments will continue to be served from the local server and any new attachments will be served from S3. Bake the old data into your AMI and you will be fine. When I switched to this arrangement, we had 4GB of attachments; not that that is a large amount, but significant enough, and we've had zero problems with this add-on on a very large installation. Chopping and changing is a really bad idea.
> Right now I'm using aufs to mount a bridge at /internal_data/; after the add-on "deletes" all of the attachments, I just remove the whiteout files (rm -f **/.wh.* in the writable directory). It's a bit cumbersome to configure, though.

That sounds like over-engineering and could potentially lead to other issues related to how XF uses data within the internal_data directory. I've found it best just to leave them alone on a per-webserver basis.
That's why I've written my own script to migrate, it will copy over the existing locally held data into S3. It will copy everything over, I'll then test on my test machine and if happy I'll make the database change to get the attachment add-on to use the S3 held data rather than the locally held data. But I'm really in no hurry to d that as it works perfectly well now anyway.
I hope you aren't doing anything like that with the external data directory; the simplest option there is to register an S3 stream.
> It's unlikely that we'd use aufs as a permanent solution on production.

I just don't know why you need any solution. I've used Gluster etc. in the past, but XenForo wasn't really designed to be distributed. I've found that there is no need to have a shared internal_data directory as long as you use this add-on. If you do share the directory, it is just another thing that could fail, and if you are running multi-AZ as well, it just adds to your bill. We will scale from one to as many as eight web servers in a day on the most punishing of XenForo sites, and I have never seen any issues at all related to each web server maintaining its own internal_data directory.
> We're not willing to risk losing data.

You seem to be seriously underestimating the reliability of S3: it's 11 9's, not 5 as you earlier stated. Plus you can turn on versioning and, if you're really keen, run an s3cmd sync command from a non-AWS box to keep a copy in a third-party location. Add in a regular backup of your core software (e.g. add-ons etc.) and a daily backup of the database and you'll be protected in the most catastrophic of circumstances. Not to mention appropriate use of IAM to insure against acts of stupidity.
> I was playing around with the add-on before deploying it. I noticed that using the tool to move attachments to S3 deletes them locally, which seems unusual. Why would I want to immediately delete them locally? What if something goes wrong and I want to revert without delay?

There is an option called "keep local copy" that will do what you want. It basically keeps a copy in the default XenForo internal_data directory, so you can disable the add-on at any time and files will still be served without disruption.
class bdAttachmentStore_Zend_Service_Amazon_S3 extends Zend_Service_Amazon_S3
{
    public function _makeRequest($method, $path = '', $params = null, $headers = array(), $data = null)
    {
        if (isset(self::$_httpClient)) {
            // Reset leftover parameters on the shared HTTP client before reuse;
            // don't bother if no client exists yet (no point creating one just to reset it)
            self::getHttpClient()->resetParameters(true);
        }

        return parent::_makeRequest($method, $path, $params, $headers, $data);
    }
}
AND (
    attachment_data.bdattachmentstore_engine NOT LIKE ?
    ' . (empty($defaultEngine) ? 'AND attachment_data.bdattachmentstore_engine IS NOT NULL' : 'OR attachment_data.bdattachmentstore_engine IS NULL') . '
)
...
array(
    $position,
    empty($defaultEngine) ? '' : $defaultEngine,
    $options['batch']
)
> Go to AdminCP > Tools > Rebuild Caches > Move Attachment Data

When we use this tool, are the attachments at /internal_data/ deleted, or do I need to delete them manually after rebuilding? ("keep local file" not selected in the options)
> I have some beginner questions here.
>
> 1. Does it make sense to use this add-on for a forum from the start (so no members, no attachments at this moment)? Or should someone wait until they have GBs of attachments?

Do it from the start. Apart from anything else, it means that should anything happen to your server, the attachments will be safe; you'll only have to back up the core code and the database. Any good architecture for a web application needs its static assets stored in a location accessible from multiple servers, much like the database is. I'd argue that XF's biggest design fault is that it doesn't do this out of the box.
> 2. What is this CDN you are talking about? Some enable it, some don't. What do I need it for?

A CDN means data is served to users from a location closer to them; for some sites it has advantages, for others it doesn't. It can be enabled later without major headaches.
> 3. Let's say a forum with 30 GB of attachments and 5000 users. How much would Amazon S3 charge, estimated?

Depends a bit on how much gets downloaded. Let's say 50 GB a month: about $5 a month.
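That $5 figure is easy to reproduce. A rough estimator using assumed us-east-1-style prices (around $0.023/GB-month for standard storage and $0.09/GB for data transfer out; check current AWS pricing, these rates change over time, and request charges are ignored here):

```python
def monthly_cost(storage_gb, egress_gb, storage_rate=0.023, egress_rate=0.09):
    """Very rough S3 bill: storage plus data transfer out, nothing else."""
    return storage_gb * storage_rate + egress_gb * egress_rate

print(round(monthly_cost(30, 50), 2))  # 5.19 -> "about $5 a month"
```

Note that egress dominates: the 30 GB sitting in the bucket costs well under a dollar, while the downloads account for nearly all of the bill.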
> 4. Can we use this to save the attachments on our own personal computer, which is not online 24/7?

I don't believe so; it's not a backup tool.