Who uses SSL with XenForo?

Been playing around with activating SSL on my forums and finally have it how I want:
  • 3 domains, all secured via SSL: one serving dynamic content, one static content, and one running Piwik.
  • All running nginx, including SPDY support.
  • All using ECDHE-RSA key exchange (example cipher settings below).
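Roughly, the relevant nginx directives look like this (the cipher string is only an illustration, not my exact list):

    # Prefer ECDHE-RSA suites; TLS 1.1/1.2 need nginx built against OpenSSL 1.0.1+
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:HIGH:!aNULL:!MD5;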
Next is to ensure that non-SSL embedded content does not throw a mixed-content alert in the browser, and to work out why Facebook/Twitter return the number of likes/tweets under non-SSL but not under SSL.
 
The SPDY draft 2 module for nginx is in beta; I wonder how stable it is for production. I will wait for now.
You can pipe the non-SSL data through your server, but you will absorb the extra bandwidth used. For example, any external image could be pushed through SSL with a simple nginx rewrite rule (roughly sketched below). However, this will open your site to various attacks and defeat the purpose of using SSL in the first place. Personally, I don't allow external images and ask users to attach the images instead, like Facebook does.
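Something like this, purely for illustration (the upstream hostname is a placeholder, and I would not actually deploy it, for the reasons above):

    # Sketch only: serve images from a known plain-HTTP host under the HTTPS vhost,
    # so the browser never requests an http:// URL directly.
    location /ext-img/ {
        proxy_pass http://img.example.com/;      # hypothetical upstream host
        proxy_set_header Host img.example.com;
        proxy_hide_header Set-Cookie;            # do not relay upstream cookies
    }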

Also, on my forum the number of Likes/Tweets displays fine over SSL.
 
Yeah, I know it's in beta; happy to give it a test, and so far so good. As for non-SSL content, I will look at Camo; I don't care about the bandwidth.

Interesting. What settings do you have for Facebook/Twitter? Are they HTTPS callback URLs?
 
Are you talking about the Camo proxy? I'm not sure how suitable the idea of a non-secure-to-secure setup is. Personally, I will never do this, for the simple reason that you open holes in your site security. A file stored outside can be used to pull all your server info, defeating the SSL purpose. Let me know your thoughts on this.

About the Facebook/Twitter settings, I left them at the XenForo defaults. I have used SSL since I opened the forums, and I saw the Facebook button showing 3 likes (definitely not a busy site here).
 
Yes, the same proxy that GitHub uses. How would I be opening up holes in my security? All I am doing is converting something that has already been linked on the site from HTTP to HTTPS, using a proxy to serve the originally linked content in a secure manner.
 
It checks for content type and AFAIK is safe. I haven't had any complaints myself. If you want to take a look at the XF addon once you have Node and Camo running, feel free to shoot me a PM.
 
Sure, send it over if you wouldn't mind. Euro 2012 is taking my time tonight (England are playing, so I've gotta watch it!). I'll have Camo running tomorrow morning.
 
All I am doing is converting something that has already been linked on the site from HTTP to HTTPS, using a proxy to serve the originally linked content in a secure manner.
That is exactly where you have your security issue. You are basically displaying http://xenforo.com/community/styles/default/xenforo/logo.png as https://mysite.com/images/logo.png. Nothing tells me that the image isn't actually a pixel tracker recording data over your SSL connection. At least that's how I see it. If you went through all this trouble to protect your site with SSL, you are opening it back up. And the GitHub comment area is not a place where sensitive data is stored, so obviously they don't care. The project states it clearly: "Camo is all about making insecure assets look secure." They make it look secure; in reality it is not secure at all.

Again, you can do whatever pleases you. I simply shared my thoughts, based on my experience with SSL.
 
Then don't pass X-Forwarded-For headers, and all they see is your Camo server's IP and whatever headers you're sending along. Unless I'm missing something here, AFAIK images are passive assets; the client never connects to the other domain, so there is no cookie stuffing. Where's the security issue? IMHO the Twitter/Facebook/Google+ JS is a bigger issue than proxied images.
 
At work, we use SSL pixel trackers that let us know everything about every user who visits targeted sites we own. Technically, I could take that tracker, post it on any site, and it would start feeding us data. That is what I was referring to. Personally, I simply do not allow external images to be hot-linked on my site. I believe that is the main reason every major service out there hosts all images on their own servers and does not allow external links (Google+, Facebook, Twitter, etc.).
 
Again, how is that pixel of any use to you when the request is fetched by a server sitting between you and the client visiting the site? All you're going to see is one hit from a server, not a client, and the next request you'd see (again from the server) is once that image is flushed out of the cache. In the meantime it could have been served a thousand times and you wouldn't be any the wiser.
 
Exactly my thoughts: you would only ever see my Camo server's IP address and the headers I chose to pass along, not the client's (the Camo server will be serving them the image).
 
Thank you for the info, I did not know this. This looks like an elegant solution; maybe we can work together and share the procedure? Implementing Node/V8 is a breeze. I would not bother maintaining an RPM, as one is provided by the Node guys.
 
Thank you for the info, I did not know this. This looks like an elegant solution; maybe we can work together and share the procedure?
That sounds like a very good idea. As it is, I passed my duct-taped-together addon on to Deebs, and he said he'll have his developer look at it. If he doesn't end up gouging out his eyeballs, fixes up the atrociously inefficient code I hacked together, and is OK with releasing it to the community, I'm all for it. As it stands, what I use rewrites the img BB code URLs into Camo-compatible requests/links, and I'm not caching the results in memcache, so it is quite the resource hog at the moment.

The one issue that will keep being run into is mixed-content warnings when inserting an image into the TinyMCE editor, but maybe that's something you could take a glance at.

Where I got stuck is that Camo uses both a hash and a private key to encode URLs, which (I believe) you shouldn't do on the client side.
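For what it's worth, the server-side part I have in mind looks roughly like this in PHP (function name, host and key are placeholders, and the digest/hex-URL format is what I understand Camo's README to expect):

    // Hypothetical helper, not the actual addon code: build a Camo-compatible
    // URL by signing the target image URL with HMAC-SHA1 and the shared secret.
    function buildCamoUrl($imageUrl)
    {
        $key  = 'shared-secret';                 // must match the key the Camo server uses
        $host = 'https://camo.example.com';      // placeholder Camo hostname
        $digest = hash_hmac('sha1', $imageUrl, $key);
        return $host . '/' . $digest . '/' . bin2hex($imageUrl);
    }

    // e.g. buildCamoUrl('http://xenforo.com/community/styles/default/xenforo/logo.png')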

Implementing Node/V8 is a breeze. I would not bother maintaining an RPM, as one is provided by the Node guys.
It is. That, plus monit to check whether the server is alive and restart it if necessary, makes it a breeze.
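Something along these lines in monit, for example (pidfile, init script and port are placeholders; Camo listens on 8081 by default, if I remember right):

    # Illustrative monit stanza: restart camo if its HTTP port stops answering.
    check process camo with pidfile /var/run/camo.pid
        start program = "/etc/init.d/camo start"
        stop program  = "/etc/init.d/camo stop"
        if failed host 127.0.0.1 port 8081 protocol http then restart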
 
Unfortunately my developer is not near his development laptop until the weekend, but it will be something I throw at him when I get hold of him :)
 
I'm mostly interested in the Camo steps, guys. So far, I have the OpenSSL 1.0.1c RPMs for CentOS 5/6 available to everyone. Next, we install Node.js and V8, which are needed by Camo. I have not looked at the configuration file yet; do we have a basic example to test it and see if it works with some basic image URL?

I would like to start with some basic tests, just to make sure everything works, and look at the headers being sent. Thanks.
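Something as simple as this would do for a first pass, I think (assuming Camo's default port and a URL signed with the shared key; the bracketed parts are placeholders):

    # Sanity check against a locally running Camo: request a signed URL
    # and inspect the headers it sends back.
    curl -I "http://127.0.0.1:8081/<hmac-sha1-digest>/<hex-encoded-image-url>"
    # A valid digest for an image URL should return an image content-type;
    # a bad digest or non-image content should be rejected.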

@Deebs: Did you enable the Google 64-bit optimizations in OpenSSL? I have them enabled in my RPMs.
 
I have Camo running in my test environment but haven't had much chance to do any work with it; the test harness works so far for me. Hopefully over the weekend I can nail the whole lot down and get a nice config I am happy with for Camo.

As for the Google optimisations: yes, they are enabled and working in OpenSSL 1.0.1c. Due to my setup, I have statically linked OpenSSL 1.0.1c into my nginx binary (leaving the stock install of OpenSSL alone). My build is now also using the hardware AES-NI instructions on the E5 processor (this setup is not publicly accessible at the moment).
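For anyone curious, the static link is just a matter of pointing nginx's configure script at the OpenSSL source tree (paths below are illustrative); enable-ec_nistp_64_gcc_128 is the OpenSSL build option behind the Google 64-bit EC optimisations:

    # Example: build nginx against a local OpenSSL 1.0.1c tree instead of the system libs
    ./configure --with-http_ssl_module \
                --with-openssl=/usr/local/src/openssl-1.0.1c \
                --with-openssl-opt=enable-ec_nistp_64_gcc_128
    make && make install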
 
I have Camo running in my test environment but haven't had much chance to do any work with it; the test harness works so far for me.
Is this the default config? I'm going to test this over the weekend; post it here so we can do it together.
Due to my setup, I have statically linked OpenSSL 1.0.1c into my nginx binary (leaving the stock install of OpenSSL alone).
Are you on a Red Hat-based distro? I personally recompiled all web-dependent RPMs against 1.0.1c (nginx, PHP, MariaDB, Sphinx, etc.) and kept the 0.9.8e libs for the OS dependencies. See why and how I did it in my tutorial.
 