Would you recommend an SSD-powered server for speed optimization?

New server ordered: AMD Opteron CPU, 64GB DDR3 ECC RAM, and three drives:
  • 1st (primary): Intel 520 240GB SSD with 30% over-provisioning (~170GB usable)
  • 2nd (1st backup): Intel 520 240GB SSD with 30% over-provisioning, backed up on a 6-hour cycle as a straight drop-in replacement if the primary fails
  • 3rd (2nd backup): 250GB HDD for nightly backups (since SSDs do have a finite life and will eventually need replacement)
(y)

Should be fun. :D

Go on, give us a price :D
 
Speed, really - 6Gb/s SATA, higher IOPS than the 320 series, a 5-year warranty, and an Intel-customised SandForce controller.

And I went with Intel over some of the drives that scored slightly faster in the tests because I want reliability.

The bottleneck with my current 5-year-old 32-bit dual-core server is disk I/O - I'm simply hitting the limits of what the machine can do. I've spent a lot of time optimising it (making sure as much as possible is cached) and it scores really well in tests, etc., but my main site just keeps getting busier and busier (and is attracting more and more bots and scrapers into the bargain). So rather than just clone to an SSD, I thought I might as well go the whole hog and get a new box with lots of RAM that will serve me well over the next 3-5 years of growth. (y)

You ran into exactly the same situation we had. Disk I/O on our database drive was killing us and driving the server load high (thanks to swapping to disk), partly because we were on an older server running 32-bit FreeBSD, so we weren't even using all of our memory. I had every optimization installed that I could find and was 100% "tweaked out", so it's not like I had lost any performance due to my own ineptitude. ;) APC and Sphinx were the two most dramatic improvements we ever made on the server - APC literally cut our load in half, and Sphinx had nearly the same effect. This was while running vB 3.7, I might add.
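
For anyone wondering, getting APC going was mostly just a couple of lines of ini config - the values below are purely illustrative (not our exact settings), and the exact syntax varies a little between APC versions:

    ; apc.ini excerpt - illustrative opcode-cache settings only
    extension = apc.so
    apc.enabled = 1
    apc.shm_size = 128M   ; shared memory for the opcode cache - size it to fit your PHP codebase
    apc.stat = 1          ; re-check file mtimes so edited scripts get picked up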

We went with 16GB for now on our new server and it is proving to be plenty, as we're not swapping to disk anymore and we have some nice large caches set up for MySQL. It actually does help to have the database on a second disk on a busy server - I can only imagine how much worse our swapping would have been if we'd had only a single disk!
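
Putting the database on its own disk is just a mount plus a datadir change - the device and path below are made up for illustration (you'd also stop MySQL, copy the old data directory across, and fix ownership before restarting):

    # /etc/fstab - second disk mounted just for the database (example device/path)
    /dev/sdb1  /data/mysql  ext4  defaults,noatime  0  2

    # /etc/my.cnf - point MySQL at it
    [mysqld]
    datadir = /data/mysql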

We went with two Intel SSDs also. And since our host runs a "shadow drive" on each (giving us, in reality, four SSDs), it takes my mind off of the finite life of SSDs in general since we always have a backup disk ready to swap in place if the main drive fails. We're happy with the setup, and thanks to 15 years of loyalty, the hosting company worked with us to build a server that fit our needs.

One curious thing: after converting to XF from vB 3.7, we had some really poor performance. Pages were taking several seconds to load. But now it runs OK...? Maybe the caches were being refreshed? It still runs noticeably slower than vB 3.7 but I have not yet been able to do any tweaking since we have so many issues after the conversion (not XF's fault--vB's archaic permission system left me with a lot of garbage to clean up).
 
The default engine for MySQL (prior to 5.5) was MyISAM - your vB install will likely have been using this engine. XF uses InnoDB (a different engine) and unless you've optimised your MySQL config to account for this you may find it running slower than it should.

Have a look in the server resources section here on XF.com for InnoDB optimising tips. Perhaps start a thread with your my.cnf file contents and see what advice you get. There may be room for even more improvement.
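
By way of example only - the right numbers depend entirely on your RAM and data size, so post your actual my.cnf before changing anything - the InnoDB settings people usually look at first are along these lines:

    # my.cnf excerpt - illustrative starting points, not recommendations for any particular box
    [mysqld]
    innodb_buffer_pool_size = 8G        # the big one: cache as much of the working set in RAM as you can afford
    innodb_log_file_size    = 256M      # larger logs smooth out write-heavy bursts (on older MySQL, remove the old ib_logfile* after changing this)
    innodb_flush_method     = O_DIRECT  # avoid double-buffering through the OS page cache
    innodb_file_per_table   = 1         # one .ibd file per table instead of one ever-growing ibdata1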

BTW, did you set "noatime" and "nodiratime" on your SSDs?
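
On ext4 that's just a pair of mount options in /etc/fstab, something like this (the UUID is obviously a placeholder):

    # /etc/fstab - SSD filesystem with access-time updates disabled
    UUID=xxxx-xxxx-xxxx  /  ext4  defaults,noatime,nodiratime  0  1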
 
We had InnoDB on our busiest tables on vB, so everything's set up properly for XF. As I mentioned, it was odd that it ran slowly at first, but it is OK now (if a hair slower). I have more pressing issues to worry about, so any further tweaking will have to wait. I have some staff members who are missing the moderating permissions they need ... vB left us with a mess there, unfortunately (very long story).

Our host handles all the hardware and OS settings - we're on a managed host. I don't have the time to babysit hardware, so we gladly pay for them to handle it all. I may ask them about those two SSD settings - they likely have it covered already. Seeing how much abuse we've dished out to our dedicated servers over the past several years, they know what to expect. :D
 
BTW, I know this has been mentioned here before (including myself), but wow, visits and participation are up after the conversion. My other smaller forums saw a nice bump afterward, and this "big board" was no exception: we have around 700 online and our quad-core server is cruising along at about a 1.15 load. Not even breaking a sweat!
 
Do a graph of your daily "likes" in about six months time and you'll see a nice upward curve - people really like the like button. I've seen a similar increase in participation overall in my own big board too and have no regrets about moving to XF. (y)

Just looking forward to a time when development restarts. There's so much untapped potential - it would be great to see it capitalised on and not squandered. :D
 
I'm looking forward to it also--there is a lot of goodness ahead once development restarts. I already have half a dozen features my staff is requesting that are missing from vB 3.7. Once I iron out the multiple conversion issues we're dealing with, administering XF will be a simple task. Even the staff is finding their way around and liking a lot of the new moderation and admin tools.

We have an older demographic on our site and many are very resistant to change. Even the "like" button is causing some grief. Although you have to consider that the vocal few represent only a tiny percentage of the overall active forum membership, and don't necessarily speak for everyone. What they miss will likely not be missed a year or so from now. vB seems so 1999 in comparison... :D
 
Clickfinity, is there any information suggesting a RAID controller is more likely to fail than a single SSD drive?
I'm planning to use 4x Intel 920s in RAID 10, which is why I'm asking.
 
New server has finally arrived!!! :D

It's being burned-in for 24 hours before the software build and will be tweaked and tested with a view to moving sites over sometime next week; or maybe a little later as the wife will probably want me to spend some time with the family at Christmas. ;)

Clickfinity, is there any information suggesting a RAID controller is more likely to fail than a single SSD drive?
I'm planning to use 4x Intel 920s in RAID 10, which is why I'm asking.

Nothing definitive, Dinh - just a few bits and bobs I read in various forums/blogs, and a discussion I had with my host along the lines of: if the Intel SSDs are on a par with HDDs (or better) for reliability, then adding RAID potentially adds another failure point (the RAID controller card), so why not just go with a single drive?

And because I wanted the speed but couldn't really afford the luxury (and expense) of a more complex RAID set-up, I went ahead with a single drive. Although I risk possible downtime if the drive fails, I've got data backup covered by adding a second SSD for six-hourly rsync backups, with an HDD for overnight backups and also (since I first posted) a sync to an Amazon S3 bucket just for good measure.
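
Roughly speaking, the backup side is just a handful of cron jobs - the paths and bucket name below are made up, and I'm showing s3cmd purely as an example of an S3 sync tool:

    # crontab sketch - illustrative paths and schedule
    # every six hours: mirror the live SSD onto the standby SSD
    0 */6 * * *  rsync -a --delete /srv/ /mnt/ssd-backup/srv/
    # nightly: copy the standby SSD onto the HDD
    30 3 * * *   rsync -a --delete /mnt/ssd-backup/srv/ /mnt/hdd-backup/srv/
    # nightly: push the HDD copy off-site to an S3 bucket
    30 5 * * *   s3cmd sync /mnt/hdd-backup/srv/ s3://example-backup-bucket/srv/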

I expect I'll go with RAID for the next server as I'll hopefully be able to afford a better machine by then, but I've read that plenty of hosts are using SSDs in RAID with no problems so I'm sure yours will be fine. (y)

Cheers,
Shaun :D
 
I wouldn't do SSD RAID for database or web servers. It just makes everything more complicated and doesn't help with performance or durability.

SSDs should last a few years without problems, even in a 24/7 server. An SSD is much less likely to fail than an HDD.

If you want to be careful, just buy spare SSDs every couple of years and swap them in regularly. You can even sell your used server SSDs on eBay. ;)

We had our first SSD server built 2 years ago (very expensive!) and have had no problems since. For economic and performance reasons, we'll never use an HDD in a server again (except for storage or backup disks). We've used various SSDs (even cheaper consumer models) and haven't had a single failure so far.

In my opinion, HDDs will be historical devices 5 years from now.
 
Just for interest's sake, here's what we're doing to get more life out of the SSDs. One is the main "live" operating drive, the other is a backup "slot-in" drive in case of complete failure of the first, and I've also added a third, regular HDD as a fall-back if both SSDs fail:
  • Over-provision both SSDs by 30%
  • noatime and nodiratime mount flags for SSDs (stops writes of last-access times on files)
  • NOOP scheduler for SSDs (the default CFQ scheduler reorders I/O to suit spinning disks - not needed for SSDs; sketched below)
  • /tmp on a ramdisk, mounted noexec,nosuid [plenty of room with 64GB] (reduces writes to the main SSD for temporary files; also sketched below)
  • Percona MySQL with innodb_file_per_table (optimising InnoDB)
  • 1st SSD backed up to 2nd SSD on six hour schedule
  • 2nd SSD backed up to HDD on daily schedule
  • daily rsync to Amazon S3 bucket (cheap off-site backup)
Hopefully that should give us a good, fast set-up that will last for the 3-5 years before we upgrade again.
(y)
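
To put a couple of those items into concrete terms - device names and sizes are illustrative, and the scheduler change needs making persistent (e.g. via the elevator=noop kernel parameter) to survive a reboot:

    # switch the SSD's I/O scheduler to noop (takes effect immediately, lost on reboot)
    echo noop > /sys/block/sda/queue/scheduler

    # /etc/fstab - /tmp on a ramdisk, with no executables or setuid binaries allowed
    tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,size=4G  0  0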
 
We use relatime on ALL disks. ;) SSDs additionally get the discard flag at mount.
On top of that, we run fstrim daily via cron on all SSD partitions.
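
In practice that's just the mount options plus a one-line cron entry, something like this (the UUID is a placeholder):

    # /etc/fstab - SSD mounted with relatime + online TRIM
    UUID=xxxx-xxxx-xxxx  /  ext4  defaults,relatime,discard  0  1

    # root crontab - daily batch TRIM as well
    @daily  /sbin/fstrim -v /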

I personally think SSDs are much more robust than the industry would have us believe. They're new, so nobody knows for sure, but from our experience we wouldn't expect any problems. Even cheap MLC SSDs should run 24/7 for several years without failure. As I said, one of our servers has been running for 2+ years now and its SSD works like new.

And IF one does fail due to its age, I doubt it would happen spontaneously - you'd most likely still be able to recover almost all of the data on it (unlike with most failed HDDs).

But your setup is very robust. I would use different models of SSD in the same machine, just to be safe - with HDDs, at least, it was not uncommon for identical drives bought at the same time to fail at almost the same time. :)

innodb_file_per_table is also a must, for backup and security reasons.
 
innodb_file_per_table is also a must, for backup and security reasons.

Yeah, got that in hand too - I didn't have it enabled when I originally set up CycleChat, and after a few years of running IP.Board, making backup copies to test the conversion to XF, and then the final migration, my current ibdata1 file is over 5GB on disk (with only around 1.7GB of actual data - the rest is unreclaimed free space).

I only discovered innodb_file_per_table afterwards, so the other forums are all okay - and when I move everything to the new server it'll be files all the way ... lol :D
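
For anyone in the same boat: the only way to actually shrink ibdata1 is a dump-and-reload, roughly along these lines (paths assume the default /var/lib/mysql datadir, service names vary by distro, and obviously test on a copy first):

    mysqldump --all-databases --routines --triggers > all.sql   # 1. dump everything
    service mysql stop                                          # 2. stop MySQL
    # 3. add innodb_file_per_table = 1 to my.cnf
    rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*        # 4. remove the shared tablespace and logs
    service mysql start                                         # 5. restart - the files are recreated small
    mysql < all.sql                                             # 6. reload; each table now gets its own .ibd file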
 
I would customise the hardware depending on the server's role.

Database server: SSD, RAM, and CPU are the priority.
Web server: RAM and CPU are the main priority over the disk.
Mail server? You don't really need an SSD.


For my own server, a general-purpose box, I have:
an Intel Xeon E3-1245 v2, 32GB RAM, and Intel 520 SSDs in RAID 1.
 
Our 25m-post DB is now running on RAID 6 with Intel 520 SSDs.
Great speed and performance - much better than RAID 10 with 10x 15K RPM SAS drives. No more I/O issues!

Wow, do you really need such high hardware specs for 25m posts?

Our forum currently has 24m posts and is running fine on 5x 10K disks in RAID 5 ... (not XF yet). I guess it would run even better on a simple RAID 1 SSD set ...

How many users are online at any given moment?
 
Wow, do you really need such high hardware specs for 25m posts?

Our forum currently has 24m posts and is running fine on 5x 10K disks in RAID 5 ... (not XF yet). I guess it would run even better on a simple RAID 1 SSD set ...

How many users are online at any given moment?

Normally around 6,000-9,000 users online (according to Google Analytics Real-Time) - how about you?
 