I HIGHLY recommend XenForo on SSDs!

Jaxel

Well-known member
So yesterday I had my entire server rebuilt, with SSDs in it! Now my website is way faster than it ever was.

http://8wayrun.com

One of the problems I've always had with any VPS was that you're paying so much for bandwidth and disk space you don't need, simply because you want more RAM/CPU power for your database-driven website. It was always very expensive, and never as fast as you wanted. I never got even close to 10% of my disk space and bandwidth allowance, but I always maxed out server load.

Well, my host just recently started offering SSD VPS options. They only come with about 60% of the disk space, which is fine since I don't even need a third of that; but they deliver a massive increase in performance! Not only that, the cost is 40% less than what I was paying for my HDD VPS! I highly recommend you guys try out SSDs for your XenForo forum.

http://www.knownhost.com/managed-ssd-vps-hosting.html
 
I was thinking of doing this myself as well. I was deciding between the pure SSD and accelerated SSD offerings. Guess I'll go pure :).
 
Agree! I've made some more changes to my.cnf today based on some recommended SSD settings from Percona

Code:
innodb_flush_neighbor_pages=0
innodb_ibuf_max_size=4M
innodb_buffer_pool_instances=4
innodb_io_capacity=30000
innodb_adaptive_flushing_method=keep_average
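If you want to double-check that those settings actually took effect after restarting MySQL, a quick sanity check is below. Note that a few of these variables (innodb_flush_neighbor_pages, innodb_ibuf_max_size, innodb_adaptive_flushing_method) are Percona Server/XtraDB-specific, so this assumes you're running Percona Server rather than stock MySQL.

Code:
# confirm the running values match what's in my.cnf
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity'"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_instances'"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_neighbor_pages'"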
 
Are those changes useful for SSD disks regardless of memory size?
 
On a dedi, yes; on a VPS, no, as the container limits on what resources you're allowed to use will differ depending on the host. Also, on a dedi, cut innodb_io_capacity=30000 in half if your SSD is on an older 3 Gbps interface rather than the newer 6 Gbps one.
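If you're not sure whether your drive negotiated a 3 Gbps or 6 Gbps link, smartmontools can usually tell you. A quick sketch, assuming a dedi (or a VPS with raw device access), that smartctl is installed, and that /dev/sda is the right device on your box:

Code:
# "SATA Version is: ..." shows the drive's max link speed and the currently negotiated one
smartctl -i /dev/sda | grep -i 'sata version'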
 
Pseudo side-note... you can make terribly fast disk i/o systems without the expense of SSD drives as well. :)

For example, copying a 416MB log file in 0.352 seconds, which is a disk i/o throughput of ~1.15GB/sec (gigaBYTES, not gigaBITS).

Code:
time cp /var/log/nginx/digitalpoint.com.access.log-20130920 ./;ls -al digitalpoint.com.access.log-20130920

real    0m0.352s
user    0m0.004s
sys    0m0.328s
-rw-r----- 1 root root 436692500 Sep 20 15:56 digitalpoint.com.access.log-20130920

In case you are curious what sort of "traditional" disk setup that is... it's Seagate Savvio 900GB 10K.6 drives (6 of them) in a RAID-6 setup (any 2 drives can fail without any issues... 3.6TB usable space). It has a 1.2GB/sec theoretical max since reads are split across the 6 drives (each drive can read at 204MB/sec). I never really tested its *actual* speed until now, but 1.15GB/sec is pretty close to the theoretical max. But what I *really* like about these drives is that they are ultra reliable (a mean time between failures of 2,000,000 hours). Which is good when you have a few of them (I have 48 total) and trips to the data center to swap failed drives can be minimized.
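If anyone wants to reproduce a similar sequential-read test, it's worth ruling out the Linux page cache first, since a recently written file can be served straight from RAM instead of the array. A rough sketch (run as root; the log path is just the example file from above):

Code:
# drop cached pages so the read actually hits the disks
sync
echo 3 > /proc/sys/vm/drop_caches
# time a large sequential read; dd reports the throughput when it finishes
dd if=/var/log/nginx/digitalpoint.com.access.log-20130920 of=/dev/null bs=1M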

And yes... I know SSDs will perform better with a zillion tiny concurrent reads/writes... but not by much, and my disk i/o load has very few random reads/writes... it's 99.9% sequential disk access.

The closest comparable drive in an SSD form would be something like this: http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/1200-ssd/ And who knows how much those drives are going to cost... probably somewhere around $4,000 for a single one, vs. $423 for the ones I use.
 
SSDs have been the way to go for medium-priced standard servers for several years now.

But if you can afford a RAID-6 setup with Seagate Savvio drives, you can go that way as well. It may even be better in the long run, since SSDs will need to be swapped out every 7-10 years.
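If SSD wear is the concern, most drives expose an endurance indicator through SMART. The attribute name varies by vendor (Media_Wearout_Indicator, Wear_Leveling_Count, Percent_Lifetime_Remain, and so on), so this is just a rough way to look for it, assuming smartmontools is installed and /dev/sda is the SSD:

Code:
# list the SMART attributes and pick out anything that looks like a wear/endurance counter
smartctl -A /dev/sda | grep -iE 'wear|lifetime|percent'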
 
Now everybody run the following through SSH/terminal and see what your SSD I/O is :)

Code:
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

Post where you're hosting (company/datacenter) your VPS/dedi.
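Since a forum database does mostly small random reads/writes rather than big sequential ones, a random-read test is a useful complement to the sequential dd write above. A minimal sketch, assuming fio is installed (it usually isn't by default) and that direct I/O is supported on your filesystem/container:

Code:
# 30 seconds of 4K random reads with direct I/O (bypasses the page cache)
fio --name=randread --filename=fio-test --size=1G --bs=4k --rw=randread \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based
# clean up the scratch files afterwards (the dd above leaves one called "test" as well)
rm -f fio-test test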
 
Self-hosted/dedicated:
Code:
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.464725 s, 2.3 GB/s
 
Clustered.net VPS
Code:
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 1.72283 s, 623 MB/s

DigitalOcean $5 droplet
Code:
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 2.95001 s, 364 MB/s
 