
I HIGHLY recommend XenForo on SSDs!

Discussion in 'Forum Management' started by Jaxel, Sep 20, 2013.

  1. Jaxel

    Jaxel Well-Known Member

So yesterday I had my entire server rebuilt, with SSDs in it! Now my website is way faster than it ever was.


One of the problems I've always had with any VPS was that you're paying so much for bandwidth and disk space you don't need, simply because you want more RAM/CPU power for your database-driven website. It was always very expensive, and never as fast as you wanted. I never came even close to 10% of my disk space or bandwidth allowance, but I always peaked server load.

Well, my host just recently started offering SSD VPS options. They only have about 60% of the disk space, which is fine, since I don't even need a third of that; but they offer a massive increase in performance! Not only that, but the cost is 40% less than what I was paying for my HDD VPS! I highly recommend you guys try out SSDs for your XenForo forum.

    Sage Knight, Shelley and MattW like this.
  2. John L.

    John L. Well-Known Member

    I was thinking of doing this myself as well. I was deciding between the pure SSD vs accelerated SSD offerings. Guess I'll go pure :).
  3. MattW

    MattW Well-Known Member

Agree! I've made some more changes to my.cnf today based on some recommended SSD settings from Percona.
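[For reference, a minimal sketch of the kind of SSD-oriented my.cnf changes Percona has recommended — these are illustrative assumptions, not MattW's actual settings, and the right values depend on your MySQL version and workload:

```ini
[mysqld]
# SSDs have no seek penalty, so skip flushing adjacent dirty pages
# (an optimization that only helps spinning disks)
innodb_flush_neighbors = 0
# Bypass the OS page cache; data is already cached in the InnoDB buffer pool
innodb_flush_method = O_DIRECT
```

Defaults are tuned for spinning disks, so benchmark before and after changing anything.]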

  4. JulianD

    JulianD Well-Known Member

    Are those changes useful for SSD disks regardless of memory size?
  5. Da Bookie Mon

    Da Bookie Mon Well-Known Member

We have been running all our clients' forums' MySQL on SSDs since day one. You really can see the difference over SATA and even SAS on larger forums.
    Brad L and SneakyDave like this.
  6. digitalpoint

    digitalpoint Well-Known Member

    Brad L, Alfa1, SneakyDave and 2 others like this.
  7. Da Bookie Mon

    Da Bookie Mon Well-Known Member

On a dedi, yes; on a VPS, no, as the container settings for what resources you're allowed to use will differ depending on the host. Also, on a dedi, halve innodb_io_capacity=30000 if your SSD is an older 3Gbps model rather than the newer 6Gbps read/write.
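[A sketch of that halving rule as it would look in my.cnf — values are illustrative only, tune for your own hardware:

```ini
# 6Gbps SSD on a dedicated server
innodb_io_capacity = 30000
# Older 3Gbps SSD: halve it
# innodb_io_capacity = 15000
```
]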
  8. Sage Knight

    Sage Knight Well-Known Member

  9. Adam Howard

    Adam Howard Well-Known Member

6.52 seconds on index
    6.41 seconds on one of the forums

    @Jaxel thanks for sharing. It is an improvement.
  10. Adam Howard

    Adam Howard Well-Known Member

  11. Slavik

    Slavik XenForo Moderator Staff Member

  12. digitalpoint

    digitalpoint Well-Known Member

    I think back then we were using ReiserFS.
    SneakyDave and Adam Howard like this.
  13. Jaxel

    Jaxel Well-Known Member

I'm using SSD-4... I was previously using HYB-2, which KH no longer offers... probably because it was too popular and cut into their dedi sales.
    Sage Knight likes this.
  14. digitalpoint

    digitalpoint Well-Known Member

    Pseudo side-note... you can make terribly fast disk i/o systems without the expense of SSD drives as well. :)

    For example, copying a 416MB log file in 0.352 seconds, which is a disk i/o throughput of ~1.15GB/sec (gigaBYTES, not gigaBITS).

    time cp /var/log/nginx/digitalpoint.com.access.log-20130920 ./;ls -al digitalpoint.com.access.log-20130920
    real    0m0.352s
    user    0m0.004s
    sys    0m0.328s
    -rw-r----- 1 root root 436692500 Sep 20 15:56 digitalpoint.com.access.log-20130920
In case you are curious what sort of "traditional" disk setup that is... it's Seagate Savvio 900GB 10K.6 drives (6 of them) in a RAID-6 setup (any 2 drives can fail without any issues... 3.6TB usable space). It has a 1.2GB/sec theoretical max, since reads are split across the 6 drives (each drive can read at 204MB/sec). I never really tested its *actual* speed until now, but 1.15GB/sec is pretty close to the theoretical max. But what I *really* like about these drives is that they are ultra reliable (a mean time between failures of 2,000,000 hours), which is good when you have a few of them (I have 48 total) and trips to the data center to swap failed drives can be minimized.
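[The quoted throughput checks out; a quick sketch of the arithmetic, using the file size and elapsed time from the `cp` output above (the ~1.15 figure is in binary gigabytes, GiB/s):

```shell
# 436692500 bytes copied in 0.352s of wall-clock time
awk 'BEGIN { printf "%.3f GiB/s\n", 436692500 / 0.352 / 1024^3 }'
# prints 1.155 GiB/s
```
]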

And yes... I know SSDs will perform better with a zillion tiny concurrent reads/writes... but not by much, and my disk i/o load has very few random reads/writes... it's 99.9% sequential disk access.

    The closest comparable drive in an SSD form would be something like this: http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/1200-ssd/ And who knows how much those drives are going to cost... probably somewhere around $4,000 for a single one, vs. $423 for the ones I use.
    HWS likes this.
  15. HWS

    HWS Well-Known Member

SSDs have been the way to go for medium-priced standard servers for several years now.

    But if you can afford a RAID-6 setup with Seagate Savvio drives, you can go that way too. It may even be better in the long run, since SSDs will need to be swapped every 7-10 years.
  16. Moshe1010

    Moshe1010 Well-Known Member

Now everybody run
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    through SSH/terminal and see what your SSD I/O is :)

    Post where your VPS/dedi is hosted (company/datacenter).
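[For anyone wondering why every result reports 1073741824 bytes: bs=64k times count=16k works out to exactly 1 GiB written:

```shell
# 64 KiB block size * 16384 blocks = 1 GiB
echo $(( 64 * 1024 * 16 * 1024 ))
# prints 1073741824
```

conv=fdatasync makes dd flush the output file to disk before reporting, so the figure reflects actual disk write speed rather than the OS page cache.]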
  17. digitalpoint

    digitalpoint Well-Known Member

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.464725 s, 2.3 GB/s
    Adam Howard likes this.
  18. Moshe1010

    Moshe1010 Well-Known Member

  19. digitalpoint

    digitalpoint Well-Known Member

    I like fast servers. :)
    Moshe1010, principia and Daniel Hood like this.
  20. MattW

    MattW Well-Known Member

    Clustered.net VPS
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.72283 s, 623 MB/s
    DigitalOcean $5 droplet
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.95001 s, 364 MB/s
