SAS or SSD for a Big Forum on Servint?

Markos

Well-known member
I run a very big forum on vBulletin (2M+ messages, 100k users and an average of 200 users online), and I want to migrate to XenForo and move my forum to ServInt (right now I use OVH, with a "normal" SATA HD).

I see that there are 2 options:
1) Flex Pro v3 with 150 GB 15K SAS
2) 60 GB SolidFire SSD
(I use another server for backups, so disk space is not a problem for me.)

I have read that SSD is faster, but SAS is more reliable.

What is the best solution in your opinion for a busy forum? @digitalpoint @MattW @Slavik @oman
 
I am not one of the above... but personally if I was going to choose one I'd go with SAS.
 
Isn't their SolidFire SSD their SSD SAN offering? That would make it quite expensive to go with that option.
 
I'd go with the SSD myself, but it depends on how busy your site is. Being big doesn't play too much into it.
 
@Slavik, wouldn't the I/O of the SAS be higher than the SSD (I'm pretty sure raw throughput is higher on the SSD)? If the site were DB intensive, I've always understood that SAS was a much better choice.
 
@Slavik, wouldn't the I/O of the SAS be higher than the SSD (I'm pretty sure raw throughput is higher on the SSD)? If the site were DB intensive, I've always understood that SAS was a much better choice.

Even a top-end 15K SAS will be outperformed in IOPS by a cheap SSD. Raw throughput could favour the SAS in sequential writes.

The only reason to go for SAS is if you're concerned about SSD write cycles or failures from sudden power loss, neither of which are big issues (watch SMART stats for the former, and buy drives with protective capacitors and/or non-SandForce controllers for the latter).
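(If you want to keep an eye on SSD wear, smartmontools is the usual way to do it - a rough sketch only, noting that the wear attribute names vary by vendor, so the grep pattern below is just an example:)
Code:
# Dump all SMART data for the drive (requires the smartmontools package)
smartctl -a /dev/sda

# Wear-related attributes go by different names per vendor
# (e.g. Media_Wearout_Indicator, Wear_Leveling_Count), so grep loosely
smartctl -A /dev/sda | grep -Ei 'wear|lifetime'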
 
Hmm... things must have changed quite a bit. I understood that for a DB-intensive structure that primarily did writes, the SAS RAID would beat an SSD, and on reads the SSD was the winner. Of course, a forum is primarily reads with fewer writes - so in that case the SSD would be better. We went with SAS on all the servers at the Dr. office primarily for that reason - more writes than reads.
I'm pretty sure that life-span wise the SAS owns the SSD.
 
Even if SSDs were the same price as SAS, I'm not sure if I would use SSDs. They just aren't as reliable over long periods of time at this point. I think a lot of servers are using SSDs when they really should be focusing on more RAM. Any process that is hitting the drives super hard with random access is most likely because the server is short on memory.

Ideally servers really should be doing primarily sequential reads/writes (other than small things here and there). And for sequential i/o, you can make SAS setups just as fast (if not faster) than SSD. With the bonus of long term reliability.

And they can even be very fast for random i/o (from one of my servers just now)...
Code:
twin1:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.468828 s, 2.3 GB/s

So 2.3GB/second on purely SAS drives.
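(Worth noting that a dd run like that against /dev/zero mainly measures sequential writes; for a rough look at random I/O as well, an fio run along these lines is one common approach - the parameters here are just illustrative:)
Code:
# 4K random reads with direct I/O and queue depth 32 against a 1GB test file
fio --name=randread --filename=test --rw=randread --bs=4k --size=1g \
    --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based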

And I don't even use 15K rpm drives. The ones I use are 900GB, 10K rpm... specifically the 10K.6 generation of these: http://www.seagate.com/internal-har...rd-drives/hdd/enterprise-performance-10K-hdd/

They ARE enterprise drives, but you should be using enterprise drives (SAS or SSD) in any server imo. So in addition to being very fast, they are also very reliable... mean time between failures of 2,000,000 hours (228 years). The closest SSD would be this 800GB one: http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/1200-ssd/ which means 6 of those in a server would run ~$16.2k, vs. ~$2.4k for the ones I have.

I have 6 of those drives in each server (in a RAID-6 configuration), which means the throughput is roughly 4x the throughput of a single drive and any 2 drives can fail without the volume being destroyed.

Either way... if web server, MySQL or anything else is thrashing the disk with a ton of random read/writes, I'd seriously look at putting more memory in the server.
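(One quick sanity check on the MySQL side - purely as an example, assuming InnoDB - is to compare how often the buffer pool has to go to disk versus how often reads are served from memory, and size innodb_buffer_pool_size accordingly:)
Code:
# Reads that had to hit disk vs. read requests served from the buffer pool
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads'"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests'"
# If the first number is more than a tiny fraction of the second,
# more RAM for innodb_buffer_pool_size is usually the cheapest fix.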
 
Hate to disagree with DigitalPoint, but he is just making up his data as far as failures are concerned. Yes, the lifespan will be less with SSD drives, but when you're not buying them, does it really matter? I have yet to have an SSD drive fail... ever... and we have deployed probably close to 1000 since we started using them. SAS drives? Have lost plenty over the years. Most other providers agree with me. They don't see the massive failure rates that everybody talks about. Neither do consumers. If you're worried about reliability, you use the SSD drives in RAID 10 and introduce drives into the array at different times. That's what we do, so they will not EOL at the same time. We personally burn out RAID cards before we do SSD drives.

Performance-wise, SSD drives will beat the hell out of SAS drives any day of the week, and twice on Sunday. Unless you need a lot of disk space, there is NEVER a reason to choose SAS over SSD.
 
One should clarify that the RIGHT SSD is better than SAS, especially when it comes to MySQL data serving. Not all SSDs are created equal, and yes, the RIGHT SSD will generally be more expensive than SAS on a $/GB basis.

But as Shawn also said, memory (as in MySQL Cluster or ramdisks) is even faster than SSD, though again more expensive than SSD on a $/GB basis. And for sequential read/write I/O, SAS in a many-disk RAID configuration is more than capable of handling the performance. But yes, for random disk I/O, SSD is king.

For SSDs, I mainly use Intel 520, 530, S3500 and S3700 series SSDs for reliability and performance. I've had clients use Intel 320, Crucial M500, Samsung 840 Pro and Micron P300 SSDs as well. For the Crucial M500, definitely only consider 480GB and higher capacities. The only failure I've ever had was with one client's Intel 320 SSD, which the web host replaced within a 4hr window with data restored to the replacement.

Whatever you do, backing up data regularly is the only safeguard against any type of disk failure, be it SAS, SATA or SSD ;)

Last but not least, directly answering the original poster's question: which is best? Make sure you have the right monitoring tools set up to profile your server resources and MySQL usage in terms of disk I/O, so you know it is actually even a problem. If disk I/O ain't even a bottleneck for your usage profile, then you may not even need SSDs, or using SSDs might not make a tangible difference for you, as opposed to putting the SSD savings to better use - for example, more memory installed and optimally allocated to where it is needed (MySQL etc.).
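(Something as simple as iostat from the sysstat package is enough for a first read on whether the disks are actually the bottleneck - roughly along these lines:)
Code:
# Extended per-device stats (MB/s) every 5 seconds; watch %util and await
iostat -dxm 5

# Or sample disk activity 12 times at 5-second intervals with sar
sar -d -p 5 12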
 
I use enterprise SAS drives.
+1, SAS drives are reliable, and you can also expand them into a nice array, resulting in performance close to SSD.
Each has pros and cons. An SSD drive has no moving parts, so it generates very little heat and is not prone to mechanical failures. But it does wear slowly over time, so you have to check their status every year. A SAS drive is the opposite: you risk mechanical failures. However, when set in a proper array, you can hot-swap them with no downtime. On the other hand, using a hot-swappable setup with SSDs (so you avoid downtime) will raise your server costs substantially compared to SAS drives. @WSWD has some very valid points too.
 
Hmm... things must have changed quite a bit. I understood that for a DB-intensive structure that primarily did writes, the SAS RAID would beat an SSD, and on reads the SSD was the winner. Of course, a forum is primarily reads with fewer writes - so in that case the SSD would be better. We went with SAS on all the servers at the Dr. office primarily for that reason - more writes than reads.
I'm pretty sure that life-span wise the SAS owns the SSD.

In terms of IOPS, SSD > SAS, easily better by a factor of at least 10x to 40x over 15K SAS - especially for random reads/writes.

Sequential, on the other hand, is another story: as Floren and Shawn have stated, putting enough SAS disks in an array can achieve sequential disk throughput that would match SSD disks.

Although you'd reach a point where physical housing restrictions would limit you. For example, if your MySQL server and app require closer to 5,000 IOPS, then even the fastest 15K SAS would be pushing ~250 IOPS per disk. So you would need at least 20x 15K SAS in RAID 0, or 40x 15K SAS in RAID 10, to achieve what 1 or 2 of the RIGHT SSDs could do.
 
Hate to disagree with DigitalPoint, but he is just making up his data as far as failures are concerned. Yes, the lifespan will be less with SSD drives, but when you're not buying them, does it really matter?
I suppose it doesn't matter if you aren't buying them and it doesn't cause any downtime for your site... in that regard you are right. I'm mostly speaking from my point of view, where I want to minimize physical trips to my data center to replace failed hard drives (even if it doesn't cause downtime, it's still a hassle).

And I was only speaking of consumer SSD failure rates compared to enterprise SAS drives (which are roughly the same price). I've seen/heard of very few enterprise SSD drives in web hosting companies, simply because they are literally thousands of dollars for a single drive. But enterprise SSD drives are just as reliable as enterprise SAS drives (they just cost 8x more).

Either way, I'm a very strong believer in minimizing disk i/o on servers whenever possible... stuff in memory will always be faster than stuff coming from disk (regardless of the type of disk). As an example, I use MySQL Cluster (all data is 100% memory resident), so the only disk access it generates is writing out redo logs (which is only periodic sequential writes, and no disk access needed for any sort of read). We've load tested our setup and it's able to do ~12M SQL reads (select) *while* doing ~5M SQL writes (update/insert/delete)... all the while never needing to read anything from disk.
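(For anyone curious what that looks like in practice: a table is only held fully in memory like that if it uses the NDB storage engine - a minimal sketch with a made-up table name, assuming a working cluster is already configured:)
Code:
# Hypothetical example: ENGINE=NDBCLUSTER keeps the table in memory on the
# data nodes, with only redo logs / checkpoints written to disk
mysql -e "CREATE TABLE example_posts (
    post_id INT UNSIGNED NOT NULL PRIMARY KEY,
    message TEXT
) ENGINE=NDBCLUSTER;"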
 
I suppose it doesn't matter if you aren't buying them and it doesn't cause any downtime for your site... in that regard you are right. I'm mostly speaking from my point of view, where I want to minimize physical trips to my data center to replace failed hard drives (even if it doesn't cause downtime, it's still a hassle).

And I was only speaking of consumer SSD failure rates compared to enterprise SAS drives (which are roughly the same price). I've seen/heard of very few enterprise SSD drives in web hosting companies, simply because they are literally thousands of dollars for a single drive. But enterprise SSD drives are just as reliable as enterprise SAS drives (they just cost 8x more).

Either way, I'm a very strong believer in minimizing disk i/o on servers whenever possible... stuff in memory will always be faster than stuff coming from disk (regardless of the type of disk). As an example, I use MySQL Cluster (all data is 100% memory resident), so the only disk access it generates is writing out redo logs (which is only periodic sequential writes, and no disk access needed for any sort of read). We've load tested our setup and it's able to do ~12M SQL reads (select) *while* doing ~5M SQL writes (update/insert/delete)... all the while never needing to read anything from disk.
Curious, how would MySQL Cluster perform in a degraded state anyway? For example, with SAS disk failures in arrays? IIRC, you said your MySQL Cluster has much more overhead capacity than you need right now, so even if you had disk failures, with the data being in memory it should NOT be a crucial, fix-ASAP matter that would cause downtime for you either way?

Unfortunately, the outlay costs for MySQL Cluster servers with ample ECC Registered memory for in-memory storage would be beyond most folks here, and definitely more costly than even a pair of enterprise-level SSDs like the Intel S3700. For example, a 400GB Intel S3700 goes for around $930-950 each, so a pair in RAID 1 or 4x in RAID 10 would cost $1,900 to $3,800. As opposed to ~400GB worth of memory at $90 per 8GB ECC Reg or $160 per 16GB ECC Reg, which would be 50 x $90 = $4,500 or 25 x $160 = $4,000 - excluding the rest of the cost of the CPUs, motherboards, chassis, redundant power supplies and system cooling.

In some ways, SSD should not be seen as a more performant and expensive alternative to SAS, but as a cheaper alternative to in-memory storage - especially when the data set size is greater than the amount of physical memory available :)
 
Curious, how would MySQL Cluster perform in a degraded state anyway? For example, with SAS disk failures in arrays? IIRC, you said your MySQL Cluster has much more overhead capacity than you need right now, so even if you had disk failures, with the data being in memory it should NOT be a crucial, fix-ASAP matter that would cause downtime for you either way?
Yeah... we have a huge amount of unused overhead (I don't like upgrading servers very often). I'm not 100% sure how it would perform with a degraded disk array (so far none of the 48 drives in our cluster have failed [6 drives per server, 8 servers]), but it should be fine. Our disk i/o peaks at around 1% capacity, so even if it had a 5% temporary overhead of rebuilding a replaced drive on a single machine, it should be fine. You could also route requests around that machine until it was done rebuilding if you had to, but I don't think it would be necessary.
 
We've load tested our setup and it's able to do ~12M SQL reads (select) *while* doing ~5M SQL writes (update/insert/delete)... all the while never needing to read anything from disk.

Is that it? I could work faster on an abacus.
 