Even if SSDs were the same price as SAS, I'm not sure I would use them. They just aren't as reliable over long periods of time at this point. I think a lot of servers are running SSDs when they really should be getting more RAM. Any process that is hitting the drives super hard with random access is most likely doing so because the server is short on memory.
Ideally, servers should be doing primarily sequential reads/writes (other than small things here and there). And for sequential I/O, you can make a SAS setup just as fast as (if not faster than) SSDs, with the bonus of long-term reliability.
And a good array can even be very fast for random I/O. Here's a quick throughput test from one of my servers just now...
Code:
twin1:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.468828 s, 2.3 GB/s
So 2.3 GB/second on purely SAS drives.
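That dd run measures sequential writes, though. To sanity-check random I/O on your own array, fio is the usual tool (assuming it's installed); run it from a directory on the array and tune the size/runtime to taste:
Code:
# random 4K reads with direct I/O (bypasses the page cache so you
# measure the drives, not RAM); these numbers are just starting points
fio --name=randread --rw=randread --bs=4k --size=1g \
    --direct=1 --ioengine=libaio --numjobs=4 \
    --runtime=30 --time_based --group_reporting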
And I don't even use 15K rpm drives. The ones I use are 900GB, 10K rpm... specifically the 10K.6 generation of these:
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/hdd/enterprise-performance-10K-hdd/
They ARE enterprise drives, but you should be using enterprise drives (SAS or SSD) in any server imo. So in addition to being very fast, they are also very reliable... a mean time between failures of 2,000,000 hours (roughly 228 years). The closest SSD would be this 800GB one:
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/1200-ssd/
Six of those in a server would run ~$16.2k, vs. ~$2.4k for the drives I have.
I have 6 of those drives in each server in a RAID-6 configuration, which means 4 data drives plus 2 parity: throughput is roughly 4x that of a single drive, and any 2 drives can fail without the volume being destroyed.
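For anyone building the same layout with Linux software RAID instead of a hardware controller, it looks roughly like this with mdadm (the /dev/sd[b-g] names below are placeholders for your own drives):
Code:
# 6-drive RAID-6: 4 data drives + 2 parity, any 2 can fail
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
# watch the initial sync progress
cat /proc/mdstat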
Either way... if a web server, MySQL, or anything else is thrashing the disk with a ton of random reads/writes, I'd seriously look at putting more memory in the server.
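A quick way to check whether memory is the real problem before blaming the drives (standard tools on most Linux boxes):
Code:
# si/so are swap-in/swap-out per second; consistently non-zero
# values mean the box is paging because it's short on RAM
vmstat 1 5
# see how much RAM is free vs. being used as page cache
free -m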