Would you recommend an SSD-powered server for speed optimization?

TheBigK

Well-known member
My web host offers SSD-powered servers. I wonder whether SSDs would be a better solution for community-powered websites?
 
I have them in all my machines now and they're very fast - my only caution with a server would be to put them in as a mirrored RAID - just in case. ;)
 
It's simply a different type of drive.

SSD stands for "Solid State Drive".

You might know that traditional hard drives contain a set of spinning discs with an arm that glides over them to read the data. SSDs don't have that - they're flash memory, similar to what you get in camera memory cards and USB sticks.

The benefits are that they're much faster to read from and write to, and in theory less prone to failure (since they have no moving parts).
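
If you want to see the access-time difference for yourself, a quick-and-dirty way is to time random reads against a big file on each drive. This is only a sketch - the file path is a placeholder, and the OS page cache will hide the drive's latency unless the file is much larger than RAM (or the cache is dropped first):

    import os, random, time

    PATH = "/mnt/newdrive/testfile.bin"   # placeholder: a large file on the drive being tested
    BLOCK = 4096                          # read 4 KiB at a time, roughly what a database does
    READS = 1000

    size = os.path.getsize(PATH)
    fd = os.open(PATH, os.O_RDONLY)

    start = time.time()
    for _ in range(READS):
        # jump to a random block-aligned offset and read one block
        os.lseek(fd, random.randrange(size // BLOCK) * BLOCK, os.SEEK_SET)
        os.read(fd, BLOCK)
    elapsed = time.time() - start

    os.close(fd)
    print("average: %.2f ms per random read" % (elapsed / READS * 1000))

On a spinning disk the average is dominated by seek time and rotation; on an SSD it's a small fraction of a millisecond.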
 
I suspect it won't be too long before SSDs are the only option for storage. If the price is right, then there's really no reason not to use them.
 
For a database server, yes.
If you have a database under heavy load, the way to go in the past was a RAID10 array of 15k SAS hard drives. Nowadays you use one or more SSDs instead and get wonderful database performance.
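
As a rough back-of-the-envelope comparison (the IOPS figures below are ballpark assumptions, not benchmarks of any particular hardware):

    # Ballpark random-read figures (assumptions, not measurements):
    # a 15k RPM SAS drive manages on the order of ~180 random IOPS,
    # while a decent SATA SSD manages tens of thousands.
    sas_15k_iops = 180
    ssd_iops = 40000

    # A RAID10 array of 8 drives can spread random reads across all 8 spindles.
    raid10_drives = 8
    raid10_read_iops = sas_15k_iops * raid10_drives

    print("RAID10 of %d x 15k SAS: ~%d random read IOPS" % (raid10_drives, raid10_read_iops))
    print("Single SATA SSD:        ~%d random read IOPS" % ssd_iops)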

If you are talking about CrazyEngineers, that probably would be overkill for such a small database. You should be able to keep the whole database in memory and nothing is faster than memory :)
 
If you are talking about CrazyEngineers, that probably would be overkill for such a small database. You should be able to keep the whole database in memory and nothing is faster than memory :)
Interesting. How do I go about keeping the database in memory? o_O
 
SSDs are the way to go... if you can afford them.

They are indeed much faster for things like file transfers, but the biggest gains are found in access time.
My guess is you could replace ten 15k disks with only one (fast) SSD and get even more performance, especially in DB servers.

Another big advantage is power usage. 15k SAS disks can use up to 15-20W, so a large, fast RAID array costs you a lot in power.
An SSD, on the other hand, uses around 1-2W, and you only need one or two of them (two in RAID1 is a wise choice).

The only possible disadvantage is the limited number of write cycles. I don't have any figures to hand, but a heavily used DB, for example, will wear out an SSD quite fast.
We're talking really heavy use, btw... like GBs of data a day.

This applies to the cheap MLC (consumer) drives, btw. If you really have some dollars to spend, you can buy SLC drives, which are more suitable for server environments.
But then again, they are so expensive that I would rather buy two MLC disks in RAID1...
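
To put the write-cycle point in perspective, here's a rough lifetime estimate. Every figure below is an assumption for illustration - rated P/E cycles, write amplification and daily write volume vary a lot between drives and workloads:

    # Rough SSD lifetime estimate - every figure here is an assumption for illustration.
    capacity_gb = 240           # drive capacity
    pe_cycles = 3000            # rated program/erase cycles for typical consumer MLC flash
    write_amplification = 10.0  # small random DB writes can amplify heavily on older controllers
    daily_writes_gb = 50        # host writes per day for a busy database server

    # GB the host can write before the flash reaches its rated cycles.
    host_write_endurance_gb = capacity_gb * pe_cycles / write_amplification
    lifetime_days = host_write_endurance_gb / daily_writes_gb

    print("~%.0f GB of host writes before wear-out" % host_write_endurance_gb)
    print("~%.1f years at %d GB/day" % (lifetime_days / 365, daily_writes_gb))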
 
Interesting. How do I go about keeping the database in memory? o_O

I think if you have enough RAM and decent settings, this happens to a large degree anyway. I don't see a need to do it since the various cache and buffer schemes effectively accomplish the same thing!
http://www.centos.org/modules/newbb/viewtopic.php?topic_id=25383&forum=41
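
For a MySQL/InnoDB setup (which is what XenForo runs on), "keeping the database in memory" mostly means making the InnoDB buffer pool at least as large as the data. A rough way to check where you stand - this sketch assumes the mysql-connector-python package, and the credentials are placeholders:

    import mysql.connector   # assumes the mysql-connector-python package

    # Placeholder credentials - replace with your own.
    conn = mysql.connector.connect(host="localhost", user="root", password="secret")
    cur = conn.cursor()

    # Total data + index size across all InnoDB tables, in GB.
    cur.execute("""
        SELECT SUM(data_length + index_length) / 1024 / 1024 / 1024
        FROM information_schema.tables
        WHERE engine = 'InnoDB'
    """)
    data_gb = float(cur.fetchone()[0] or 0)

    # Current InnoDB buffer pool size, in GB.
    cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
    pool_gb = int(cur.fetchone()[1]) / 1024.0 / 1024 / 1024

    print("InnoDB data + indexes: %.2f GB" % data_gb)
    print("Buffer pool:           %.2f GB" % pool_gb)
    if pool_gb >= data_gb:
        print("The whole database should fit in memory once the cache is warm.")
    else:
        print("Consider raising innodb_buffer_pool_size in my.cnf (leaving room for the OS).")

On a dedicated database box the usual rule of thumb is to give innodb_buffer_pool_size the bulk of the RAM and leave the rest for the OS and its own file cache.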

My guess is that the next steps up after having plenty of RAM and caches would be to have:
faster drives... for the other data (images, etc.) being served;
SSDs - faster than most fast drives;
and then specialty PCIe flash products such as Fusion-io, which are purpose-built to speed up the data path.
http://www.fusionio.com/solutions/database/

Of course, all of the above will still be subject to the whims of the internet, so you'd want to distribute it all like Google and the other biggies...

I think, for us little folks, there is a point of diminishing returns...that is, is it fast enough?
 
Would I recommend getting an SSD? In most cases, no. Instead of paying extra for an SSD, it'd be better to use a high-performance CDN like EdgeCast. They store all your small files on SSDs anyway, and the content is served from the location closest to the user, meaning lower latency.

If you're running into database performance issues, that means you probably don't have enough RAM. We do 930 queries / sec on a meager 7200 RPM HDD.
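
For what it's worth, a queries-per-second figure like that can be read straight from MySQL's own counters (placeholder credentials again; this averages over the whole uptime rather than measuring the current rate):

    import mysql.connector   # same assumptions as the earlier sketch

    conn = mysql.connector.connect(host="localhost", user="root", password="secret")
    cur = conn.cursor()

    # SHOW GLOBAL STATUS returns (Variable_name, Value) rows.
    cur.execute("SHOW GLOBAL STATUS LIKE 'Questions'")
    questions = int(cur.fetchone()[1])
    cur.execute("SHOW GLOBAL STATUS LIKE 'Uptime'")
    uptime = int(cur.fetchone()[1])

    print("~%.0f queries/sec averaged over %d seconds of uptime" % (float(questions) / uptime, uptime))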
 
Just off the cuff, I suspect a good I/O system will speed things up a bit. My server, and I am assuming quite a few others, seems to use a small % of CPU - even under load. This may indicate that I/O is the weak link in the chain.

So far I am only up to about 600 users at one time (XF measurement, so not the same instant), and the server load of a quad-core 2.8 has not gone over about .6 in the 15-minute window. As I understand it, the CPU can handle a load of 4 (one for each core), so perhaps the disk cannot feed it data quickly enough? Other things may enter the equation, such as the network... although I know my ISP is pretty good. Still, it may be that modern servers can saturate their network connections, causing a small wait at the server.

I suspect much of the delay of already quick sites is in the internet itself and in the user's computer (rendering). In other words, there is only so far you can go.
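
One cheap way to check whether the disk really is the weak link is to see how much time the CPUs spend waiting on I/O. A sketch that samples /proc/stat on Linux - essentially the %wa figure that top and iostat report:

    import time

    def cpu_times():
        # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    before = cpu_times()
    time.sleep(5)                 # sample over a 5 second window
    after = cpu_times()

    deltas = [b - a for a, b in zip(before, after)]
    iowait = deltas[4]            # field 5 (index 4) is time spent waiting on I/O

    print("%.1f%% of CPU time spent in iowait" % (100.0 * iowait / sum(deltas)))

A low load average combined with a high iowait percentage is a good hint that faster storage (or more caching) would help more than a faster CPU.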
 
I may be able to answer that before too long. I've asked my host for a new server quote with an Intel 520 240GB (MLC) SSD and 32/64GB RAM. (y)

New server ordered: AMD Opteron CPU, 64GB DDR3 ECC RAM.
1st (primary) drive: Intel 520 240GB SSD (with 30% over-provisioning [170GB]).
2nd drive (1st backup): Intel 520 240GB SSD (with 30% over-provisioning), backing up on a 6-hour cycle as a straight drop-in if the primary fails.
3rd drive (2nd backup): 250GB HDD for nightly backups (since SSDs do have a finite life and will eventually need replacement). (y)
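
For reference, the over-provisioning arithmetic is simple - 30% of a 240GB drive left unused gives the controller spare area for wear levelling and garbage collection, and leaves roughly the 170GB quoted above:

    # Over-provisioning: space deliberately left unused so the SSD controller
    # has spare area for wear levelling and garbage collection.
    capacity_gb = 240
    over_provisioning = 0.30   # the 30% figure from the spec above

    usable_gb = capacity_gb * (1 - over_provisioning)
    print("Usable space: ~%.0f GB of %d GB" % (usable_gb, capacity_gb))   # ~168 GB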

Should be fun. :D
 
Oh, and whilst I suggested the use of SSDs in a RAID earlier in the thread, we decided against it for our new server because it added another point of failure.

When we researched the use of SSDs in server farms/datacentres we found that there was potentially more risk from the RAID controller failing than there was from the SSD drives themselves failing, so we went for the KISS approach and just added a secondary backup SSD instead. (y)
 
Why a 520? I know the old 320 has "power caps" so the SSD can flush its cache to flash after a power failure. The 520 doesn't have that, I believe...
 
Speed, really - 6Gb/s SATA - higher IOPS than the 320 series - 5-year warranty - and an Intel-customised SandForce controller.

And I went with Intel over some of the drives that scored slightly faster in the tests because I want reliability.

The bottleneck with my current 5-year-old 32-bit dual-core server is disk I/O - I'm simply hitting the limits of what the machine can do. I've spent a lot of time optimising it (making sure as much is cached as possible) and it scores really well in tests, etc., but my main site just keeps getting busier and busier (and is attracting more and more bots and scrapers into the bargain), so rather than just clone to an SSD I thought I might as well go the whole hog and get a new box with lots of RAM that will serve me well over the next 3-5 years of growth. (y)
 