Would you recommend an SSD-powered server for speed optimization?

SSDs are the way to go... if you can afford them.
It depends: do you value your data, and are you prepared to replace your SSDs on a regular basis? Do you have a reliable backup solution in place?
From what I read in this thread, not a lot of people are aware of the danger of using SSDs on a busy site.
In case you're wondering, SSDs suffer from write degradation over time. In other words, when you install your SSD array you will notice a HUGE performance boost. Then, 8-12 months later (on your busy site) you run some performance tests and are shocked to find that your ultrafast SSDs are slower than regular 5400 RPM disks... a few days later your site is down because all your SSDs have failed. As Red Hat engineers put it very well: "Performance degrades as the number of used blocks approaches the disk capacity. The degree of performance impact varies greatly by vendor. However, all devices experience some degradation." There are firmware updates for many SSDs that try to reduce this degradation, but there are limits to what the SSD controller can do. There is even software that can measure the level of degradation your SSDs have reached.

In conclusion, nothing will replace a fair number of SAS 15K disks mounted in RAID (Red Hat) or ZFS (FreeBSD). At least not for now, with current SSD technology.
 
Normally around 6,000-9,000 users online (via Google Analytics Real-Time); how about you?
Holy crap! Your site isn't that old, then. Ours has been online for more than a decade.

We have around 500-1000 users online normally.

@Floren:
That was the case for most of the earlier SSDs. The one I mentioned earlier is the newest generation; you can fill that drive to 100%, 10 times a day... for 5 years.
The firmware of most modern SSDs is so advanced now that degradation isn't really an issue anymore, unless you have a ridiculously large website/application that writes terabytes a day...
 
It depends: do you value your data, and are you prepared to replace your SSDs on a regular basis? Do you have a reliable backup solution in place?
......
In conclusion, nothing will replace a fair number of SAS 15K disks mounted in RAID (Red Hat) or ZFS (FreeBSD). At least not for now, with current SSD technology.

True. I also recommend checking SSDs regularly and replacing them every 3-5 years (depending on their quality).
And who runs a business site without a reliable backup in place, regardless of the type of disks?

We have a web server with a first-generation SSD that has been running for a bit over two years, and our checks show only very little performance impact since then.

So my conclusion would be to install just a single SSD instead of a "fair number of SAS 15K disks" in a server and invest the saved money in a second backup server, also with just one SSD. You will still have some money left to replace the SSDs after 3-5 years. This results in huge performance and high reliability at a fair price.
 
And IF one fails due to its age, I doubt it would happen spontaneously; you would most likely still be able to save almost all the data on it (unlike with most failed HDDs).

This, plus you can track the total full write cycles and sometimes estimated life percentage via SMART, so it's easy to predict when an SSD is nearing the end of its life.
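If you want to automate that check, here's a minimal sketch (my own addition, not from this thread) using smartctl from smartmontools. It assumes the tool is installed and the script runs as root, and the wear-related attribute names below are just common vendor examples, not a complete list.

```python
# Minimal sketch: pull wear-related SMART attributes for an SSD via smartctl.
# Assumes smartmontools is installed and the script runs with root privileges.
# Attribute names differ by vendor; the ones below are common examples only.
import subprocess

WEAR_HINTS = (
    "Media_Wearout_Indicator",   # common on Intel drives
    "Wear_Leveling_Count",       # common on Samsung drives
    "Total_LBAs_Written",
    "Percentage Used",           # NVMe health log field
)

def ssd_wear_lines(device="/dev/sda"):
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,
    ).stdout
    return [line for line in out.splitlines()
            if any(hint in line for hint in WEAR_HINTS)]

if __name__ == "__main__":
    for line in ssd_wear_lines():
        print(line)
```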

Personally, I've found server load to be significantly lower than what an SSD gets put through as the sole disk in a desktop/laptop (obviously this depends on what you're hosting). The SSD in my laptop has gone through the same number of write cycles as the SSD in my server, but in a tenth of the power-on hours. (And the SSD in my laptop is 4x larger, so that figure effectively scales to 40x.)
 
......
The firmware of most modern SSDs is so advanced now that degradation isn't really an issue anymore, unless you have a ridiculously large website/application that writes terabytes a day...

I also have a plan to replace/upgrade these SSDs every 2 years.
 

Attachment: Screen Shot 2012-12-21 at 9.49.09 PM.webp (7.6 KB)
Server move completed. (y)

Have a look and see what you think, speed-wise - www.cyclechat.net - personally, I'm very happy with it. :D

The old server averaged a load of 1.20 / 1.40 with spikes of up to 25.00 when it hit I/O bottlenecks - the new server? Idling at around 0.04 / 0.08 ... (y)

Cheers,
Shaun :D
 
I keep seeing people talk about write degradation... There were tests done recently, and certain SSDs, particularly the Intel 520 and Samsung 840 Pro (the 850 Pro is better), can handle more than a petabyte of writes.

If you don't swap, have noatime and nodiratime on, and aren't running heavy logging, you generally don't do that much in writes in the first place.
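As a quick way to verify those mount options, here's a minimal sketch (my own addition) that reads /proc/mounts on Linux and flags filesystems still mounted without noatime; the filesystem whitelist is an assumption, so adjust it for whatever you actually run.

```python
# Quick check (Linux-only): list filesystems not mounted with noatime,
# i.e. candidates for adding noatime/nodiratime to cut unnecessary writes.
REAL_FS = {"ext3", "ext4", "xfs", "btrfs"}  # assumption: the filesystems you care about

with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype, options = line.split()[:4]
        if fstype in REAL_FS and "noatime" not in options.split(","):
            print(f"{mountpoint} ({fstype}) mounted with: {options}")
```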

As to raid controllers being dangerous, yeah, they can be. Software raid 5 or 6 is probably the safest you can be if you don't mind going back in time a bit. Software raid 1 is fine if you're cheap.

The purpose of a raid controller is to take the RAID work off the CPU and, via its battery-backed cache, to prevent transactional data loss during a full system stop.

If going back in time 30 seconds doesn't concern you, skip it and go with software, as it's safer.
 
In general, if there isn't a substantial cost difference, I would in fact recommend an SSD over a hard drive, if it fits within your storage requirements.

The primary advantage of an SSD is that it avoids random I/O hiccups.

On an SSD, I/O is very rarely your limiter, and the bottleneck usually moves to something else. SSDs can also lead to lower CPU usage, thanks to lower wait times on a single thread.
 
The managed host I am with monitors the hardware, so if an SSD fails, it can switch over to a mirrored drive that is a clone of the first, and they'll swap out the faulty drive within minutes at no cost to us. And they do use the enterprise level Intel SSDs as well. We've had ours since moving to XenForo and we serve up to 1500 online users during peak hours, with no reliability issues. We were starved for memory, though, so we had to upgrade the server.
 
raid 1/5/6/10 is pretty normal

Shouldn't run without it.
Uhmmm.. not a good idea to run SSDs in RAID 5 (or its associated types) if I remember correctly. RAID 1 or 10 was the suggestion at the time I was looking into it.
 
Uhmmm.. not a good idea to run SSDs in RAID 5 (or its associated types) if I remember correctly. RAID 1 or 10 was the suggestion at the time I was looking into it.
Actually, it depends on your specific workload

Raid 5 has parity protection, which can fix corrupted data, while raid 1/10 can only rebuild from the surviving mirror copy.
Raid 1 and raid 5 are approximately the same for write speed, but raid 5 has a faster read speed.

In a 3-drive configuration with 120GB drives, raid 5 should give around 240GB of storage space, while it takes four 120GB drives (or two 240GB drives) to get 240GB of storage with raid 10/1.
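To make the capacity arithmetic explicit, here's a rough sketch of the usual rules of thumb behind those figures; real arrays lose a little to metadata, and the helper name is just for illustration.

```python
# Rough usable-capacity rules of thumb behind the figures above.
# Real arrays lose a little to metadata; helper and level names are illustrative.
def usable_gb(n_drives, drive_gb, level):
    if level == "raid1":    # every drive mirrors the same data
        return drive_gb
    if level == "raid5":    # one drive's worth of space goes to parity
        return (n_drives - 1) * drive_gb
    if level == "raid10":   # striped mirrors: half the raw capacity
        return (n_drives // 2) * drive_gb
    raise ValueError(f"unknown level: {level}")

print(usable_gb(3, 120, "raid5"))   # 240 -> three 120GB drives in RAID 5
print(usable_gb(4, 120, "raid10"))  # 240 -> four 120GB drives in RAID 10
print(usable_gb(2, 240, "raid1"))   # 240 -> two 240GB drives in RAID 1
```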

Raid 10 with 4 drives will have twice the write speed of raid 5 with 3 drives, and a slightly faster read speed, but it still lacks parity protection.

If you're in a situation where you need more storage and read speed, but don't need more write speed, upgrading to raid 5 from raid 1 makes sense.

Most webservers fall into this category.

This applies to Linux/BSD-based systems, not Windows. Windows does not have the read-speed advantages with raid 1 or raid 5.
 
Actually, it depends on your specific workload
From what I remember, it was about more than write speed... it was the actual number of writes involved. RAID 5 generates a higher number of writes by its nature, and with the finite (although high) write endurance of SSDs, it would kill them sooner than a RAID 1/10 config.
 
From what I remember, it was about more than write speed... it was the actual number of writes involved. RAID 5 generates a higher number of writes by its nature, and with the finite (although high) write endurance of SSDs, it would kill them sooner than a RAID 1/10 config.

The difference is insignificant.

What you're describing is an issue with SSDs from before 2009, which didn't have wear-leveling technologies and high write tolerance.

Anyway, you can write over a petabyte to a high-grade SSD.
That requires torture conditions: constant writes, with no reads at all, for over seven months straight.

If you do have an operation with that much writing, you probably have a large enough dataset to warrant raid 10 on spinning disk, as you won't be able to fit it on SSDs anyway.

Also, you have the option of running heavy duty operations in memory, instead of on disk, which is faster anyways.

On a website, the operations are mostly read only, so it'd take over a decade to kill the drive.
 
What you're describing is an issue with SSDs from before 2009, which didn't have wear-leveling technologies and high write tolerance.
Anyway, you can write over a petabyte to a high-grade SSD.
This may be part of the issue. Not all hosting providers are using actual enterprise-level SSDs - a lot of them are using desktop versions.

Like I said, it was a while ago. We typically just stick with SAS drives for what we use.
 
This may be part of the issue. Not all hosting providers are using actual enterprise-level SSDs - a lot of them are using desktop versions.

Like I said, it was a while ago. We typically just stick with SAS drives for what we use.
The petabyte write tolerance was tested on an Intel 520 and a Samsung 840 Pro.

If you're going to do something that write-heavy, an SSD doesn't make sense in the first place, as SSDs have issues with variable write speed.

When you write something to an SSD, changing what is in a block means the block has to be erased first, so it's effectively two operations. Under normal circumstances, TRIM is used to clear written-to but unallocated blocks during low-load periods. If you never have those quiet periods, performance suffers.

SSDs simply aren't very good at long-duration sustained writes, period.

They also don't have the necessary capacity for that.

For example, a 500GB SSD can fill itself up in about 20 minutes.
For this reason, torture tests represent unreasonable conditions.

It'll take 8 hours for a 6TB drive to fill itself up.
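For what it's worth, those fill times fall out of a simple capacity-over-throughput calculation; the write speeds below (~420 MB/s for a SATA SSD, ~210 MB/s for a large spinning drive) are my own assumptions, not figures from this thread.

```python
# Back-of-the-envelope fill times: capacity divided by sustained write speed.
# Throughput figures are assumptions (roughly SATA SSD vs. large 7200 RPM HDD).
def fill_time_hours(capacity_gb, write_mb_per_s):
    return capacity_gb * 1000 / write_mb_per_s / 3600

print(f"500 GB SSD @ 420 MB/s: {fill_time_hours(500, 420):.1f} h")  # ~0.3 h, about 20 minutes
print(f"6 TB HDD  @ 210 MB/s: {fill_time_hours(6000, 210):.1f} h")  # ~7.9 h, about 8 hours
```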

I see no reason not to use raid 5 where appropriate, such as mostly read workloads.

If you're writing that much to an SSD, you really just need more RAM :P
 