Pros and Cons of Dedicated Server vs Cloud VPC?

I use Namecheap Stellar hosting.
It costs me AU$101 a year,
but it does the job.

If I wanted to go with a cloud service, it would be perfectly OK for me to do so.
 
And this is one reason that, with ANY dedicated server I have used, I chose hardware RAID instead of software.
That's a double-edged sword. If your RAID card messes up, it corrupts everything and you lose.
Software RAID is much more popular these days (even among people with unlimited budgets), for good reason: losing N drives is less likely than losing 1 card (as long as N > 1, anyway).
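
Rough numbers make the point. Here's a minimal sketch (Python, with purely illustrative failure rates assumed for the example, not vendor statistics, and ignoring rebuild windows and correlated failures):

```python
# Back-of-envelope: chance of losing ALL N mirrored drives vs. losing a
# single RAID card. Rates below are assumed for illustration only.

def p_all_drives_fail(p_drive: float, n: int) -> float:
    """Probability that all n drives fail, assuming independent failures."""
    return p_drive ** n

P_DRIVE = 0.02  # assumed annual failure rate of one drive (2%)
P_CARD = 0.01   # assumed annual failure rate of one RAID card (1%)

for n in (1, 2, 3):
    print(f"{n} drive(s):   P(lose them all) = {p_all_drives_fail(P_DRIVE, n):.6f}")
print(f"single card: P(lose the card) = {P_CARD:.6f}")
# For any n >= 2, 0.02**n is far below 0.01, which is the point above.
```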

The problem with software RAID is that everyone tries to penny-pinch on the layout and uses parity to lose less space. Parity-based recovery is indeed incredibly slow and resource-intensive.
But there is always the good old option of more replicas. Storage is cheap enough these days that a simple 3-replica setup is often feasible. And if not (due to performance or space needs), something like RAID 10 can be a good option.
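
To put rough numbers on the space trade-off between parity, mirrors, and replicas, here's a quick sketch (assuming an arbitrary pool of 8 x 4 TB drives; the fault tolerances listed are worst-case):

```python
# Usable space vs. redundancy for the layouts mentioned above,
# using an assumed pool of 8 drives at 4 TB each.

DRIVES = 8
SIZE_TB = 4

layouts = {
    # name: (usable fraction of raw space, arbitrary failures survived)
    "RAID 5 (single parity)": ((DRIVES - 1) / DRIVES, 1),
    "RAID 6 (dual parity)":   ((DRIVES - 2) / DRIVES, 2),
    "RAID 10 (mirror pairs)": (0.5, 1),  # worst case: both halves of one pair
    "3 replicas":             (1 / 3, 2),
}

for name, (frac, survives) in layouts.items():
    usable = DRIVES * SIZE_TB * frac
    print(f"{name:24s} usable ~ {usable:5.1f} TB, survives {survives} failure(s)")
```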

But at the end of the day, nothing beats backups, at which point it doesn't really matter how slow or fast your solution is at recovery :)
 
I've not had a RAID card fail in nearly 25 years of using them. Meanwhile, I've had several drive failures, and I've used good quality enterprise drives.
 
I've not had a RAID card fail in nearly 25 years of using them. Meanwhile, I've had several drive failures, and I've used good quality enterprise drives.
Same here. In fact one of the old Dell servers I have as a NAS here at the house is around 15 years old and still on the original SAS drives and RAID controller card.
In my 20+ years it has always been a drive that failed and never a RAID controller card.

There is a reason almost every corporate server uses hardware RAID and not software.
 
Software RAID is much more popular on Linux systems compared to Windows or macOS; I'd even say it's the default.
For good reason, because software RAID on Linux works really well.

I agree software RAID sucked in the old days, but hardware has become so fast these days that those drawbacks are close to gone now.

For me, the benefits of hardware RAID don't outweigh the drawbacks, and I have used software RAID on my dedicated servers without any issues.
 
Software RAID is much more popular on Linux systems compared to Windows or macOS; I'd even say it's the default.
And this is evidence of what?
Sorry... but hardware RAID will always exceed software RAID. The simple basis is that software RAID is cheap to do; hardware RAID costs more. And the ultimate determining factor is the actual cost-to-benefit ratio... most want cheap... not quality. 😉
Seems way too many pay more attention to their pocketbook than good computing.
 
Software RAID is much more popular on Linux systems compared to Windows or macOS; I'd even say it's the default.
For good reason, because software RAID on Linux works really well.

I agree software RAID sucked in the old days, but hardware has become so fast these days that those drawbacks are close to gone now.

For me, the benefits of hardware RAID don't outweigh the drawbacks, and I have used software RAID on my dedicated servers without any issues.

All the dedicated servers I ran with Linux always had hardware RAID (PERC) controllers. I wouldn't say that software RAID was/is the default; maybe if you just want cheap, then yes, you would get software RAID, as there is no cost for a dedicated hardware RAID card.
 
Software RAID is much more popular on Linux systems compared to Windows or macOS; I'd even say it's the default.
For good reason, because software RAID on Linux works really well.

I agree software RAID sucked in the old days, but hardware has become so fast these days that those drawbacks are close to gone now.

For me, the benefits of hardware RAID don't outweigh the drawbacks, and I have used software RAID on my dedicated servers without any issues.

Software RAID is popular with people because of cost, not because it's better. There are reasons why enterprises primarily choose hardware RAID over software RAID.

There are some things software RAID simply cannot do. For instance, with a good hardware RAID controller with a BBU or CacheVault, you can turn on write-back caching instead of the default write-through caching, and the result is fsync write speeds superior to fast NVMe drives while still being able to survive a power outage. Software RAID's fsync speed is limited to that of the drive, and it can't use write-back caching.
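
For anyone who wants to see where their own setup sits, here's a minimal fsync latency probe (plain Python, writing a temp file on whatever volume the script runs from; the write size and iteration count are arbitrary):

```python
# Times small synchronous writes; on a plain drive this is bounded by the
# device's flush latency, while a battery/flash-backed write-back cache can
# acknowledge the flush from controller RAM.
import os
import tempfile
import time

def avg_fsync_ms(path: str, iterations: int = 200) -> float:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, b"x" * 4096)  # 4 KiB write, like a small DB log record
            os.fsync(fd)               # force it to stable storage
        return (time.perf_counter() - start) / iterations * 1000
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
print(f"average 4 KiB write + fsync: {avg_fsync_ms(path):.3f} ms")
os.unlink(path)
```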

Rebuild speeds are faster too with hardware RAID, plus it's usually easier to hot-swap a drive.

Software RAID has its place, and some of my servers use it. But for those of mine and my clients' that need the fastest database write speeds and the most resiliency... it's hardware RAID.
 
It’s fine if you want to believe things that are wrong, but HBAs + storage systems like Ceph have long eclipsed hardware RAID when it comes to top-tier storage solutions…

Write caches aren’t a magical feature of hardware RAID and can be done just fine with RAM + any JBOD-based system…

Whatever the motivations of "most" people are for either, I don’t know (as they vary and I haven’t done a massive polling effort on the matter). But they may be to save a buck in some cases.
But you still really can’t classify things like that…
 
does not get around the FSYNC speed barrier
Depends; the hypervisor can be the one to manage I/O caching, for example, so your guest OS would be none the wiser.

It is however a tremendously bad idea in almost all cases… fsync exists for a good reason…
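
A small sketch of that trade-off, comparing buffered writes (roughly what a caching layer that lies to the guest gives you: fast, but lost on power failure) with fsync'd writes (slow, but durable); the sizes and counts are arbitrary:

```python
import os
import tempfile
import time

def timed_writes(path: str, count: int = 500, do_fsync: bool = False) -> float:
    """Time `count` 4 KiB writes, optionally forcing each to stable storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, b"x" * 4096)
            if do_fsync:
                os.fsync(fd)   # this is the durability guarantee being skipped
        return time.perf_counter() - start
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
print(f"buffered (not durable): {timed_writes(path):.3f} s")
print(f"fsync'd (durable):      {timed_writes(path, do_fsync=True):.3f} s")
os.unlink(path)
```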

If things were THIS tight, having a 3-replica setup with the replica that handles initial writes running on Optane or the like sounds like a decent approach.
I doubt that there are really any cases where you’re using off-the-shelf software that can’t be distributed somehow AND where this isn’t enough performance.

But if you’re in that situation, fine. The problem, however, is not the storage but the technological choices, frankly. You can argue that this is moving the goalposts, and that’d be fair. Pretty sure this doesn’t apply to anyone here, though.
 
I went from dedicated to cloud. Currently using Hetzner in Germany, but considering a move back to the U.S. with Hetzner.

Running some smaller instances alongside my primary server - using Hetzner's firewall and private networking features. For example, my Elasticsearch server is a tiny instance running Debian and configured per ES's configuration guide; it's not publicly exposed and works great just supporting searches.

I used to distribute static content via an AWS origin node, also in Germany, for CloudFront, but have since taken the CDN and CloudFront down. I'm still using LiteSpeed, LSCache, and some other bits. A CDN still makes sense, but I'm not using one and have an image-heavy site.

The majority of my users are US, Canada, UK, and Australia.

Dedicated may make sense again someday, but I'm going to try to stick with cloud for now. I don't really know how Hetzner compares to others, but they have an impressive platform.
 
I do my own off-site, twice-daily backups too, based on past experience. No matter how prepared a company is, 'stuff' happens.

One of the WTC towers partially fell on a previous host on 9/11, and their data lines were also knocked out for months.

On another host, a staff member accidentally typed a command that wiped their whole data center. All local backups were lost, and the only option was remotely restoring thousands of accounts across a bunch of servers. That would have taken weeks because of bandwidth limits, so they had to fly in drives from the remote site to the host. They were physically back online within the hour, but everything was blank/defaulted.

I had my own backups each time, thankfully, and was back online quickly. It taught me a valuable lesson! It also taught me to register domains to multiple email addresses, including at least one that is NOT on the same ISP. (9/11 was a nightmare trying to change to another ISP, because everything was going to a different email address that was also on that server and down. It was with Netsol at the time, and I had to fax in documents and wait.)
 