Nimbus Hosting vs DigitalOcean

You can't prevent a DDoS attack on their routers, you just can't.

Huh??? Of course you can!! They CHOOSE not to, as it isn't part of their business model. That's fine, but then you risk issues like they had. The majority of their network is down/facing major connectivity issues, and it takes them a month to fix it? That's no datacenter I would ever want to be in. In fact, that almost has to be a new record. I don't think I've ever seen a provider take a month to mitigate an attack. I'm glad you only experienced random timeouts and long ping times. People's sites, businesses that make real money, etc., were completely down and inaccessible for weeks. Long ping times and timeouts are not acceptable either, for that matter.

As for: https://blog.linode.com/2016/01/05/security-notification-and-linode-manager-password-reset/
I'd say that's a security precaution. I asked, and they didn't have immediate evidence to say there was any compromise of their servers, but there could have possibly been some leaked data.

So how do you think that data was leaked? Their client system just... happened to spew out a bunch of client data to an unknown 3rd party, for absolutely no reason? Their site was compromised, and client data was stolen. They say it affected 3 clients. There are databases out there on the interwebs that say otherwise... i.e., the entire client database being dumped, with all your personal information.

Then you have the major hack back in 2013 (there have been some minor ones since), and who can forget the Great Bitcoin Heist of 2012, when Linode could once again not secure their network? http://www.theregister.co.uk/2012/03/02/linode_bitcoin_heist/

These things are not commonplace in the industry. These are not things that happen to the majority of hosts. Yet knowing all this, people keep recommending them. Very confusing to me.
 
Huh??? Of course you can!! They CHOOSE not to, as it isn't part of their business model. That's fine, but then you risk issues like they had.
I don't think you've ever run a host. When you colocate somewhere and your routers are being attacked, you can mitigate it with your team (which they did), attempt to hire an external agency to deal with it, or just leave it until the attacks stop. They did the first one.

Anyone can get attacked. It's not specific to Linode. Microsoft, Google, Amazon, all of them can be attacked too, and they probably are, but they don't serve clients the same way: they replicate data so that if one node goes down it doesn't matter, along with various other techniques, most of which I don't know. The point is, you can take down Linode just as you can anything else. It's harder to mitigate these attacks than you think, depending on how powerful the attack is.
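Just to illustrate roughly what I mean by replication/failover (this is only a toy sketch with made-up endpoints and names, not how any of these companies actually do it): if the same data sits in several independent locations, a client can simply fall back to another replica when one is unreachable, e.g. because its datacenter is getting hit.

# Toy sketch of client-side failover across replicated endpoints.
# The URLs below are hypothetical placeholders, not real services.
import urllib.request
import urllib.error

REPLICAS = [
    "https://us-east.example.com/status",
    "https://us-west.example.com/status",
    "https://eu-west.example.com/status",
]

def fetch_with_failover(urls=REPLICAS, timeout=3):
    """Return the body from the first replica that answers in time."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this replica is down or unreachable; try the next
    raise RuntimeError(f"all replicas unreachable: {last_error!r}")

if __name__ == "__main__":
    print(fetch_with_failover()[:200])

A single-location VPS can't fall back like that, which is part of why an attack on one facility hurts its customers so much more.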
 
There are databases out there on the interwebs that say otherwise....i.e. the entire client database being dumped, with all your personal information
Probably the 2013 one repeated

The Bitcoin heist affected them, not the customer. A lot of this hurts them (or you as the host) more than your users. So at the end of the day, there are fewer problems reported with individual customer servers.
 
I don't think you've ever run a host. When you colocate somewhere and your routers are being attacked, you can mitigate it with your team (which they did), attempt to hire an external agency to deal with it, or just leave it until the attacks stop. They did the first one.

Uhhhhhhh.....I've run a host for the past 20 years, and have multiple cabinets of co-located equipment. In this day and age, it should never, ever, ever take a month to mitigate a DDoS attack. In my 20 years in the industry, I can't tell you a single other incident in which a host took over a month to mitigate such an attack.

Probably the 2013 one repeated

Oh no. All new information from new clients, as recent as 2015. ;)

The Bitcoin heist affected them, not the customer.

This is true, but it goes to show that they are repeatedly incapable of securing their network.
 
information from new clients, as recent as 2015. ;)
It's claimed a lot of the time, but usually it's a repeat of old dbs. I will check my account in the db, though, to see how recent it is.

In my 20 years in the industry, I can't tell you a single other incident in which a host took over a month to mitigate such an attack.
It was more like two weeks, I think, but yeah, a bit OTT. I'd definitely say some incompetence was at play, but you have to admit it's not a case of pressing a button and fixing it. It's expensive, tedious and annoying.
 
My Linode instance was in the Dallas DC; I can't say I noticed any downtime/slowness.

If it happens again and I do experience downtime, well yes.. I would have to reconsider my host.

Right now so far so good with Linode.
 
Newark and Fremont (which were the affected nodes) aren't really the most common locations for serving data globally, which I assume most forum owners do

You cannot be serious. Newark is across the river from NYC, economically the most important city in the world. It's probably their most popular location.
 
Please just stop... You're just pulling random ideas from the sky and making yourself look silly... WSWD is prolly the longest running web host in this community... NY Metro is one of the largest peering points in the world.
My bad on Newark, I was thinking of something else (not a NY location). Dallas is definitely one of the most popular choices for general US distribution and NY for global. Anyway, that comment I made (second quotation) was aimed at a large DDoS attack not being easy to mitigate. I don't think you can deny that with any sense. Some of my information above was misinformed or biased, but you definitely can't mitigate DDoS attacks easily if they're large scale and directly flooding routers. And if someone was attacking all of their locations simultaneously, it probably means they have enough power to keep an attack going.

I don't think you can solely blame it on Linode. They might've handled it badly or been incapable of mitigating it better, but it's not an easy task. You or I couldn't do it.
 
Wow!! Thank you for the link. It was only an 80Gbps attack, apparently. That's not a large attack by any stretch of the imagination. :( I really do feel bad for them, and hopefully they learned a lot of lessons from these attacks and actually do something to prevent them in the future.
I'm not surprised they had issues with relying on facility transit at their scale, especially coordinating with all their different providers when the infrastructure in each of their datacentres is being attacked at once.
 
Thanks!

Wow!! Thank you for the link. It was only an 80Gbps attack, apparently. That's not a large attack by any stretch of the imagination. :( I really do feel bad for them, and hopefully they learned a lot of lessons from these attacks and actually do something to prevent them in the future.
Yeah, it isn't. I haven't had a chance to read it yet, though. I'm sure there's some better explanation for the time taken, but yeah, I feel bad for them too - it's always a pain for a business when that happens, and I'm sure they would've tried many ways to overcome the attacks... but 80Gbps isn't much to mitigate.
 