Cisco UCS (Unified Computing System)

Deebs

Well-known member
This is mainly for the techie geeks amongst us (looking at you Slavik!).

Over the past few weeks I have been commissioning a Cisco UCS platform in one of my employer's datacentres. UCS is Cisco's take on blade server computing. It consists of the following:

  • The chassis, which holds the blade servers.
  • The fabric switch, which supports IP and FCoE.
  • The network switch, which supports IP.

I recently ordered quite a large UCS platform (not the largest by any means): 5 chassis and lots of different types of blade servers, both full-width and half-width, plus two 6248s (the fabric switches) and two 5548s (the IP switches). Each chassis has 2 x 4-port Flex modules, although I am only using 2 ports on each Flex module, giving 40Gb/s of bandwidth to the fabric switches. (Around 20 chassis, each holding up to 8 blade servers, is the recommended maximum per set of switches.)

The fabric switches are connected in turn to the 5548s via 2 x dual 10Gb uplinks (all bonded), again giving 40Gb/s of bandwidth. Finally, the two 5548s are connected via 2 x 10Gb uplinks into the metro ring (also running at 10Gb), which uses Cisco 4948Es and Cisco REP (their improved take on spanning tree).
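For anyone who wants the bandwidth arithmetic spelled out, here is a trivial back-of-the-envelope in Python (figures as described above, assuming 10Gb/s per port):

```python
# Back-of-the-envelope bandwidth figures for the topology described above.
# Numbers come from the post; 10Gb/s per port is the stated link speed.

PORT_SPEED_GB = 10

# Chassis -> fabric switches: 2 Flex modules per chassis, 2 of 4 ports in use on each.
chassis_bw = 2 * 2 * PORT_SPEED_GB
print(f"Chassis to fabric switches: {chassis_bw} Gb/s")   # 40 Gb/s

# Fabric switches -> 5548s: 2 x dual 10Gb uplinks, all bonded.
fabric_bw = 2 * 2 * PORT_SPEED_GB
print(f"Fabric switches to 5548s:   {fabric_bw} Gb/s")    # 40 Gb/s

# 5548s -> metro ring: 2 x 10Gb uplinks.
ring_bw = 2 * PORT_SPEED_GB
print(f"5548s to metro ring:        {ring_bw} Gb/s")      # 20 Gb/s
```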

That was the hardware layer. On top of that I have installed ESXi 5.0 and configured each host with resilient, load-balanced NICs (each at 10Gb) to the core. These NICs carry IP, iSCSI and FCoE traffic. There are 3 iSCSI servers connected with dual 10Gb interfaces into a non-routable VLAN that carries only iSCSI traffic.
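If you ever want to sanity-check that NIC layout from a script rather than by clicking through the vSphere client, something along these lines works with pyVmomi. It is only a rough sketch; the vCenter hostname and credentials are placeholders and error handling is deliberately minimal:

```python
# Sketch: list each ESXi host's physical NICs and their link speeds via pyVmomi.
# vcenter.example.local and the credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use proper certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        print(host.name)
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0  # 0 = link down
            print(f"  {pnic.device}: {speed} Mb/s")
finally:
    Disconnect(si)
```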

Why? We need performance and low latency. What is running on the ESXi platform? A multitude of servers:
  • MySQL servers
  • Asterisk servers
  • NGINX servers
  • PHP-FPM servers
  • Windows 2008 servers
The network is configured as Layer 2, so when I install UCS systems at the other sites I can easily implement VMware HA/FT without IP changes. The next step is to install vCloud Director along with vShield. This will give us the ability to do clever things like deploying an entire application stack (including duplicate IPs and MAC addresses) and presenting it to the outside world protected by a virtual firewall.
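One gotcha worth automating as the other sites come online: HA and vMotion expect every host to present the same port group names and VLAN IDs, so a quick consistency check saves head-scratching later. A rough pyVmomi sketch along the same lines as above (placeholder vCenter details, standard vSwitch port groups only):

```python
# Sketch: confirm every ESXi host exposes the same (port group name, VLAN ID) pairs,
# one prerequisite for moving VMs between hosts/sites without IP changes.
# vCenter details are placeholders; covers standard vSwitches only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    per_host = {
        h.name: {(pg.spec.name, pg.spec.vlanId) for pg in h.config.network.portgroup}
        for h in hosts
    }
    expected = set.union(*per_host.values()) if per_host else set()
    for name, groups in per_host.items():
        missing = expected - groups
        if missing:
            print(f"{name} is missing port groups: {sorted(missing)}")
finally:
    Disconnect(si)
```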
Happy days. Feel free to ask questions if you have any.
 
So obviously this is designed to improve scalability and redundancy while keeping costs lower than average...

The next question is: what does your employer do to warrant such amounts of power? I mean, this isn't your standard distributed or load-balanced system... they must have some pretty specific requirements.
 
Voice in the cloud. Low latency and zero jitter.

Best thing is that we have 6Gb/s of transit to the internet (with Tier 1 providers), and I can use some of the platform for my own use :)
 
The only problem I have with vCloud Director is that it can't do bare-metal provisioning; the vSphere hypervisor has to be installed before it can manage the blade. I have also heard reports of UCS blades overheating and catching fire, although only as part of a VCE Vblock. Other than that, it's pretty solid technology that should support what you're trying to do. What are you doing on the storage side?
 
Why don't you use Deployment Services to get ESXi onto the blades? As for overheating, I haven't experienced any of that so far.

For storage I am looking at the EMC VNX series or the 3PAR series (I have 5 years of experience using those SANs).
 
Can't go wrong with VNX. The best of CLARiiON and Celerra without all the baggage. Just iSCSI and NAS, or FC as well?

My typical strategy is iSCSI for all boot drives and NFSv4 (for its security) for NAS. For non-prod, all storage via iSCSI; for prod, data storage via FC.
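Written down as a rule of thumb, that strategy boils down to something like this (purely illustrative Python; the function and names are mine, not any vendor API):

```python
# Illustrative only: the storage-protocol strategy above expressed as a lookup.
def storage_protocol(purpose: str, environment: str = "prod") -> str:
    if purpose == "boot":
        return "iSCSI"    # all boot drives over iSCSI
    if purpose == "nas":
        return "NFSv4"    # NAS shares, NFSv4 for its security model
    if purpose == "data":
        return "FC" if environment == "prod" else "iSCSI"
    raise ValueError(f"unknown purpose: {purpose}")

print(storage_protocol("data", "non-prod"))  # iSCSI
print(storage_protocol("data"))              # FC
```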
 
I would want a mixture of iSCSI and FC presentation for raw disk, then, as you do, NFS for any NAS. I've had a meeting with EMC already; I just need to work on the pricing.
 
Vblock safe? Safe from fire, probably; I'm betting those were isolated incidents. But a safe IT investment? I wouldn't go that far. It's not nearly as mature a solution as EMC and Cisco would have you believe. There are serious technology challenges involved, mostly within the Cisco portion of the solution stack.

Personally, I think it's still too early for converged infrastructure. The cost of converting a data center is enormous and the ROI doesn't fall within a 3-year timeline, putting it outside most corporate refresh cycles. Even waiting 18 months would put one in early-adopter status.
 
Just finished (apart from some cable changes) a new install: 3 chassis, 20 blades (256GB RAM each, E5 CPUs), 5548s and 6296s, with FC and iSCSI. VCE charged too much for the same thing with their brand.
 
Serious technology challenges? Laughable comment. Better check the facts: UCS is now the second-largest server platform by volume, behind only HP, in both the US and worldwide. UCS also continues to move "up and to the right" on the Gartner quadrants, only barely behind HP.

I'd rather have the single interface of UCSM to configure a blade and all of its associated infrastructure than the 5-9 interfaces that competing blade vendors require.
 
Notice the date on that post?

And I can do it from a single interface on a Dell blade too, at a fraction of the cost.

I can configure everything including the kitchen sink from a single interface on IBM PureFlex. It'll cost more, but it'll also include technology advances, such as true storage virtualization, that UCSM can't even come close to.

Drink your Kool-Aid all you want. In the end, all hardware does the same 90%.
 