Deebs
Well-known member
This is mainly for the techie geeks amongst us (looking at you, Slavik!).
Over the past few weeks I have been commissioning a Cisco UCS platform in one of my employer's datacentres. UCS is Cisco's take on blade server computing. It consists of the following:
The chassis, which holds the blade servers.
The fabric switch, which carries both IP and FCoE.
The network switch, which carries IP only.
I recently ordered quite a large UCS platform (not the largest by any means): 5 chassis and lots of different types of blade servers, both full width and half width, plus two 6248s (the fabric switches) and two 5548s (the IP switches). Each chassis has 2 x 4-port Flex modules, although I am only using 2 ports on each module, giving 40Gb of bandwidth per chassis to the fabric switches. (Around 20 chassis, each holding up to 8 blade servers, is the recommended maximum per pair of switches.)
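For anyone wanting to sanity-check the numbers, the per-chassis figure works out like this (a quick back-of-the-envelope sketch in Python, using only the figures quoted above):

```python
# Back-of-the-envelope check of the per-chassis uplink bandwidth quoted above.
FLEX_MODULES_PER_CHASSIS = 2   # two Flex modules per chassis
PORTS_IN_USE_PER_MODULE = 2    # only 2 of the 4 ports on each module are in use
GBPS_PER_PORT = 10             # each port runs at 10Gb

per_chassis_gbps = FLEX_MODULES_PER_CHASSIS * PORTS_IN_USE_PER_MODULE * GBPS_PER_PORT
print(f"Uplink bandwidth per chassis: {per_chassis_gbps} Gb/s")  # -> 40 Gb/s
```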
The fabric switches are in turn connected to the 5548s via 2 x dual 10Gb uplinks (all bonded), again giving 40Gb of bandwidth. Finally, the two 5548s are connected via 2 x 10Gb uplinks into the metro ring (also running at 10Gb), using Cisco 4948E switches and Cisco REP (Resilient Ethernet Protocol, their improved alternative to spanning tree).
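One thing worth understanding about bonded uplinks: aggregation gives you 40Gb in total, but each individual flow is pinned to a single member link by a hash, so any one flow still tops out at 10Gb. Here is a rough sketch of the idea (this is not the hash Cisco actually uses, just an illustration of flow pinning):

```python
import hashlib

LINKS = ["uplink-1", "uplink-2", "uplink-3", "uplink-4"]  # 4 x 10Gb members

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Pin a flow to one member link by hashing its 4-tuple.

    Every packet of a given flow hashes to the same link, which keeps
    packet ordering intact but caps a single flow at one link's speed.
    """
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return LINKS[digest % len(LINKS)]

# The same flow always lands on the same member link.
print(pick_link("10.0.0.5", "10.0.1.9", 49152, 3306))
```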
That was the hardware layer, on top of that I have installed ESXi 5.0 and configured each host to have resilient, load balanced nics (each at 10 gig) to the core. These nics carry IP, iSCSI and FC0E traffic. There are 3 iSCSI servers connected with dual 10gig interfaces, into a non-routable VLAN, which only carries iSCSI traffic.
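Because the iSCSI VLAN is non-routable, the only sensible place to sanity-check the targets is from a host sitting on that VLAN. Something like this throwaway Python check does the job (the target addresses here are made up for illustration; 3260 is the standard iSCSI target port):

```python
import socket

# Hypothetical addresses on the non-routable iSCSI VLAN; substitute your own.
ISCSI_TARGETS = ["192.168.100.11", "192.168.100.12", "192.168.100.13"]
ISCSI_PORT = 3260  # standard iSCSI target port

for target in ISCSI_TARGETS:
    try:
        # A plain TCP connect is enough to prove the VLAN path is up.
        with socket.create_connection((target, ISCSI_PORT), timeout=2):
            print(f"{target}:{ISCSI_PORT} reachable")
    except OSError as exc:
        print(f"{target}:{ISCSI_PORT} UNREACHABLE ({exc})")
```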
Why? We need performance and low latency. What is running on the ESXi platform? A multitude of servers:
- MySQL servers
- Asterisk servers
- NGINX servers
- PHP FPM servers
- Windows 2008 servers
Happy days. Feel free to ask questions if you have any.