VMware vSphere on IBM BladeCenter H – (Part 2 of 2)

Yes, finally! It’s been, what, five months?! The delay in publishing this part was mainly due to the delay in certifying the new IBM HX5 blades on vSphere. It’s quite a long process that you can read about here, but the good news is that the hardware is finally on VMware’s HCL, and I can comfortably blog about the subject now without causing any confusion to the readers.

Before we dig deep into the new designs, I’d like to mention some minor changes in the diagram.

Updated Diagram

I’ve included the old configurations along with the new ones in one updated PDF. The main difference is that I’m now using separate pages to show each configuration. In the old version I used layers to show and hide the configurations as you selected them. Separate pages make it easier to browse through the configurations and, to tell you the truth, they also reduce the complexity of maintaining the diagram. It’s a crazy process to keep track of all those layers in Visio, especially when we are talking about more than 7,000 shapes floating on the same design area!

Now let’s get down to business.

Configuration (5) – HX5:

This is Big Blue’s latest two-node blade technology. I emphasize “two-node” here because it’s the only configuration certified to run with vSphere as of the time of writing these lines. Please note that you can use up to four nodes with the HX5, but this won’t be supported by VMware. When we talk about two nodes here, we mean one of the following:

  • The base blade (CPU + memory + HDD) plus the MAX5 memory expansion unit, to scale up the memory available to the blade.
  • The base blade (again CPU + memory + HDD) plus a second, similar node to scale all the blade components: that’s 4 x CPUs + 2 x sets of memory modules + 2 x I/O expansion cards.

As you will see in the diagram, I chose the second option to talk about.

Now, what do we have here? It’s simply redundancy at its best! We can place our networks freely, with full redundancy, as you can see in the layout of the vNICs. For example, if the CFFh expansion card on either of the two nodes fails, traffic will still flow without any issues over the other node’s CFFh card. The same holds true for the on-board ports: if for any reason one of these ports fails, traffic will flow over the other node’s board.
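
To make this concrete, here is a minimal pyVmomi sketch of that layout on a standard vSwitch: one uplink from each node’s CFFh card is bonded to the same vSwitch, so losing either card leaves the other path carrying the traffic. The host name, credentials and vmnic numbering are assumptions for illustration only, not part of the actual design.

# Minimal sketch (pyVmomi): bond one uplink from each node's CFFh card to the same
# standard vSwitch so traffic keeps flowing if either card fails.
# Host name, credentials and vmnic numbering are illustrative assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host='esx-hx5-01.lab.local', user='root',
                  pwd='secret', sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, 'esx-hx5-01.lab.local',
                                            vmSearch=False)
net_sys = host.configManager.networkSystem

# One CFFh port per node (names assumed); the default teaming policy on a bonded
# vSwitch keeps both uplinks active and fails over automatically.
uplinks = ['vmnic2', 'vmnic8']

net_sys.AddVirtualSwitch(
    vswitchName='vSwitch1',
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks)))

Disconnect(si)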

Apart from that, I’m introducing the DMZ networks here for the first time. Most enterprises prefer to separate the DMZ networks/servers onto a different chassis for security reasons. While that is a valid decision, this blade configuration gives us a workaround for organizations that are less paranoid about DMZ security, yet still want good isolation. Let’s see how this is done in detail:

  • For the networks, we have two dedicated blade switches that uplink *only* to the corporate DMZ switches (in this case bays 9 & 10). This means no traffic will flow over them from either the internal networks or the VMkernel networks. The same goes for the blade ports: NICs 4, 5, 10 and 11 are always dedicated to the DMZ networks and run at full performance and redundancy (see the sketch just after this list).
  • For the SAN, we can also ensure that we have dedicated HBAs as well as isolation. The uplinks to the SAN switches are segmented across the two bays 3 and 4, and connected directly/physically to the appropriate SAN fabrics.
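
Here is a rough pyVmomi sketch of what that network isolation looks like from the ESXi side: a separate standard vSwitch whose only uplinks are the DMZ-dedicated NICs, with a DMZ port group on top of it. The host name, credentials, vmnic numbering and VLAN ID are illustrative assumptions.

# Sketch (pyVmomi): a dedicated DMZ vSwitch backed only by the DMZ blade ports,
# so DMZ traffic never shares a physical path with internal or VMkernel networks.
# Host name, credentials, vmnic numbering and VLAN ID are illustrative assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx-hx5-01.lab.local', user='root',
                  pwd='secret', sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, 'esx-hx5-01.lab.local',
                                            vmSearch=False)
net_sys = host.configManager.networkSystem

dmz_uplinks = ['vmnic4', 'vmnic5', 'vmnic10', 'vmnic11']   # DMZ-only blade ports

# The DMZ vSwitch: nothing but DMZ port groups should ever be attached to it.
net_sys.AddVirtualSwitch(
    vswitchName='vSwitch_DMZ',
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=dmz_uplinks)))

# A single DMZ VM port group; VLAN 100 is a placeholder.
net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name='DMZ-VM-Network',
    vlanId=100,
    vswitchName='vSwitch_DMZ',
    policy=vim.host.NetworkPolicy()))

Disconnect(si)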

Configuration (6) – Virtual Fabric:

Before we start with this configuration, I would like to state that I am not quite sure whether these Emulex Virtual Fabric Adapters (VFA) are supported by VMware or not. While I can’t see them clearly on the HCL under the name VFA, I can see some Emulex documents saying that they are. Of course, the reference here should always be the VMware HCL itself, not anything else, but I will double-check on that and update this post later. With that said, please treat this configuration carefully and make sure to confirm this point before engaging in any vSphere design around it.

Now let’s dig deep into this cool technology. IBM simply has this Virtual Fabric concept of slicing your CFFh expansion card into eight different ports. This doesn’t only mean that you have the flexibility to adjust the speed, but also the protocol. For example, you can choose to use Ethernet, Fibre Channel, FCoE or even iSCSI with hardware initiators.

In our case here I used only Ethernet as the protocol for these ports, and then sliced them into eight different vNICs with various link speeds. Perhaps a screenshot from the diagram will make things clearer.

As you can see, we set the bandwidth for the Service Console (SC) to 1Gb, since we normally don’t require high bandwidth for management, while we set 3Gb and 5Gb link speeds for the Fault Tolerance and VM networks respectively. By default these vNICs are set to 2.5Gb each (4 x 2.5Gb = 10GbE on each of the two physical ports), but you have full flexibility to change that as you see fit.
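
The slicing itself is done on the adapter (through the AMM/UEFI), not in vSphere, but once it’s in place each vNIC shows up on the host as its own vmnic with the configured speed. The short pyVmomi sketch below simply checks, from the ESXi side, that the slices report the expected link speeds; the host name, credentials, vmnic names and the speed mapping are assumptions for illustration.

# Sketch (pyVmomi): confirm the Virtual Fabric vNIC slices report the expected speeds.
# Host name, credentials, vmnic names and the speed mapping are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx-vfa-01.lab.local', user='root',
                  pwd='secret', sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, 'esx-vfa-01.lab.local',
                                            vmSearch=False)

# Expected speeds in Mb/s: 1Gb Service Console, 3Gb Fault Tolerance, 5Gb VM networks.
expected = {'vmnic0': 1000, 'vmnic2': 3000, 'vmnic4': 5000}

for pnic in host.config.network.pnic:
    speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0   # 0 when the link is down
    want = expected.get(pnic.device)
    note = '' if want in (None, speed) else '  <-- not the speed we sliced'
    print('%s: %d Mb/s%s' % (pnic.device, speed, note))

Disconnect(si)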

Configuration (7) – CNA:

A very simple design to wrap up this series with. It’s the traditional CNA (oh yeah, it’s a common and traditional technology now!). As you can see in the diagram, we have a CFFh expansion card here, and it has four ports:

  • Ethernet ports: that’s 2 x 10GbE Ethernet ports for the networking traffic. We treat them here just as we treat any 10GbE port, slicing them via vNetwork traffic shaping in vSphere to achieve the bandwidth we want (a sketch of this shaping follows at the end of this configuration).
  • Fibre Channel ports: that’s 2 x HBA ports for SAN traffic. Instead of going into the traditional bays 3 & 4 as we’ve seen across the whole series of configurations, this time the traffic is multiplexed and pushed to the Nexus 4000 blade switches.

Did I just say Nexus 4000?! Yep, that’s a Nexus switch specially developed by Cisco to be used (currently only) with the IBM BladeCenter H/HT. But here is the catch: you will still need Nexus 5000 switches to segregate the FCoE traffic coming from the Nexus 4000 and then forward the network and FC traffic to the existing LAN and SAN respectively. Of course, we should have redundancy here at all layers: in the BCH we have two Nexus 4000 switches sitting in bays 7 and 9, while we have two Nexus 5000 switches at the back end.
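
As for slicing those 10GbE Ethernet ports, the sketch below shows one way to apply vNetwork traffic shaping to a VM port group on a standard vSwitch through pyVmomi. The host name, credentials, port group/vSwitch names and the bandwidth figures are illustrative assumptions; note that the API expresses average and peak bandwidth in bits per second and burst size in bytes.

# Sketch (pyVmomi): shape a VM port group on one of the 10GbE CNA-backed vSwitches,
# e.g. cap it at 5Gb/s average with a 6Gb/s peak. Names and figures are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx-cna-01.lab.local', user='root',
                  pwd='secret', sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(None, 'esx-cna-01.lab.local',
                                            vmSearch=False)
net_sys = host.configManager.networkSystem

GBPS = 1000 * 1000 * 1000   # the API takes average/peak bandwidth in bits per second

shaping = vim.host.NetworkPolicy.TrafficShapingPolicy(
    enabled=True,
    averageBandwidth=5 * GBPS,
    peakBandwidth=6 * GBPS,
    burstSize=100 * 1024 * 1024)   # 100 MB burst, in bytes

# Re-apply the port group spec with the shaping policy attached.
net_sys.UpdatePortGroup(
    pgName='VM-Network',
    portgrp=vim.host.PortGroup.Specification(
        name='VM-Network',
        vlanId=0,
        vswitchName='vSwitch0',
        policy=vim.host.NetworkPolicy(shapingPolicy=shaping)))

Disconnect(si)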

Now what?

Well, as hard as I worked on this series to come up with different kinds of configurations and design scenarios, I enjoyed it just as much! Now I need to move on to another vendor, but without all these mad options. I initially planned to jump straight into the HP realm; however, I found myself involved in two different Cisco UCS vSphere designs lately, so it makes much more sense to me to blog about that platform now. Don’t take my word for it though, I might surprise you with a Dell or Fujitsu series, who knows?!