vSphere Hosts: Blades vs. Rack-mounted Servers

One of the eternal, almost political debates is the one around vSphere host form factor: should you choose blade servers or rack-mounted servers? Both technologies are mature, support high computing power, and are offered more or less equally by hardware vendors. In addition, VMware vSphere supports both and puts no limitation on host form factor. Both now dominate over the tower form factor, which has begun to disappear because of its large footprint and high power usage. Confusing to choose between them, right?

Long story short, both options have their own pros and cons that should be weighed against your (or your customer's) requirements and constraints. The comparison below summarizes the main differences between the two form factors, aspect by aspect. It's not vendor-specific; other differences may exist depending on the vendors you're comparing.

Cost
- Blade servers: Cost doesn't increase linearly. It increases little for a single blade, then jumps sharply (roughly $30k) for each new chassis; hence, many blades should be purchased with the first chassis to justify its initial price. However, cost savings are substantial on switches, FC fabrics, cabling, and power equipment.
- Rack-mounted servers: Cost increases linearly with a low entry point; you can purchase even a single rack server. Additional cost is needed for switch ports, FC fabric ports, cabling, and power equipment for each server.

Availability
- Blade servers: Some single points of failure exist; failure of a chassis or midplane, although extremely rare, brings down many servers at once.
- Rack-mounted servers: No shared single point of failure short of an entire rack failing (extremely rare). Failure of a single rack server takes down only that host.

Manageability
- Blade servers: Require cooperation between different teams (network, storage, etc.) as well as deep knowledge of that vendor's chassis management tools.
- Rack-mounted servers: Well-known entities whose management is relatively easy compared to blade chassis and blades. The usual management separation between teams remains.

Networking configuration & cabling
- Blade servers: Require less cabling, but their networking is harder to configure and manage.
- Rack-mounted servers: Require more cabling, but their networking is easier to configure and manage.

Performance
- Blade servers: Less space per server, and hence lower RAM capacity and fewer I/O and disk slots; still, a fully populated chassis occupying a few U can provide more computing power than a group of rack servers occupying a full rack.
- Rack-mounted servers: Ample internal space, and hence many RAM, PCI card, and disk slots.

Security
- Blade servers: Enforce the use of VLANs for logical separation between networks; sometimes cannot be used when physical separation is required.
- Rack-mounted servers: May be required when physical separation of workloads or network traffic is mandated.

Upgrades & scalability
- Blade servers: Quite limited; you're locked into a couple of blade generations until you replace the entire chassis, and adding modules to a blade is very limited (though rarely needed).
- Rack-mounted servers: Hard to match in upgradability and scalability; any hardware component (CPU, RAM, I/O card, etc.) can be upgraded individually.

Density
- Blade servers: High number of servers in a small footprint, usually a few U.
- Rack-mounted servers: Low density.

Environmental issues
- Blade servers: Low power consumption per blade and per chassis, but a full chassis concentrates heat in a small area and therefore requires concentrated cooling.
- Rack-mounted servers: Higher power usage per server. A rack holds a limited number of servers, so a full rack produces less heat than a full blade chassis and requires less cooling.
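The non-linear blade cost curve described above can be made concrete with a small sketch. All figures below are illustrative assumptions (apart from the rough $30k chassis figure mentioned above): blades per chassis, per-blade and per-server prices are placeholders to replace with your vendor's actual quotes.

```python
# Illustrative break-even sketch for the cost comparison above.
# Every constant here is a hypothetical assumption except the ~$30k
# chassis figure from the table; plug in your vendor's real quotes.

CHASSIS_COST = 30_000          # per blade chassis (approximate, from the table)
BLADES_PER_CHASSIS = 16        # assumption: typical half-width blade capacity
BLADE_COST = 5_000             # assumption: per-blade price
RACK_SERVER_COST = 7_000       # assumption: comparable 1U/2U rack server
RACK_PER_SERVER_INFRA = 1_500  # assumption: switch/FC ports, cabling, PDUs

def blade_total(n):
    """Total blade cost: jumps by a whole chassis every 16 blades."""
    chassis = -(-n // BLADES_PER_CHASSIS)  # ceiling division
    return chassis * CHASSIS_COST + n * BLADE_COST

def rack_total(n):
    """Total rack-server cost: strictly linear in the server count."""
    return n * (RACK_SERVER_COST + RACK_PER_SERVER_INFRA)

for n in (1, 8, 16, 30):
    print(f"{n:3d} hosts: blades ${blade_total(n):>9,}  racks ${rack_total(n):>9,}")
```

With these hypothetical numbers, a single blade is far more expensive than a single rack server, and the blade option only pulls ahead once the first chassis is well populated, which is exactly the "doesn't increase linearly" point in the table.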

Conclusion:
Both are strong, valid options. As nothing is perfect, both rack-mounted and blade servers have their own pros and cons.
Define your (or your customer's) requirements and constraints, and then decide, according to the case, which option suits you (or your customer).
Finally, I’ll conclude with a quote from VMware vSphere Design, 2nd Edition (Sybex), by Forbes Guthrie, Scott Lowe, and Kendrick Coleman:

“Both blades and rack-mounted servers are practical solutions in a vSphere design. Blades excel in their ability to pack compute power into datacenter space, their cable minimization, and their great management tools. But rack servers are infinitely more expandable and scalable; and they don’t rely on the chassis, which makes them ultimately more flexible.
Rack servers still dominate, holding more than 85% of the worldwide server market. Blades can be useful in certain situations, particularly in large datacenters or if you’re rolling out a brand-new deployment. Blades are inherently well suited to scale-out architectures, whereas rack servers can happily fit either model. Blades compromise some elements. But if you can accept their limitations and still find them a valuable proposition, they can provide an efficient and effective solution.”

2 Comments on vSphere Hosts: Blades vs. Rack-mounted Servers

  1. To some extent I don’t agree with your analysis, if you allow me. 1- Blades are easier to manage and operate; everybody knows that, and even the book you quote says so:
    ===
    Blades excel in their ability to pack compute power into datacenter space, their cable minimization, and their great management tools
    ===
    As for VLAN usage, you can always have physical separation. My comment is that it’s old school, yet sometimes required, but it can be implemented depending on your blade technology.

    IMHO, you don’t save much on blade servers from the FC fabric side; you might still need FC, or a connection to an FCoE/FC-enabled upstream switch, which is an added cost and not cheap. Again, it depends on the implementation/vendor.

    In the performance section, I don’t agree here: a full-width blade server can do miracles as long as you don’t need locally attached storage.

    I don’t see why blade servers would require cooperation between different teams, or what advantage a rack server has here; you still have FC/FCoE and networking as before, even with converged NICs. It’s just a different implementation of the same technology, IMHO.

    As for the simplified comparison between concentrated heat and cooling: I don’t see it as valid with modern DCs and modern ventilation. Also, realize that in very large environments blades excel in rack space per unit of compute power, considering that both options end up at roughly the same cost from a compute/rack-space/utilization point of view.

    Conclusion: it depends on the vendor and the implementation; there is no single straight answer to this puzzle.
    Great article, keep it up!

    • Dear Mahmoud,
      Thanks for your comment.
      IMHO, manageability of a blade chassis requires deep knowledge of that vendor’s management tools. In a blade chassis, you may need to provision vNICs/vHBAs, assign VLANs, create server profiles, etc. Compared to a rack-mounted server, where almost no such management is required, that’s a huge manageability load, especially if you come from a rack-mounted environment.

      Also, on the cost part, the saving comes from the reduced cabling and port density required for the same number of servers. Suppose you have an environment that requires 30 servers: you get them in 2-3 blade chassis or in 30 distinct rack-mounted ones. Imagine the difference in cabling and switch ports between the two cases, assuming you use the same storage technology, and add to this the cost of racks and so on.
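      The port arithmetic in this 30-server example can be sketched roughly. The per-server and per-chassis cable counts below are assumptions for a typical redundant setup, not vendor figures:

```python
# Rough cable/port count for the 30-server example above.
# Cable counts are illustrative assumptions for a redundant setup:
# 2x LAN + 2x FC + 1x mgmt per rack server, versus a handful of
# consolidated uplinks per blade chassis.

SERVERS = 30
RACK_CABLES_PER_SERVER = 5   # assumption: 2 LAN + 2 FC + 1 mgmt
CHASSIS_COUNT = 3            # from the example: 30 servers in 2-3 chassis
UPLINKS_PER_CHASSIS = 8      # assumption: 4 LAN + 2 FC + 2 mgmt uplinks

rack_cables = SERVERS * RACK_CABLES_PER_SERVER
blade_cables = CHASSIS_COUNT * UPLINKS_PER_CHASSIS

print(f"rack-mounted: {rack_cables} cables and switch ports")  # 150
print(f"blade chassis: {blade_cables} cables and switch ports")  # 24
```

      Even with generous per-chassis uplink counts, the consolidated blade cabling stays an order of magnitude below the per-server rack cabling, which is the saving described above.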

      On the VLANs part, there are many cases where your customer requires a physical air gap between networks. That’s somewhat hard to achieve in a blade chassis; nearly all blade vendors support such configurations now, but it’s still hard to do. Refer, for example, to the disjointed DMZs example in Cisco UCS. With rack-mounted servers, you just set the VLANs on different NICs and that’s it.

      On the cooling and heat part, again let’s follow the same 30-server example: those 30 servers can be put in 3 blade chassis in a single rack, which concentrates the heat in a very small area, while the rack-mounted servers would span 2 racks, which helps distribute the heat. Also, a full blade chassis is known to generate less heat than the same number of rack-mounted servers with the same hardware configuration. Add to this that some blade chassis can put power caps on their blades when they’re idle.

      On cooperation between teams, this point may be somewhat vague and really comes from my own practical experience. When you deploy rack-mounted servers, you just deploy them and give the other teams the required info (IPs, MACs, WWNNs, WWPNs, etc.), and they do whatever they need to bring the servers up and running. For a blade chassis, you need a network guy to help you configure the network modules, a storage guy to help you configure the storage part, and so on. It’s not that easy.

      In the end, it all depends on your experience and your point of view. I’m now in the blades camp, although my post may seem the opposite 🙂
      Thanks again for your time, and if you liked my blog, vote for it to be the No. 1 vBlog at: info.infinio.com/topvblog2015
      It’s also nominated for Best New (Starter) Blog of 2014 and Best Independent Blog.
