Jasonnash · April 15, 2010

The Case for the Vblock

Yesterday after Tech Field Day there was a very good debate on Twitter, and here, about what a Vblock is and, more importantly, what it isn't. Lately I've been so busy I haven't blogged, but this exact topic was on my short list for when time became available. It's a discussion I've had with a number of customers lately. Without a doubt, interest in Vblock is increasing. With interest come questions, and to be honest there isn't a lot of concrete information out there on Vblocks for interested customers. Unless you talk to one of the big three or a partner, you really only get to see pieces of the idea.

There is no real magic in a Vblock. The beauty is that it's an architected and tested environment where there is very little guesswork involved in the hardware platform. The problem with a pre-architected system is that assumptions are made to create the design. With assumptions come limitations. A good example of that is what I'm using to write this post, an Apple iPad. To maintain the experience that Apple wanted, some things had to be restricted. Would I love to run any application or attach any peripheral? Absolutely, but at that point Apple can no longer guarantee that my experience using the device will be where they want it. So I decide what is more important to me: customization or an assured level of experience and performance.

That is the idea behind the Vblock architectures. The three different types of Vblocks each have a minimum and a maximum BoM (bill of materials). The VCE group really wants customers to stay within those guidelines. There can be exceptions if the customer has data to support them, but you can't stray wildly from the pre-determined configurations. The idea is that the compute blocks are balanced with the storage blocks, which in turn are balanced with network connectivity to handle the expected throughput requirements. Once you size a sub-block, such as the compute blades, up to its maximum, the expectation is that you bring in another Vblock to continue scaling. This is where the conversation can take an ugly turn, depending on the audience.

Techies, like we always do, want to know why they can't just continue to add UCS chassis and blades for more compute power, or more spindles for more IOPS. In most circumstances it's a valid question. But remember, the Vblock is a set of components designed and spec'd to work and scale together. If you start scaling one piece independently of the rest of the ecosystem you run into problems, and that's exactly what the Vblock is trying to fix. It's trying to end the game of whack-a-mole with bottlenecks in the environment. How many times have you added capacity to one area only to reveal a bottleneck or limitation in another? You end up chasing these, and the project cost can escalate very quickly.

What is interesting is that, usually, the higher up in an organization you are communicating, the better the Vblock conversation goes. Remove the detailed technical questions and the value of the Vblock idea really shines. You get a known "product" from trusted sources. You get known costs today as well as known costs for future expansion. It greatly reduces the organization's risk from unknown infrastructure expenses. How many people have worked in or with companies that had a certification department, or even someone with a spreadsheet of drivers, firmware, and operating systems to track tested variations? EMC has had the E-Lab for a while to do this with their products, but now with the Vblock you get it for the entire environment. Pre-tested and pre-certified, without the organization having to manage that. CIOs love this, and rightfully so. Combine this with the single source of support, reduced risk, and known expansion costs and it's a great combination.

So, is the Vblock the right answer for your organization? In my usual fashion, the answer is that it depends. Do you want exact customization, or do you want infrastructure that can be deployed very quickly with low risk? There is customization within the Vblock BoMs, though admittedly limited. It comes down to the needs and wants of the organization. The upside is that if your organization finds the limitations acceptable, there is really no extra expense to get a Vblock "certified" configuration over a non-Vblock, and you get all the benefits, including consolidated support from the VCE group, pre-architected systems, and component certification. One other thing to keep in mind is that Vblocks are relatively new. I expect we'll eventually see more variations in BoMs and better methods for gathering information to qualify environments for Vblock status. Like everything else in technology, it will continue to evolve.
