The VCE Model: Yes, it is different

This post comes out of a slide deck I authored last week for a partner event. I decided I was going to try to illustrate why the VCE model really is such a different approach from other datacenter and private cloud models.

Normally my blog is light on vendor-specific commentary. I see myself more as a virtualization geek who just happens to work for an awesome company (EMC) than a hardcore analyst/blogger. But I have seen so much messaging lately that distorts the VCE message that I really felt the need to offer my own perspective.

Part 1: Establish the motivation

I do not think this is a point that has to be argued. Virtualization and cloud computing have been the leading topics of discussion in the IT world. The results of this may still be a matter of opinion. The enabling technology of virtualization (see: logical abstraction) creates brand new arguments on how to delineate, consume, and secure (public vs. private, IaaS vs. SaaS, etc.). But without diving through the mess of concepts and ideas, let us look first at how we got to this point.

In essence it all started with a simple idea: we are not using x86 infrastructure resources effectively. In the old days (see: not that long ago) life was a little tougher. The Operating System (OS) and the Application (APP) that resided on it were both rather tightly coupled to the underlying hardware. This meant moving the OS/APP to another piece of hardware frequently required quite a bit of work. You had to reinstall everything, migrate the data, and then test/validate to confirm you did it right.

Add in the fact that intermixing applications within a single OS instance was a risk in itself, and most IT shops in the x86 world used single pieces of iron (server hardware) to host single OS/APP pairs. Naturally this led to terrible efficiency and terrible operational expenses as shops had to power, cool, and manage rows and rows of x86 servers running at 8% utilization.
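To put a rough number on why that hurts, here is a back-of-envelope consolidation sketch. The ~8% utilization figure is from the paragraph above; the server count and the target host utilization are hypothetical illustrations, not measured data:

```python
import math

# Back-of-envelope consolidation estimate.
physical_servers = 100      # hypothetical one-app-per-box fleet
avg_utilization = 0.08      # the ~8% average utilization cited above
target_utilization = 0.60   # hypothetical safe ceiling for a virtualized host

# Total useful work, expressed in "fully utilized server" units.
useful_work = physical_servers * avg_utilization

# Virtualized hosts needed to carry that same work at the target utilization.
hosts_needed = math.ceil(useful_work / target_utilization)

print(f"{physical_servers} servers -> {hosts_needed} virtualized hosts")
```

Even with these made-up numbers, roughly 100 boxes of power, cooling, and management collapse into the low teens, which is the efficiency argument in a nutshell.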

This is where VMware made their initial mark. They took our immense datacenters full of x86 and decoupled the hardware from the OS/APPs (abstraction). The initial benefit was the ability to stack multiple OS/APP instances on single pieces of hardware, avoiding the operational risk of intermixing while driving up utilization. Soon after, VMware developed additional uses of this logical abstraction, allowing vMotion, HA, DRS, and many others to solidify their place in a modern datacenter. Now you could move an OS/APP from hardware to hardware without worrying about hardware coupling, testing cycles, or even affecting end users' consumption of the application (no downtime).

Initially the arguments against VMware's technology centered strongly on the stability of the concept. Fear, uncertainty, and doubt abounded because this concept was so new to a newly created pool of x86 experts. This is not unlike much of the Cloud discussion happening today (with our newly minted virtualization experts). Don't worry, I am poking fun at myself also.

[Figure: sw_01]

What is important to realize is that abstraction is not limited to the x86 compute construct. VMware also abstracted the ports on our servers. Now twelve 1Gb network interface cards could represent a hundred ports and multiple VLANs, which caused a dramatic shift of the datacenter's physical network edge into a logical one. Centralization of configuration with virtual distributed switching allowed for clustering of these logical constructs and enabled pooling that had never before existed this close to the actual compute instance.

Quickly after the adoption of virtualization started to make critical waves, we saw an emergence of the same model in other areas. A really good example of this is the convergence of fabric. The old way was to use separate fabrics: Ethernet (primarily TCP/IP) for application traffic and Fibre Channel for storage access. Fibre Channel over Ethernet (FCoE) brought to datacenter networking a model extremely similar to what VMware brought to x86 via virtualization. You take a very low point in the stack, in this case Layer 2 Ethernet, and use abstraction to decouple the stack above. Ethernet already possessed a mechanism for abstracting logical networks using VLAN tagging. FCoE adds to this mechanism by allowing the tagging of FIP and FCoE frames on the same stack. This means that a single root layer (like a hypervisor) can host not only multiple logical instances (VLANs) but also multiple protocol instances. The addition of the Data Center Bridging extensions to Ethernet brings similar service controls to allow for multi-hop and scale-out designs.

What this meant in operational terms is extremely similar to the x86 virtualization effect. Now instead of running multiple cables of different sizes, speeds, and directions, you can run fewer, larger links that carry multiple logical connections within them. With platforms that leverage this, like Cisco's UCS or Nexus series, you gain the ability not only to make operational changes quickly, but also to use new methods of deployment via instantiation and decoupled identities (service profiles). The operational benefits are both obvious and appreciable from multiple angles. Everything from orchestration to expansion can be done using this new abstraction in ways that would have been much more difficult before.

[Figure: fcoe_01]

All of these applications of logical abstraction and instantiation (i.e. virtualization) create an interesting side effect. In the beginning a VM was tightly coupled to its resources (storage, compute, network):

[Figure: cloud_01]

In this new world of abstraction, multiple resources of each type can be consumed, migrated to, and pooled:

[Figure: cloud_02]

Or in another way to draw this:

[Figure: cloud_03]

This is the essence of how the Cloud idea was formed. It is the model of abstraction applied as a baseline across the lower layers to enable consumption of multiple unique resources as a pool. As stated above, this does a great many things: it promotes higher efficiency, reduces operational overhead, and enables the ability to redefine both business and service models in new and interesting ways. And for an innovation fanatic, it also allows leveraging the Cloud OS (hypervisor) to do fascinating things thanks to a new single point of orchestration and a lack of physical constraints (see: Interactive Cloud).

Part 2: How VCE is built for this new paradigm

Now that we have established the motivation behind the Cloud/Virtualization idea, I want to explain why the VCE initiative is really a unique model for this movement. To do that, let's talk about the problems behind building a Cloud.

Really, building a virtual host and hosting virtual machines (VMs) on it is quite simple. All you need is a couple pieces of media and some recent hardware, and you have a full-fledged mini-cloud spinning away.

The problems arise when you take this concept and scale it into clusters of virtualized resources. All of a sudden you have to manage farms of compute that will be running at very high efficiency ratios; the more you can run, the more you save. Added to that, you have a requirement for shared storage to truly leverage the advanced operational benefits of clustering. Also needed is a well-connected Layer 2 network to allow for cross communication between nodes in the cluster (ESXi hosts). And a converged network would be nice too, since it better matches long-term operational needs. Pretty soon this Cloud project is looking much like all the other datacenter infrastructure projects: you have many components that must not only work well with each other but also satisfy the specific need you are building for.

Historically the approach to solving this was quite simple and effective: the good old Reference Architecture. This was a blueprint, recipe, or how-to describing what parts in what configuration would produce a specific infrastructure set. This method has been used countless times with decent success. The reference architecture allows a company or group of companies to solve a specific problem caused by tightly integrated (coupled) requirements.

Reference Architectures, while giving a recipe for what the cookies will look like, still require specific steps to accomplish. Namely:

  • Build

Having a recipe does not mean you do not have to cook. The design must still be put together, usually onsite.

  • Test

Even though the recipe states that everything should work, any datacenter professional will still vigorously test each part of the build.

  • Validate

Likewise, the overall expected output of the design must be validated to ensure that the requirements are met before the resources are used.

The interesting effect of this is based on the lifecycle of the infrastructure. Infrastructure has a defined lifecycle, usually 3-5 years for most organizations. The time to do the Build, Test, and Validate, along with procurement and logistics, eats into the usable lifecycle of the end result, reducing its overall utility.
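A minimal sketch of that erosion, using hypothetical numbers (a 4-year lifecycle and 6 months of procurement/build/test/validate; neither figure is from any vendor):

```python
# How setup time erodes the usable infrastructure lifecycle.
# Both numbers are hypothetical illustrations, not vendor figures.
lifecycle_months = 48   # a 4-year refresh cycle, mid-range of the 3-5 years above
setup_months = 6        # assumed procurement + build + test + validate time

usable_months = lifecycle_months - setup_months
lost_fraction = setup_months / lifecycle_months

print(f"Usable: {usable_months} of {lifecycle_months} months; "
      f"{lost_fraction:.1%} of the lifecycle is consumed before first use")
```

With these assumptions, over a tenth of the asset's life is spent before it delivers any utility, which is the cost the rest of this section is about reducing.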

[Figure: ref_02]

Another way to picture this is to think of a Kit Car. Kit Cars are ordered as parts from a manufacturer with a recipe to build them into a usable vehicle. The comparisons to reference architecture are compelling. With Kit Cars you must:

  • Build using parts

The ultimate assembly is your responsibility.

  • Spend time to assemble

The labor/time to put this together via the instructions is on your timeline.

  • Validate components

Validating the operation and integration of components is done by you after/during assembly.

Kit Cars can be a ton of fun and in themselves, like reference architectures, are not wrong as a model. The one glaring thing from the above is that a good portion of the risk is assumed by the consumer. Any warranty is going to be based on individual components, much like a reference architecture. While the original ordering company may assist in communication, they cannot treat the Kit Car as a single unit to support.

The question is: on your average commute, how many Kit Cars do you see? The answer is likely none, or very few. Why is that? Why does this model not fit our modern consumption of personal transportation? To answer that, let's look at what an average consumer's needs would be when purchasing a vehicle:

  • #1 Something that gets me there

Very few cars are bought without the intention of getting someone to school, work, or shopping without issue. While requirements around how fast, how cheap, or how safe can change, ultimately you need a car that does its job without issue. And proven measurements such as MPG, horsepower, and cubic feet of room are printed on the label.

  • #2 Will carry enough people/cargo to be economical

An example of the school bus vs. the compact car problem. You will choose a car properly sized to carry you and your cargo/family/friends in as few trips (hopefully one) as possible without becoming inefficiently expensive. Or in simpler terms: the right size for you.

  • #3 Built to work for an expected period of time

When you purchase a vehicle you have an implied expectation for it to function for a period of time. This can be 5-10 years or more depending on the person or vehicle.

In addition, you expect the manufacturer to back this vehicle as a complete unit with a warranty that covers all the above.

From these requirements we can see that the Kit Car is not a great fit for modern needs. It is from this expectation (or the cause of it) that we have the modern manufactured car.

[Figure: car_02]

The modern car is pre-assembled, comes in different pre-determined sizes, and is available right now in your city.

[Figure: vblock_01]

And it is from this model that we can see where the VCE Vblock is based. But let's back up and cover things again with the VCE Vblock in perspective.

If we take our modern vehicle requirements list and contrast it with the Vblock, we see a strong correlation:

  • #1 Something that gets me there → Quantified Performance

The Vblock is designed, built, and tested to deliver specific performance levels. On top of this, solution validation for SAP, VDI, Exchange, and more has been and is being done to give specific performance metrics at the application layer.

  • #2 Will carry enough people/cargo to be economical → Defined Capacity

Vblocks have established capacities and models for sizing, all of which are tested and validated to meet primary design principles.

  • #3 Built to work for an expected period of time → High Availability

Every component of the Vblock is designed for N+1 or 1+1 models. On top of that, pluggable designs and technology for utilizing DR, multiple active sites, and more make availability as a unit the first priority.

As you can see, there is a strong similarity in these models. This is even more apparent when we contrast our Kit Car requirements against the Modern Car/Vblock model:

  • Build using parts → Engineered & Assembled

Vblocks are engineered and assembled as a unit. The end goal of a Vblock is a functioning product before it is in the consumer’s hands. Much like how a new car is a single completely assembled unit ready to be used.

  • Spend time to assemble → Delivered / Fast Consumption

When you purchase a new car you can walk into a dealership and drive out ready to show it off. The Vblock is the same in that it arrives ready to be used. You roll the completely assembled units into the datacenter, connect power, and begin to consume/show off.

  • Validate components → Validated / Documented

Vblock units are tested and validated before they arrive on your floor. Much like the rigorous testing done on a car assembly line floor, the Vblock process delivers a unit validated to provide a documented utility.

And last but not least, the Vblock units are treated as both repeatable and complete. From a support perspective the entire unit is supported for all the above. VCE assumes responsibility for the assembly, performance, availability, and validation just as a modern car manufacturer would.

If we take a look at the infrastructure lifecycle vs. the personal vehicle lifecycle, we see another correlation. The Kit Car does not fit modern car needs because very few people can afford, or desire, to spend a chunk of the car's useful lifetime ordering, building, assembling, and testing:

  • Build

  • Test

  • Validate

Instead we purchase a vehicle with the expectation of gaining utility quickly. The same is true of the Vblock, in that the goal from ordering to delivery is only 30 days. This drastically reduces the upfront cost of the infrastructure lifecycle and, coupled with the Cloud approach to consumption, leads to a powerful new model for building infrastructure.

  • Decide / Consume

[Figure: ref_03]

Wrapping it up

What is interesting about the new Cloud model is that abstraction has reduced the importance of infrastructure specifics that used to matter so much. A lot of the reasoning for using a reference architecture was that you needed a recipe for specific application models that could vary widely. With logical abstraction, the innovation of the individual components remains important (FAST, Memory Extension, DRS), but the pieces themselves become more decoupled and therefore easier to consume. This ultimately opens up a market for quickly consumable infrastructure designed for the cloud, treated as repeatable units with corresponding support as a unit.

That is, in essence, why the VCE Vblock brings something new to the table. In my opinion it is uniquely positioned to provide a service that reference architectures could not in the past. And with the enablement of the Cloud model approach, the VCE Vblock also offers a faster, more efficient, and more reliable way to enter this new era.

Sorry for the length of this post. I strongly feel that there is a misperception of the VCE model and wanted to write something to bring some clarity. I promise I will keep it short next time. If you have questions, comments, or criticisms feel free to comment below. I always appreciate your feedback.

.nick


Comments

  1. Good overview Nick, and spot on. One of the things we try and educate customers on is the amount of “non-recurring engineering time” that is wasted with most reference architectures. Spend engineering time taking care of the business and not building out new infrastructure!

  2. Great post Nick, I never saw such a clear explanation of the VCE model. I have just a "business" remark. When you get a 'new' car, the biggest problem is to sell the 'old' car. I am afraid that in the same way, the biggest problem is not getting a new Vblock but "selling" the old datacenter.

  3. Recently, Acadia changed its name and business model. That was a good idea, IMHO. The contention I've heard from the channel was that Acadia competed with them. Now, that aspect is removed, at least on paper. I say this in all honesty, because is it not true that building and shipping a box with the entire stack, built and installed, actually competes with much of a partner's value add? Isn't that, in large measure, what integrators are all about?

    I find that the channel partners are quite good at it, and are capable of fine-tuning a configuration and matching it precisely to requirements. Taking that value off their plate is competitive, don't you think? Just curious.

    Mike@NetApp

    • Mike – Hope you are doing well; we haven't spoken in a while. I hope everything in RTP is going well for you and the family. I will address the question, which I think is an excellent one. Having worked in the partner community for a decade, evangelized the NetApp/Cisco value proposition to the channel (when I was there), and now being responsible for worldwide channels engineering at VCE, this is something we are especially in tune with. Channel partners deal in revenues, costs, and profits, as you know. In a typical sale there are profits on the hardware sale and then there are profits on the services. Profits on services actually constitute the highest percentage of profits on a typical partner transaction. Now when you break apart the costs in services, the highest costs are in equipment assembly: managing and scheduling resources for assembly, waiting for equipment delivery, having resources show up and equipment from x, y, or z company not be there. Additionally, in converged infrastructure solution deployment, the partner is asked to build some of the most complex, fast-paced, ever-changing technologies that any manufacturer in this space provides. This all translates into frequently evolving best practices, and finger pointing as to which driver is compatible with which ESX update, which ESX update is compatible with which OS release on said storage array, and of course which firmware and OS releases are best suited for the network and UCS infrastructure. This all works itself into eroding service margins for partners in the business of assembly. At VCE we build the infrastructure baseline for the partner and let them deploy it to the customer premises. They focus their high-cost and high-value professional services resources on workload migration, application deployments, data protection design, high availability strategies, and business continuity plans. All of these services, which are highly profitable, predictable, and consultative to a customer, they do, not us.

      At VCE we believe that virtualization is causing a fundamental shift in the life cycle of services for partners. Virtualization abstracts complexity away from the infrastructure to enable organizations to more rapidly deploy solutions. VCE is enabling partners and their services offerings to get up the stack faster and focus on the apps and the truly business impacting offerings. We want our partners to increase their value, profits and be laser focused on the customer and their business needs.

      That’s VCE’s mission…

      GameOn

      Trey@VCE
