This post grew out of a slide deck I authored last week for a partner event. I decided to try to illustrate why the VCE model really is such a different approach from other datacenter and private cloud models.
Normally my blog is light on vendor-specific commentary. I see myself more as a virtualization geek who just happens to work for an awesome company (EMC) than a hardcore analyst/blogger. But I have seen so much messaging lately that distorts the VCE message that I really felt the need to offer my own perspective.
Part 1: Establish the motivation
I do not think this is a point that has to be argued. Virtualization and cloud computing have been the leading topics of discussion in the IT world. The results of this may still be a matter of opinion. The enabling technology of virtualization (see: logical abstraction) creates brand new arguments on how to delineate, consume, and secure (public vs. private, IaaS vs. SaaS, etc.). But without diving through the mess of concepts and ideas, let us look first at how we got to this point.
In essence it all started with a simple idea: we are not using x86 infrastructure resources effectively. In the old days (see: not that long ago) life was a little tougher. The Operating System (OS) and the Application (APP) that resided on it were both rather tightly coupled to the underlying hardware. This meant moving the OS/APP to another piece of hardware frequently meant quite a bit of work. You had to reinstall everything, migrate the data, and then test/validate to confirm you did it right.
Add in the fact that intermixing applications within a single OS instance was a risk in itself, and most IT shops used single pieces of iron (server hardware) to host single OS/APPs within the x86 world. Naturally this led to terrible efficiency and terrible operational expenses when shops had to power, cool, and manage rows and rows of x86 servers running at 8% utilization.
This is where VMware made their initial mark. They took our immense datacenters full of x86 and decoupled the hardware from the OS/APPs (abstraction). The initial benefit was the ability to stack multiple OS/APP instances on a single piece of hardware, avoiding operational risk and driving up utilization. Soon after, VMware developed additional uses leveraging this logical abstraction, allowing VMotion, HA, DRS, and many others to solidify their place in a modern datacenter. Now you could move an OS/APP from hardware to hardware without worry of hardware coupling, testing cycles, or even affecting end users' consumption of the application (no downtime).
Initially the arguments against VMware's technology centered strongly on the stability of the concept. Fear, uncertainty, and doubt abounded because this concept was so new to a newly created pool of x86 experts. This is not unlike what you are seeing with much of the Cloud discussion happening today (with our newly minted virtualization experts). Don't worry, I am poking fun at myself also.
What is important to realize is that abstraction is not limited to the x86 compute construct. VMware also abstracted the ports on our servers. Now twelve 1Gb network interface cards could represent a hundred ports and multiple VLANs, which causes a dramatic shift of the datacenter's physical network edge into a logical one. Centralization of configuration with virtual distributed switching allowed for clustering of these logical constructs and enabled pooling that had never existed before this close to the actual compute instance.
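The many-logical-ports-over-few-NICs idea can be sketched as a toy model. All class and field names here are illustrative, not a real vSphere API; it only shows how many VM ports and VLANs share a small set of physical uplinks:

```python
class DistributedVirtualSwitch:
    """Toy model of a virtual distributed switch (illustrative only)."""

    def __init__(self, uplinks):
        self.uplinks = list(uplinks)   # the few physical NICs
        self.port_groups = {}          # port group name -> VLAN id

    def add_port_group(self, name, vlan_id):
        self.port_groups[name] = vlan_id

    def connect_vm(self, vm_name, port_group):
        # Every VM gets its own logical port; all ports share the uplinks.
        vlan = self.port_groups[port_group]
        uplink = self.uplinks[hash(vm_name) % len(self.uplinks)]
        return {"vm": vm_name, "vlan": vlan, "uplink": uplink}


# Twelve physical NICs backing any number of logical ports:
dvs = DistributedVirtualSwitch(uplinks=[f"vmnic{i}" for i in range(12)])
dvs.add_port_group("web", vlan_id=10)
dvs.add_port_group("db", vlan_id=20)
port = dvs.connect_vm("web-01", "web")
```

The point of the sketch is that the VM's connection is defined against a logical construct (the port group), so adding a hundredth logical port is a dictionary entry, not a cabling change.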
Quickly after the adoption of virtualization started to make critical waves, we saw an emergence of the same model in other areas. A really good example of this is the convergence of fabrics. The old way was using separate fabrics: Ethernet (primarily TCP/IP) for application traffic and Fibre Channel for storage access. Fibre Channel over Ethernet (FCoE) brought to datacenter networking a model extremely similar to the one VMware brought to x86 via virtualization. You take a very low point in the stack, in this case Layer 2 Ethernet, and use abstraction to decouple the stack above. Ethernet already possessed a mechanism for abstracting logical networks using VLAN tagging. FCoE adds to this mechanism by allowing the tagging of FIP and FCoE frames on the same stack. This means that a single root layer (like a hypervisor) can host not only multiple logical instances (VLANs) but also multiple protocol instances. The addition of the Data Center Bridging extensions to Ethernet brings similar service controls to allow for multi-hop and scale-out designs.
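The tagging mechanism can be illustrated with a minimal frame classifier. The EtherType values (0x8100 for an 802.1Q VLAN tag, 0x8906 for FCoE, 0x8914 for FIP) are the real registered values; the parsing itself is a simplified sketch, not a full frame decoder:

```python
import struct

# Registered EtherType values (the parsing below is simplified):
ETHERTYPES = {
    0x0800: "IPv4",
    0x8100: "802.1Q VLAN tag",
    0x8906: "FCoE",
    0x8914: "FIP",  # FCoE Initialization Protocol
}

def classify_frame(frame: bytes):
    """Return (vlan_id_or_None, payload_protocol) for a raw Ethernet frame."""
    ethertype, = struct.unpack_from("!H", frame, 12)  # after dst + src MACs
    vlan_id = None
    if ethertype == 0x8100:                           # 802.1Q tag present
        tci, = struct.unpack_from("!H", frame, 14)
        vlan_id = tci & 0x0FFF                        # low 12 bits = VLAN id
        ethertype, = struct.unpack_from("!H", frame, 16)  # inner EtherType
    return vlan_id, ETHERTYPES.get(ethertype, hex(ethertype))


# A VLAN-tagged FCoE frame: two MAC addresses, an 802.1Q tag for
# VLAN 100, then the FCoE EtherType.
frame = b"\x00" * 12 + struct.pack("!HHH", 0x8100, 100, 0x8906)
print(classify_frame(frame))  # -> (100, 'FCoE')
```

A storage frame and an IP frame differ only in the EtherType field, which is exactly why one wire (and one switch ASIC) can carry both protocol stacks side by side.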
What this meant in operational terms is extremely similar to the x86 virtualization effect. Now instead of running multiple cables of different sizes, speeds, and directions, you can run a single, larger medium that carries multiple logical connections within it. With platforms that leverage this, like Cisco's UCS or Nexus series, you gain the ability not only to make operational changes quickly, but also to use new methods of deployment based on instantiation and decoupled identities (service profiles). The operational benefits of this are obvious and can be appreciated from multiple angles. Everything from orchestration to expansion can be done using this new abstraction in ways that would have been much more difficult before.
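The decoupled-identity idea can be sketched roughly like this. The class and fields are hypothetical, not Cisco UCS Manager's actual object model; the sketch only shows that the server's identity lives in a profile, not in the sheet metal:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    """Hypothetical sketch of a service profile (illustrative names)."""
    name: str
    mac_addresses: list       # LAN identity travels with the profile
    wwpns: list               # Fibre Channel port identities
    boot_order: list
    associated_blade: str = None

    def associate(self, blade_id):
        # The blade inherits the profile's identity, not the other way
        # around, so failed hardware can be swapped without touching
        # SAN zoning or LAN configuration.
        self.associated_blade = blade_id


profile = ServiceProfile(
    name="esx-host-01",
    mac_addresses=["00:25:b5:00:00:01"],
    wwpns=["20:00:00:25:b5:00:00:01"],
    boot_order=["san", "local-disk"],
)
profile.associate("chassis1/blade3")
profile.associate("chassis2/blade5")  # move the identity to new hardware
```

Because the MACs and WWPNs are properties of the profile, re-associating it to a different blade is an instantiation operation rather than a re-cabling and re-zoning project.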
All of these applications of logical abstraction and instantiation (i.e. virtualization) create an interesting side effect. In the beginning a VM was tightly coupled to its resources (storage, compute, network):
In this new world of abstraction, multiple resources of each type can be consumed, migrated to, and pooled:
Or in another way to draw this:
This is the essence of how the Cloud idea was formed. It is the model of abstraction applied as a baseline across the lower layers to enable consumption of multiple unique resources as a pool. As stated above, this does a great many things. It promotes higher efficiency, reduces operational overhead, and enables the ability to redefine both business and service models in new and interesting ways. And as an innovation fanatic, I find it also allows leveraging the Cloud OS (hypervisor) to do fascinating things because of a new single point of orchestration and the lack of physical constraints (see: Interactive Cloud).
Part 2: How VCE is built for this new paradigm
Now that we have established the motivation behind the Cloud/Virtualization idea, I want to explain why the VCE initiative is really a unique model for this movement. To do that, let's talk about the problems behind the building of a Cloud.
Really, building a virtual host and hosting virtual machines (VMs) on it is quite simple. All you need is a couple pieces of media and some recent hardware, and you have a full-fledged mini-cloud spinning away.
The problems arise when you take this concept and scale into clusters of virtualized resources. All of a sudden you have to manage farms of compute that will be running at very high efficiency ratios. The more you can run, the more you save. Added to that, you have a requirement for shared storage to truly leverage the advanced operational benefits of clustering. Also needed is a well-connected Layer 2 network to allow for cross communication between nodes in the cluster (ESXi hosts). And a converged network would be nice also, since it can better match long-term operational needs. Pretty soon this Cloud project is looking much like all the other datacenter infrastructure projects. You have many components that must not only work well with each other but also satisfy the specific need you are building for.
Historically the approach to solving this was quite simple and effective: the good old Reference Architecture. This was a blueprint, recipe, or how-to on what parts in what configuration would produce a specific infrastructure set. This method has been used countless times with decent success. The reference architecture allows a company or group of companies to solve a specific problem caused by tightly integrated (coupled) requirements.
Reference Architectures, while giving a recipe of what the cookies will look like, still require specific steps to accomplish. Namely:
Having a recipe does not mean you do not have to cook. The design must still be put together, usually onsite.
Even though the recipe states that everything should work, every datacenter professional will still vigorously test each part of the build.
Likewise, the overall expected output of the design must be validated to ensure that the requirements are met before the resources are used.
The interesting effect of this is based on the lifecycle of the infrastructure. Infrastructure has a defined lifecycle, usually 3-5 years for most organizations. The time to do the build, test, and validate, along with procurement and logistics, eats into the usable lifecycle of the end result, reducing its overall utility.
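A back-of-the-envelope calculation makes the effect concrete. The numbers here are assumptions: the 4-year lifecycle is picked from the 3-5 year range above, and the 120-day reference-architecture deployment figure is purely illustrative, not measured:

```python
def useful_fraction(lifecycle_days, deploy_days):
    """Fraction of the asset's lifecycle left after deployment overhead."""
    return (lifecycle_days - deploy_days) / lifecycle_days


lifecycle = 4 * 365        # assumed: a 4-year refresh cycle
reference_arch = 120       # assumed: procure + build + test + validate
pre_integrated = 30        # a 30-day ordering-to-delivery goal

print(round(useful_fraction(lifecycle, reference_arch), 3))  # 0.918
print(round(useful_fraction(lifecycle, pre_integrated), 3))  # 0.979
```

The absolute percentages matter less than the direction: every day spent assembling and validating is a day subtracted from a fixed-length useful life.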
Another way to picture this is to think of a Kit Car. Kit Cars are ordered as parts from a manufacturer with a recipe to build them into a usable vehicle. The comparisons to reference architectures are compelling. With Kit Cars you must:
Build using parts
The ultimate assembly is your responsibility.
Spend time to assemble
The labor/time to put this together via the instructions is on your timeline.
Validating the operation and integration of components is done by you during or after assembly.
Kit Cars can be a ton of fun and in themselves, like reference architectures, are not wrong as a model. The one glaring thing from the above is that a good portion of the risk is assumed by the consumer. Any warranty is going to be based on individual components, much like a reference architecture. While the original ordering company may assist in communication, they cannot treat the Kit Car as a single unit to support.
The question is: on your average commute, how many Kit Cars do you see? The answer is likely none or very few. Why is that? Why does this model not fit our modern consumption of personal transportation? To answer that, let's look at what an average consumer's needs would be when purchasing a vehicle:
#1 Something that gets me there
Very few cars are bought without the intention of getting someone to school, work, or shopping without issue. While requirements around how fast, how cheap, or how safe can change, ultimately you need a car to do its job without issue. And proven measurements such as MPG, horsepower, and cubic feet of room are printed on the label.
#2 Will carry enough people/cargo to be economical
This is the school bus vs. compact car problem. You will choose a car properly sized to carry you and your cargo/family/friends in as few trips (hopefully one) as possible without becoming inefficiently expensive to obtain. Or in a simpler description: the right size for you.
#3 Built to work for an expected period of time
When you purchase a vehicle you have an implied expectation for it to function for a period of time. This can be 5-10 years or more depending on the person or vehicle.
In addition, you expect the manufacturer to back this vehicle as a complete unit with a warranty that covers all the above.
From these requirements we can see that the Kit Car is not a great fit for modern needs. It is from this expectation (or the cause of it) that we have the modern manufactured car.
The modern car is pre-assembled, comes in different pre-determined sizes, and is available right now in your city.
And it is from this model we can see the correlation to what the VCE Vblock model is based on. But let's back up and cover things again with the VCE Vblock in perspective.
If we take our modern vehicle requirements list and contrast it, we see a strong correlation:
#1 Something that gets me there → Quantified Performance
The Vblock is designed, built, and tested to deliver specific performance levels. On top of this, solution validation for SAP, VDI, Exchange, and more has been and is being done to give specific performance metrics at the application layer.
#2 Will carry enough people/cargo to be economical → Defined Capacity
Vblocks have established capacities and models for sizing, all of which are tested and validated to meet primary design principles.
#3 Built to work for an expected period of time → High Availability
Every component of the Vblock is designed for N+1 or 1+1 models. On top of that, pluggable designs and technology for utilizing DR, multiple active sites, and more make availability as a unit the first priority.
As you can see, there is a strong similarity between these models. This is even more apparent when we contrast our Kit Car requirements with our Modern Car/Vblock model:
Build using parts → Engineered & Assembled
Vblocks are engineered and assembled as a unit. The end goal of a Vblock is a functioning product before it is in the consumer’s hands. Much like how a new car is a single completely assembled unit ready to be used.
Spend time to assemble → Delivered / Fast Consumption
When you purchase a new car you can walk into a dealership and drive out ready to show it off. The Vblock is the same in that it arrives ready to be used. You roll the completely assembled units into the datacenter, connect power, and begin to consume/show off.
Validate components → Validated / Documented
Vblock units are tested and validated before they arrive on your floor. Much like the rigorous testing done on a car assembly line floor, the Vblock process delivers a unit validated to provide a documented utility.
And last but not least, the Vblock units are treated as both repeatable and complete. From a support perspective the entire unit is supported for all the above. VCE assumes responsibility for the assembly, performance, availability, and validation just as a modern car manufacturer would.
If we take a look at the infrastructure lifecycle vs. the personal vehicle lifecycle we see another correlation. The Kit Car does not fit well into modern car needs because very few people can afford/desire to spend a chunk of the car’s useful lifetime ordering, building, assembling, and testing.
Instead, we purchase a vehicle with the expectation of gaining utility quickly. The same is true of the Vblock, in that the goal from ordering to delivery is only 30 days. This drastically reduces the upfront cost of the infrastructure lifecycle and, coupled with the Cloud approach to consumption, leads to a powerful new model for building infrastructure.
Decide / Consume
Wrapping it up
What is interesting about the new Cloud model is that abstraction has reduced the importance once placed on specifics within the infrastructure. A lot of the reasoning for using reference architectures was that you needed a recipe for specific application models that could vary widely. With logical abstraction, the innovation of these components remains important (FAST, Memory Extension, DRS), but the pieces themselves become more decoupled and therefore easier to consume. This ultimately opens up a market for quickly consumable infrastructure designed for the cloud, treated as repeatable units with corresponding support as a unit.
That is in essence why the VCE Vblock brings something new to the table. In my opinion it is uniquely positioned to provide a service that reference architectures could not in the past. And with the enablement of the Cloud model approach, the VCE Vblock also offers a faster, more efficient, and more reliable way to enter this new era.
Sorry for the length of this post. I strongly feel that there is a misperception of the VCE model and wanted to write something to bring some clarity. I promise I will keep it short next time. If you have questions, comments, or criticisms feel free to comment below. I always appreciate your feedback.