Optimization vs. Scaling: How virtualization affects the scorecard

Many times I have seen situations where an application or process grows incrementally to a point where it is no longer able to meet its SLAs (whether official or imaginary). The cause can vary, but it is usually one of the following:

  • Overworked/unbalanced teams – Too much effort dedicated to adding new features and not enough to paying down technical debt.
  • Poorly planned systems – Designs built for the immediate need without accounting for future needs such as instantiation or scaling of decoupled components.
  • Poor maintenance/understanding – Lack of the knowledge or effort needed to tune the application/process to use resources more effectively. This can exist in both the application and infrastructure groups.

Usually the performance degradation is known early on but accepted because the business users are not making a big enough stink, or at least not one big enough to reduce the drive for new features. In addition, the lack of monitoring and baselining of application performance is a critical problem: it eliminates the ability to effectively plan for growth and manage team resources.
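
Even a crude, scheduled check is enough to start building that baseline. Below is a minimal sketch in Python; the health-check URL, sample window, and SLA threshold are hypothetical placeholders I have assumed purely for illustration. It samples response time and flags when the 95th percentile drifts past the target.

    # Minimal baselining sketch: sample a health endpoint's response time
    # and flag when the 95th percentile exceeds an assumed SLA target.
    # The URL, window size, and threshold below are hypothetical placeholders.
    import statistics
    import time
    import urllib.request

    ENDPOINT = "http://app.example.com/health"   # hypothetical health-check URL
    SLA_SECONDS = 2.0                            # assumed SLA target

    def sample_once():
        """Time a single request to the endpoint, in seconds."""
        start = time.time()
        urllib.request.urlopen(ENDPOINT, timeout=10).read()
        return time.time() - start

    samples = []
    for _ in range(30):                          # one short sampling window
        samples.append(sample_once())
        time.sleep(1)

    p95 = statistics.quantiles(samples, n=20)[18]    # 95th percentile
    print(f"median={statistics.median(samples):.2f}s  p95={p95:.2f}s")
    if p95 > SLA_SECONDS:
        print("Baseline breach: p95 response time exceeds the SLA target")

Run on a schedule and logged somewhere durable, even something this simple gives the team a trend line to argue from when priorities are being set.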

Eventually the impact reaches a point where someone significant (a business executive) resets priorities to fix it (the technical debt comes due). Many options will be evaluated immediately, from trying to buy time by tuning components to hunting for misconfigurations. However, if no easy answer exists, it usually comes down to two options:

  • Optimize the application/process (Fix the code)
  • Scale the application/process with faster hardware (Throw metal at it)

Both of these options impact the same core factors: time and money. Depending on what time of the year it is, what the next feature would be, and what staffing is available, the choice can go either way.

Optimization has the benefit that better-running code has long-term cost-effectiveness built in. But optimization can show a diminishing rate of return when used repeatedly without a complete architectural rewrite. Coupled with this, optimization often consumes productivity from the same teams that were unable to spend cycles maintaining the code in the first place. Also, the cost of optimizing can be much greater than the labor involved; it includes what a possible delay of new features does to the firm's overall revenue and commitments.

Scaling benefits from not directly affecting the product teams, being focused instead on configuration and infrastructure resources. Scaling is also usually easier to estimate and deliver, since both the current application design and the hardware resources are generally known. Where scaling can lose ground is in risk and cost. Any time a change is made, a risk is taken. Moving an application from one set of hardware resources to another, whether server, SAN, or network, can end up being more disastrous for the business than the performance issue itself. Failures in configuration, QA testing, implementation, and planning are a dime a dozen. Cost can also be a problem when dealing with a fixed budget: the next incremental step to scale might be a big pill to swallow given the wiggle room available. Along with the hardware itself, cost shows up in long-term commitments to power, space, cooling, and the staffing needed to maintain ever-growing data centers.

Another factor is what I call the optimization bias. If scaling will cause an IT leader to both beg for more money and possibly incur the risk of an outage, he or she may decide to trust in the application team instead. It is better to risk the schedule under the covers than business operations and budget above the table.

This is where virtualization can change the balance of this decision, by improving agility and cost-effectiveness and by significantly reducing the risk of migration when scaling. I see virtualization as both a layer and a toolkit. It directly changes the balance of choices in the following ways:

  • Reduced configuration, schedule impact, and risk in migration
    • With an application that resides on virtual machine(s), migration to new servers, SAN, or networking can be performed without a single change to the configuration of the application itself. Technologies like VMware's VMotion and Storage VMotion, along with partnerships with major storage vendors such as EMC, NetApp, and 3PAR, allow the application to truly be treated as an object. This eliminates the need to build platforms in parallel at anything above the hypervisor layer, and it can be made even simpler with newer stateless platforms such as Cisco's Unified Computing System (UCS), which can reduce hypervisor provisioning time dramatically. In the end this can remove the need for QA resources and shorten implementation schedules. In some cases migration can even occur while the application is under load (see the API sketch after this list).
  • Better efficiency and life-cycle management
    • Server consolidation has been a foundation of the VMware platform for a long time. When an application is primarily vertically scalable, the efficiency of virtualization becomes part of the life-cycle. An example is a database that must reside on a single server (not easily decoupled). In a physical migration, after the new server has inherited the application, the old server is placed into an equipment stack. Utilizing the old server for another existing application means starting all over again with another migration carrying the same risks. More often than not, this resource sits idle until it depreciates off the books or is needed in a lab.
    • In a virtual migration the new server would be added to an existing or new VMware cluster. When the application is moved to the new server, the old server is still present and available for use. In fact, with technologies like VMware DRS, this server would be immediately put to use for existing application loads. The available resources are equal in both the physical and virtual models; however, the virtual model abstracts resources as a pool. This promotes efficiency in the long run and can significantly reduce hardware resource management and, ultimately, data center cost.
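
To make the "treated as an object" point concrete, here is a minimal sketch of driving a live migration (VMotion) through the vSphere API with the pyVmomi Python bindings. The vCenter address, credentials, VM name, and target host are hypothetical placeholders and error handling is omitted; treat it as an illustration of the idea, not a production script.

    # Minimal pyVmomi sketch: live-migrate (VMotion) a VM to another host.
    # All names and credentials below are hypothetical placeholders.
    import atexit
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()       # lab use only; validate certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    atexit.register(Disconnect, si)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Walk the vCenter inventory and return the first object with this name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "app-db-01")               # the application's VM
    target = find_by_name(vim.HostSystem, "esx-new-01.example.com")  # the new server

    # Start the migration; the guest keeps running and its configuration is untouched.
    task = vm.MigrateVM_Task(host=target,
                             priority=vim.VirtualMachine.MovePriority.defaultPriority)
    print("Migration task started:", task.info.key)

The same pattern extends to storage moves via the RelocateVM_Task call; the key point is that nothing inside the guest or the application configuration has to change.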

The advantages of virtualization mean that scaling as an option gains ground against optimization. Though every situation is different, the added sophistication and agility of virtualization give any IT leader possibilities they may not have had before.

By far my favorite part of virtualization is what it can do when coupled with a very well-designed application platform. When application platforms are designed to be decoupled and horizontally scalable, virtualization can be used most effectively. Single components of the system can be quickly migrated to new hardware as needed, loads can be dynamically managed by VMware DRS, platforms can be quickly instantiated for new customers, and resources can be leveraged across physical locations at reduced cost. The major project of my career, which I am working on right now, is aimed at exactly this goal.
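
As a small illustration of the "loads dynamically managed by DRS" piece, the sketch below enables DRS in fully automated mode on a cluster through pyVmomi. The cluster name is a hypothetical placeholder, and it assumes the same connection and find_by_name helper from the migration sketch above.

    # Minimal pyVmomi sketch: enable DRS in fully automated mode on a cluster
    # so VM loads are balanced across hosts without manual placement.
    # "prod-cluster-01" is a hypothetical name; si/find_by_name come from the
    # earlier migration sketch.
    from pyVmomi import vim

    cluster = find_by_name(vim.ClusterComputeResource, "prod-cluster-01")

    spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(
            enabled=True,
            defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated))

    # modify=True merges this change into the existing cluster configuration.
    cluster.ReconfigureComputeResource_Task(spec, modify=True)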

As always, please comment if you agree or disagree.


Commentary


  1. Virtualization can truly save you time, effort, and money (soon). But you need to have a system that is proven to help you with managing your database. I have had a lot of systems and software in my business; when I started going virtual, it wasn't easy. I am thankful to have good HR management software, sales, stock management, warehouse management software, and more. Together with virtualization comes the risk of the virtual virus. However, all you need is a system, the software, a very good support team (IT), and most importantly a backup of each important document.
