Archive for October, 2009

How Do You Say Good Bye?

So The Old Virtualization Platform Is Just Not Doing It for You Still (Part 2 of 2)

I suggest reading the blog entry When Is It Time to Say Good Bye? before continuing with this one. It is not required, but it is recommended.

Now that the decision has been made to change virtualization platforms… What is important? What should not have been forgotten or lost? What must be reaffirmed to be successful? There are three tasks key to saying goodbye to the declining virtualization platform, and hello to the emerging one. All are easy to itemize, two are easy to complete, and one is not painless but should be a known path, provided the lessons learned in the past are not forgotten and the skills developed in the past are not abandoned.

  • Tell the original virtualization vendor… Goodbye! It was nice knowing you, you saved us money, thanks, but don’t let the door hit you in the ass on the way out!
  • Tell the new virtualization vendor… You are wonderful, but unless you are cheap and do everything we want immediately, you are gone, and you have been warned. Oh, and we signed a multiyear contract, but don’t expect it to be renewed, dude, unless your platform is better than expected.
  • Find all the documentation and the people who implemented the declining virtualization platform, and get the ones worth leveraging working on the emerging virtualization platform sooner rather than later. What? They were laid off? Reassigned? Or otherwise relegated to a dark corner, where they do nothing but drool and speak in a weird dead language no one has heard for 100s if not 1000s of years? Say it is not so!

Now of course, there is considerable humor implied in the above characterizations. Professionalism precludes taking them literally, so if anyone thinks said characterizations are realistic, my honest response is… What are you smoking, dude?

What is important? Moving to an emerging virtualization platform means adopting a new platform, and although experience is important, don’t let it bite you in the ass by failing to realize that a new platform is just that… a new platform. All the typical stuff, the technical mumbo-jumbo, applies, so no need to dive deep into it here: the new platform should perform as well as, or better than, the declining solution. If not, the scale will change in a significant manner, vertical because better hardware is needed, or horizontal if more virtual instances are needed to deliver the same performance to end-users. This impacts cost, of course. The cost avoidance model will change to some degree no matter what is done, so be prepared for that situation. Everything from physical environmental factors to logical processing factors will change; don’t let this surprise anyone, including the customers. Again, this is a new platform; respect it as such. Engineering by Fact is a good concept to keep in mind. So is Management by Fact: once the emerging solution is designed, tested, and implemented, expect surprises and differences based on actual results and situations. Unloading one hypervisor and loading another, even when stateless makes it simple, is just the tip of the iceberg. Management will want to believe otherwise; don’t let that happen.

What should not have been forgotten or lost? Some will disagree with this point, saying that things are not forgotten. Well, yes, they are. The skills that were needed to establish a viable and robust virtualization platform are not identical to those needed to maintain one over time. This is not a classic engineering-versus-operations scope argument; this is a talent-versus-skill debate. Virtualization, as many of you may recall, took insight and a bit of guesswork to get off the ground. Yes, it was new, and no one really knew what the upper limits were, etc., etc. Well, once again, that mindset is needed: walk before running, and when running, watch out for potholes. Scaled testing in a lab is never all-inclusive, but the temptation will be to rely on the past. That past experience will color the results gained in the lab; that is one of many potholes. Application designers and developers are not going to find the emerging virtualization platform easy to understand or deal with, so build that confusion and frustration into the master plan. Remember how some issues came out of nowhere when the declining virtualization platform was implemented? It surprised more often than not, right?

What must be reaffirmed to be successful? This is the ugly question. Why? Because what is already virtualized is just that, virtualized, and will be migrated in some fashion, be it virtual-to-virtual, OVF, or whatever. Beyond the performance and response differences, and the resulting marginal cost delta, big or small, this is not the low-hanging fruit it once was. Those days are gone. If this is the first migration away from the very first virtualization platform, it is going to be a culture shock event. Massive cost avoidance, beyond the savings (hope it is savings) in virtualization vendor licensing fees, has already been accrued and tabulated, so that easy gain is toast. Long-term success is now based on incremental cost savings per host, per virtual instance, and per the flexibility of the environment as managed, cough, cloud, or such, providing services and solutions on demand, with ease of availability.

How can ease of availability be enhanced or improved? A workload-management solution, a deployment-automation solution, and/or a stateless-model solution may or may not be opportunities for cost savings in the emerging virtualization platform. This is not a simple feat; if such components are or were already core to the declining virtualization solution, additional cost avoidance out of the cloud vapor, as it were, may not be possible. In fact, adapting to the emerging virtualization platform might even increase expense sooner rather than later, ouch, taking time to net savings over time, ouch. Again, plan for this and control expectations; the emerging virtualization solution will need to mature and balance out within the organization, just as the declining virtualization solution did or required time to do, changing the culture of the organization, again. As my mother, who was a manager for more than 30 years at a Fortune 10 firm, was fond of reminding me… The only constant is change. To which my reply always was… Why does management always seem to forget that all change takes time? Do it!

October 29th, 2009

When Is It Time to Say Good Bye?

So The Old Virtualization Platform Is Just Not Doing It for You Still (Part 1 of 2)

This time of year, a lot of hard decisions are made in enterprise firms; I am sure many will identify with this statement. The New Year is approaching, the budget is due, the bonus, if any, is on the line, and now is the time to pull that rabbit out of the hat. There are a number of reasons to keep or leave your existing virtualization platform; this article will explore a few points on why a platform should or should not be continued, with the premise that it is time to say goodbye. The basic logic is evaluative analysis, something any of us in the computing industry know, or should know, well. The two key questions are:

  • Is the solution effective?
  • Is the solution efficient?

The basic objective is to save expense, total expense; this includes factors beyond the virtualization platform, and is reflective of the above questions of effectiveness and efficiency:

  • Can the platform be revised, improved, or otherwise developed?
  • What does management want, really want?
  • What is the complexity of the effort?
  • What is the time line and scope of the effort?
  • Can the customer base survive the transition?

The answers to these questions will be environment specific; for the sake of illustration, each question will be explored in conceptual context only.

Is the solution effective? If the solution has been around for a few years, the expected answer would be yes. However, early in virtualization experimentation by a number of firms, mistakes were made, and true cost avoidance or savings may not have been as expected or hoped. Often virtualization platforms are kept conservative and structured with caution, but over time, expectations of continual economies of scale, demanding more for less from the original design, cloud the results, if not the perception, of effectiveness.

Is the solution efficient? Just because initial cost avoidance was good, or better than expected, does not mean the virtualization platform was well managed over time. This is an unfortunate but common problem for many firms that have significant virtualization. Staff changes or reorganizations often break efficiencies gained as the virtualization platform matured, and just when the solution should be a smooth operating entity, things go wrong. Both management and engineers just cannot seem to leave something that works alone.

Can the platform be revised, improved, or otherwise developed? This should be an obvious yes. If not, then something is crippling the solution, or someone has made some horrible decisions about it. Management is often impatient for results; a short-sighted mindset about virtualization is often a key factor in the failure of the solution or a lack of rational expectations. It takes years for a virtualization platform to mature, regardless of the vendor. The vendor needs to be stable, consistent, and responsive; if not, walking away from the solution is more than possible. A struggling vendor is a key indicator that years later nothing but pain will result. The solution should work at reasonable scale, and a maturing feature set should never mask foundational or core functional issues. Fixing bugs and improving stability cannot take a back seat to new feature development.

What does management want, really want? Never overlook the fact that management may just not like the solution in place. The reasons for this are endless, but they often come down to two issues. The first issue is cost: management hates paying for anything twice, so what was acceptable at the initial deployment of virtualization is often a problem a few years later. VMware has this issue today: VMware costs go up while customers expect costs to go down. The feature set grows, but most customers do not use all the features, or worse, are forced to purchase many features just to get a few key needs addressed. This scenario just ticks management off and is often the reason a given solution is kicked to the curb even when successful. The second issue is competition: the best solution in the world, and I have discussed this issue in the past, often does not gain the greatest market share over time, Sony Betamax versus VHS, Apple Macintosh versus PC clones, etc. True, many will point to the iPod, but in fact the iPod dominates only because it has no real or true competition; once there is something better, or even close to the iPod? Guess what! For example, the iPhone is already facing this issue with the latest generation of cell phones from other vendors, which are approaching the iPhone’s class of service and range of features.

What is the complexity of the effort? Or in other words, is it easy or hard to walk away from the existing virtualization platform? Today, moving from VMware VI3, e.g. VirtualCenter and ESX, to, say, KVM or Xen is not painless, but it is less painful every day. KVM and virt-manager make it easier to leave VMware vSphere now that KVM/virt-manager support OVF via virt-install, with Open OVF expanding to support Hyper-V. In fact, once OVF supports references to virtual machine disks rather than embedding virtual machine disk data, the transportability of virtual instances will be almost seamless, really closer to a classic cold migration, since common shared storage is leveraged.
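As a sketch of how scriptable such a move already is: the guest name, paths, and sizing below are hypothetical, and the commands are echoed as a dry run rather than executed, so nothing here should be read as a definitive procedure. Remove the leading echo to run them against a real copy of a datastore.

```shell
#!/bin/sh
# Hypothetical V2V sketch: convert a copied VMware disk to qcow2 and
# register the guest with libvirt. Guest name, paths, and sizing are
# illustrative examples only. Commands are echoed as a dry run.
GUEST=web01
SRC="/mnt/esx-datastore/${GUEST}/${GUEST}-flat.vmdk"
DST="/var/lib/libvirt/images/${GUEST}.qcow2"

# qemu-img (ships with KVM) converts the VMDK to the native qcow2 format.
echo qemu-img convert -f vmdk -O qcow2 "$SRC" "$DST"

# virt-install --import skips OS installation and boots the existing disk.
echo virt-install --import --name "$GUEST" --ram 2048 --vcpus 2 \
     --disk path="$DST",format=qcow2 --noautoconsole
```

The point of the sketch is that the whole path is two commands per guest, which is exactly why the pain keeps shrinking.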

What is the time line and scope of the effort? Well, this question is not fair; it is a trick question, of course. If the complexity of the effort is near painless and/or management has already decided it is time to go, then do the time line and scope of the migration to a new solution really matter? Yes and no. Yes, in the sense that a complex migration may delay the migration. No, in that once the action plan is defined and executed, the writing is on the wall. Is it not interesting that Microsoft has not been splashing all over the world how they are stealing customers hand over fist from VMware? Why? Well, Hyper-V, until 2008 R2 with CSV (Cluster Shared Volumes), was not a true challenge for VMware VI3, never mind vSphere. Of course, KVM is still maturing, and Oracle is stuck trying to figure out what to do with SPARC, so Zones never made it to the sandbox. But once the pain of migration is resolved, watch out! Microsoft and others learned a number of things from killing off Novell. Microsoft Windows NT Server, even with its faults, was a general application server that was easy to manage and use, oh, and it did file serving as well. Microsoft made sure moving off Novell was as painless as possible, with CSNW (Client Services for NetWare) and GSNW (Gateway Services for NetWare), for example. NetWare 4 was a superior solution to Windows NT for serving files and using LDAP, but still, it was far more painful to go from Windows to Novell than from Novell to Windows.

Can the customer base survive the transition? Aw, shucks, the end-users, who cares about them anyway? This is the most obvious sleeper issue. What? Wait! What the heck is an obvious sleeper issue? I gave this one away in the answer above… CSNW, for example, let a desktop think it was working with a Novell server when it was really a Microsoft Windows NT server. It was about as transparent and seamless as possible, considering the radical differences between NetWare and Windows NT. CSNW made the end-user experience, if everything was done right, as painless as possible. Now consider virtual instances, which are several steps removed from hardware and even platform specifics, combined with V2V (Virtual-to-Virtual) tools and methods. Or even easier, a simple reboot, because KVM can use VMDK files directly? Sure, KVM VMDK support may not offer the best performance, but on the standard pain scale it is more like an itch to be scratched than a needle prick. OVF is another option as well. Unless someone is just plain goofy, customer impact should not be a factor.
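To make that itch concrete, here is a minimal sketch of the reboot path, with no conversion step at all, since qemu/KVM reads the VMDK format in place. The disk path and sizing are hypothetical, and the command is echoed as a dry run rather than executed:

```shell
#!/bin/sh
# Hypothetical: boot an existing VMware disk under KVM with no V2V
# conversion. The path and sizing are examples; the command is echoed
# as a dry run.
DISK="/var/lib/libvirt/images/app02.vmdk"

# format=vmdk tells qemu to read the VMware disk format directly.
echo qemu-kvm -m 2048 -smp 2 \
     -drive file="$DISK",format=vmdk \
     -net nic -net user
```

From the end-user side, that is one reboot, which is the whole point about customer impact.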

As for the key question of this article… When Is It Time To Say Goodbye? When the answer to all of the questions above is… No big deal. Sure, many will say that the management demand issue trumps most if not all of the others? Well, I disagree. Virtualization platforms that are run well, work well, and avoid cost take years to implement, and they take years to retire at an enterprise-scale deployment with 100s if not 1000s of hypervisor hosts and 10,000s if not 100,000s of virtual instances. Moreover, there is one superordinate concept that will take most if not all of the remaining pain out of enterprise-scale migrations between virtualization platforms on demand, wait for it, wait for it… Stateless! The emerging acceptance and implementation of stateless computing concepts, at the hypervisor level as well as at the guest OS level in virtual instances, is or was the last technical foundation stone keeping virtualization platform mass migrations from taking place with little or no pain. Is 2010 going to be insane or what? Will it be the year of mass virtual migrations? I think so. Do you?

October 8th, 2009

