Archive for June, 2009

What Now, The Year Is Almost Over!

Virtualization Critical Comparison – Chapter 08

A few days ago, I had to call a friend of mine to discuss Hyper-V. This friend is a smart guy, a player, a mover and a shaker; at the least, when he voices an opinion about virtualization, a lot of vendors listen. It was once said that when my friend and I agree, there is not a vendor on the planet that does not listen. True, it does not mean we get our respective wish or way with the vendor, but they do take serious note. Well, when we don’t agree, that is when things get interesting. And on Hyper-V versus KVM we do not agree; we see each other’s respective perspectives, but we do not agree.

In short, Hyper-V is coming along well with the updates and changes in Windows 2008 R2, including CSV. However, as I have also said in this blog, KVM is coming along as well. With Xen and VMware on the sidelines, for different reasons, this is starting to feel like the finals of a World Cup Football (Soccer) tournament. Why? Well, everyone in the computing industry is watching, everyone. Whether they admit it or not, the finals are here. What? Wait, wait. I hear the calls and yells from the fans in the stadium, saying… What final four? Parallels, Iron Works, etc. are all players. True, but did they make it to the finals this year? No. Virtualization container concepts took it in the teeth this year; this was the year of the mature platforms for virtualization, gearing up for cloud computing, the enterprise scale and, yes, the hypervisor-based operating-system-isolation big strikers.

This year is almost over, and eyes are turning to 2010. As KVM gains acceptance with the bigger life-cycle application vendors, Surgient, ManageIQ, and a few others that slip my mind at the moment, a big limitation with KVM disappears. Whereas Hyper-V paired with SCVMM could be seen as at a disadvantage: Microsoft has not yet, so far as I know, written a truly virtualization-agnostic management framework. Not when SCVMM has to drive vCenter, and vCenter has to drive ESX? The libvirt community is pushing but can only go so fast, although Red Hat is going to change that to a reasonable degree, as will IBM, when they drive more resources and gain results that support KVM, not Hyper-V. So where does that leave my metaphor for World Cup Virtualization, cough, Soccer, wheeze, Football?
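To make the "virtualization-agnostic management framework" point concrete, here is a minimal sketch, in the spirit of libvirt's driver model: management code written against one interface, with backends swapped per hypervisor, and no vCenter-style middleman in between. Every class, method, and guest name below is a hypothetical illustration, not any vendor's actual API.

```python
from abc import ABC, abstractmethod

class HypervisorDriver(ABC):
    """The single interface the management framework codes against."""
    @abstractmethod
    def list_guests(self): ...
    @abstractmethod
    def start_guest(self, name): ...

class _FakeDriver(HypervisorDriver):
    """Stand-in backend; a real driver would talk to KVM, Hyper-V, etc."""
    def __init__(self, guests):
        self._state = {g: "off" for g in guests}
    def list_guests(self):
        return sorted(self._state)
    def start_guest(self, name):
        self._state[name] = "running"

class KvmDriver(_FakeDriver): pass
class HyperVDriver(_FakeDriver): pass

class ManagementFramework:
    """Agnostic layer: the same calls drive any hypervisor backend."""
    def __init__(self, drivers):
        self._drivers = drivers
    def inventory(self):
        return {h: d.list_guests() for h, d in self._drivers.items()}
    def start(self, hypervisor, guest):
        self._drivers[hypervisor].start_guest(guest)

mgmt = ManagementFramework({
    "kvm": KvmDriver(["web01", "web02"]),
    "hyperv": HyperVDriver(["db01"]),
})
print(mgmt.inventory())  # {'kvm': ['web01', 'web02'], 'hyperv': ['db01']}
```

This is essentially the design choice libvirt made: push hypervisor differences down into drivers so the tooling above never has to care, which is exactly what SCVMM-driving-vCenter-driving-ESX is not.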

After KVM and Hyper-V smash into each other, since they are approximately on the same timeline/roadmap for parity in feature set, the winner of this clash will focus on VMware. Yes, yes, I know that regardless of what happens, neither KVM nor Hyper-V will really disappear, but someone has to be the true challenger to VMware. Oracle is still lost trying to figure out how to market Sun Microsystems technology; sad but true, that the precursor to true virtual containers in the modern age of virtualization, Solaris Zones, is cursed to oblivion. Sorry Oracle, but I have real trouble finding anyone that wants to pay a premium for Oracle and be locked into a narrow virtualization platform. Cost is a factor; interoperability is also a significant factor, based on the informal questions I have posed to many of my friends in virtualization.

Xen, as much as I like the platform, is a spectator right now as well. Citrix has its niche and will continue at some level, but does anyone really think Xen will compete well with Hyper-V or KVM, given the resources that Microsoft can bring to the arena of virtualization, or what Red Hat can combined with IBM? Here is a radical idea… Parallels and Citrix should merge! The result would be a mature virtualization container model combined with the now-dominant application virtualization solution? That would be interesting, but the downside is that it would cannibalize Xen. After all, I do believe that operating system isolation cannot dominate the industry for much longer. The pressure to implement cloud computing with the absolute minimum disk footprint on SAN or NAS? Never mind the push to reduce processor packages per server node and increase cores per package? That means, at least to me, lots of smaller, leaner servers hosting applications, databases, and very lean virtualization. Now where has that idea come from… Oh yes, PODS of course.

KVM fits this concept now: you run Linux applications on the server, and when the given server has cycles free you run a few virtual instances, just a few. Radical, but not unheard of; there have been quite a few operating systems that supported primitive application partitions while the parent partition/system did significant work. Hyper-V cannot do this effectively, nor can VMware ESX nodes on vSphere. Think I am crazy? Talk to your respective architecture teams; they are trying to figure out how to get the last 5 or 10 percent utilization out of dedicated application servers, as a dynamic resource to handle overflow capacity needs, in cloud, right?
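A rough sketch of the overflow idea: a hypothetical scheduler that launches guests only when the application server's primary workload leaves headroom. The function name, the utilization ceiling, and the per-guest cost figure are all invented for illustration; a real implementation would read live utilization rather than take it as an argument.

```python
def guests_to_launch(host_util_pct, per_guest_util_pct,
                     ceiling_pct=90, max_guests=4):
    """How many lightweight guests can this application server absorb
    without pushing total utilization past the ceiling?"""
    headroom = ceiling_pct - host_util_pct
    if headroom <= 0:
        return 0  # the primary application needs every cycle it has
    return min(max_guests, int(headroom // per_guest_util_pct))

# A server running its primary application at 60%, with guests that
# each cost roughly 10% of the host, can still absorb a few:
print(guests_to_launch(60, 10))  # 3
# Overnight, when the primary workload idles at 15%, the cap applies:
print(guests_to_launch(15, 10))  # 4
# At 95% there is no headroom at all:
print(guests_to_launch(95, 10))  # 0
```

The point of the sketch is the policy, not the numbers: the host is an application server first and a hypervisor second, which is the inversion that Hyper-V and ESX architectures do not handle well.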

Add comment June 30th, 2009

Learning to Hate VOIP

The Lack of a Virtualization End-User Rant?

I am learning to hate VOIP (Voice over IP). As an end-user of various technologies, mobile, virtual, media based, what-have-you, I have come to hold some technologies in considerable contempt, where others I have come to appreciate as innovative or even supportive of the potential for lifelong blissful use. So, as I noted above, one technology that I have learned not to enjoy is VOIP. VOIP is not evil; it is just, given my experience, not quite right. Good old analog phones were consistent, stable, and the sound quality was reasonable, compared to VOIP. Thus I have learned to view VOIP as horrible but necessary, meaning that as much as I don’t like using VOIP, I refuse to pay for analog, which at times, given my location and service providers, was as much as three (3) times more than any VOIP. My experience with VOIP is across many vendors, in many situations; so it is not a case of a single provider or vendor doing a poor job, but one where they all are less than great, or should I say, less than good old analog.

So what does this have to do with virtualization? An interesting parallel that has not happened as yet, between VOIP and virtualization. For example, I hear people gripe about VOIP on a routine basis; the latest VOIP joke is the top topic at the beginning of conference calls, or the loud comment intended to be soft-spoken when someone on a conference call is the latest victim of the echoing pop, static fuzzing, then line-dead issue that VOIP seems to suffer from more often than I would like. Where is the parallel to virtualization? Why are we not buried in a cascade of angry end-users deploring the negative aspects of virtualization? I always think of the angry mob in the original black-and-white Frankenstein movie, where the local villagers are out for blood, with pitchforks and flaming torches, to avenge the good Doctor at the expense of the monster, when the idea of a virtualization end-user revolt comes to mind. I wonder if other technical resource people feel the same? Sure, there is griping about virtualization, but it has not reached critical mass. I don’t see a flood of Google hits discussing the reversal of virtualization in Fortune 50 companies, whereas the volume of bad VOIP experience is growing slowly. So why does VOIP seem to have an image issue, where virtualization does not?

The easy answer one could submit is that management said to the technology teams… Virtualization is going to happen or else; make it work, or we will find someone that can. Similar in tone, management said to the end-user population… Choice is not an option; use virtualization, deal with it, or we will find someone else that can. This is real, everyone; I have been in meetings and conference calls where these comments were made. But then again, this is the easy answer, and although true to a reasonable degree, it is not the full rationale for why we have never seen an anti-virtualization revolt of any significance.

The hard answer is that everyone, technical support and end-user alike, knew virtualization makes sense; we had too much capacity and scale, and computing resources were not used effectively. But was this a fault of the technology, or of the architects, project managers, and design engineers? Again, this is true, but is it the entire answer? Management is convinced there is waste in computing; this is the new religion, and unfortunately this waste, more qualified than quantified, is often extracted by the reduction of FTE (Full Time Equivalent) man hours. People cost too much. So enter the hundreds of life-cycle vendors and publishers, saying we can do more with less, meaning their application or solution suite eliminates people. Back in my MBA course days, I remember a professor smiling while saying… 80% of your business cost is people, most of the time. Is that the real answer? Reduce head count? For nearly 10 years, processor scaling has grown in the vertical direction: faster, more IOP-based processors. But the cost curve has been near flat. So there was no logical reason not to chase the best computing platform, in reference to processors and related components, because after all no project manager wants to be responsible for implementing a slow system, right? And of course project x, y, z eliminates FTE on the end-user side! Bingo, we end up with extensive capacity, at the same time that networking and storage system performance has made some modest gains. In short, the computing industry has never done well with the idea of doing with less at the beginning versus at the end.

The slick, smooth answer is that virtualization is cool, feels kewl, and makes management happy, no matter how effective it is. Tangible cost avoidance is a result of a good virtualization strategy and even better tactical implementation and support. But it can go wrong. What, ineffective virtualization? Oh, yeah; do a bit of research, mucking up virtualization is not hard, it happens. The emphasis becomes cost saving, not customer quality, and/or the technical team that had the skills and knowledge to support strong virtualization deployment and support is eliminated or out-sourced, so customer experience is impacted. All true, all has happened; just no one wants to acknowledge this. Imagine what a poor implementation of cloud computing is going to be like? Ha!

So, back to VOIP. Yes, VOIP is doing more with less, which is a fact? No, it is providing some features at the expense of quality, with less cost. The quality is less, the performance is less, the stability and consistency are less, but, important to remember, the cost is less. Virtualization is the same idea; even cloud computing is the same basic mindset, doing more with less. What, wait, I can hear many yelling… You are wrong! Virtualization is doing more with what you already have. But is that true? I question it. Yes, having servers idling during the middle of the night was unused capacity. Yes, being able to re-provision systems on the fly, in a stateless model, is nice, but is it really more effective or efficient? Would it not be more efficient to just purchase less equipment and lease less capacity up front? And achieve the same result without all the expensive virtualization components, tools, and added complexity? Oh, but this means the architecture and engineering have to get it right in the beginning? Yes!
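To make the "purchase less up front" argument concrete, a toy cost model with entirely made-up numbers; nothing here is real pricing, it only illustrates how right-sizing at design time can compare against virtualizing over-bought capacity.

```python
def cost_per_utilized_server(servers, server_cost, util_pct,
                             virt_license_per_server=0.0,
                             support_overhead=0.0):
    """Toy model: total spend (hardware, virtualization licensing,
    support) divided by utilized server-equivalents of capacity."""
    spend = servers * (server_cost + virt_license_per_server) + support_overhead
    utilized_capacity = servers * util_pct / 100
    return spend / utilized_capacity

# Hypothetical figures throughout:
# Over-bought: 20 servers idling at 15% average utilization.
baseline = cost_per_utilized_server(20, 5000, 15)
# Consolidated: 5 virtualized hosts at 60%, plus licenses and support.
virtualized = cost_per_utilized_server(5, 5000, 60,
                                       virt_license_per_server=2000,
                                       support_overhead=10000)
# Right-sized up front: 4 servers at 70%, no virtualization layer.
right_sized = cost_per_utilized_server(4, 5000, 70)
print(baseline > virtualized > right_sized)  # True
```

With these invented inputs, consolidation beats over-buying, but getting the sizing right in the beginning beats both, which is exactly the point: virtualization recovers waste that better architecture and engineering would not have created.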

Don’t be fooled, the end-users are not thrilled with virtualization so far; they have been pushed, threatened, and kicked into accepting the following perspective… It is a just-good-enough computing resource model. Virtualization, no matter how good the tools and procedures, requires smart, dynamic, quality support; without this, expect to have a sinkhole in your respective technical support organization that absorbs issues and problems but never solves all of them, so confusion and then frustration build. With VOIP it may be more visible, because VOIP is an average consumer-level product. Just consider an end-user revolt against cloud computing based on virtualization? It may be more painful when it happens than, say, the VOIP revolt? Is VOIP the canary in a coal mine? The warning indicator of how virtualization will be regarded in the future? Maybe, or will we all accept less overall, believing we are getting more for less?

3 comments June 16th, 2009

Virtualization Adoption Strategies Visited

Virtualization Critical Evaluation – Chapter 12

Fired up the good old iPod; old, since it is an original Video iPod via iTunes, which compared to the iPod Touch is more than obsolete. Try comparing a Ford Model-T to a SmartCar: although they seem similar in some ways, they are worlds apart. The iTunes semi-randomization or shuffle playlist selected Depeche Mode, Music for the Masses, which has a number of songs that just happen to match the topic for this blog entry. For those that are not Depeche Mode fans, the song… Never Let Me Down Again… is the song I am referring to, among others. This specific song always seems to come to mind when I look at general release code that is something-dot-zero. What is it about 1.0, 2.0, 3.0, etc., as a concept, that always freaks me out? It is not as though .0 releases are always good, bad, or ugly, is it? What is the old axiom from my business undergraduate degree days? Oh, yes… Never buy an automobile from Detroit that was made on Monday.

There are, from my perspective, three (3) basic scenarios for adoption of a new architecture/solution/infrastructure. This is not rocket science: more Common Sense, versus Profit Driven, versus, how should I say this… Just-Must-Have. The definitions of the three approaches are outlined below, per my perspective.

  • Common Sense – Cool your jets, cadet; what many would call a late-adopter strategy. That really does describe the concept: let someone else break their teeth on the newest release. In the case of VMware, which I would call middle of the road in the quality of release code, it is Update 1 or version .0.1 that achieves the next order of magnitude, a reasonably bug-free state. No, I am not saying VMware, Xen, or even Microsoft released horrible release code, which at times they have done, only that every code shop must at some point declare a .0 release as done, and prioritize any remaining issues for resolution in the next incremental release. In virtualization this is significant, because of the eggs-all-in-one-basket issue with virtualization hosts. An issue on a host is always a factor times 16, 20, 24, or more, since multiple virtual instances are often impacted. It is not uncommon for some firms to let a general release, that is .0, mature 90 or 120 days. In the case of ESX 3.0, given some nasty issues, and a major change in the ESX OS for a number of reasons, I know of one firm that waited more than 6 months to adopt a well-patched ESX 3.0.1 before leaving ESX 2.5.2, after several years on 2.5.2. Yes, 2.5.2; it was stable and consistent for them, so moving to ESX 3.0 soon was not reasonable.
  • Profit Driven – This is similar to an early-adopter model, with the motivation specific to the situation. There can be a situation where a given general release has features or even fixes that the older major version just does not have or will never achieve, so the newest version must be leveraged. Or the situation is such that waiting for a less painful implementation carries a significant opportunity cost, or even a competitive advantage issue. Whatever pain is encountered with the .0 release is deemed acceptable because a significant feature is critical to near-future success? The term bleeding edge is sometimes referenced for this scenario, but that is misleading, at least from my point of view. I would define the bleeding edge as implementing production on release-candidate code, rather than general release code. Microsoft at times has done this on their internal infrastructure; for example, this was done with Hyper-V, since Microsoft deemed the later versions of Hyper-V stable enough for such action before the official general release was out the door.
  • Just-Must-Have – This approach is often confused with the Profit Driven approach. Management is profit driven, but there are personalities in the command structure that just want the latest and greatest, or the next wonder solution, because they can mandate it. Of course, all the official communication will dress the demand for action in a profit justification, market share versus competitive feature set, buzz words inclusive. But in truth, somewhere in the dark, someone just wants the latest new toy. New toys cost real money, in training, issue resolution, and mistakes, throughout the entire vertical product stream, from OEM to customer. Consider that there is always someone, somewhere, that needs the latest and greatest. Think about it: was there not someone you knew that had a High Definition (HD) television that cost $10,000 or more, when there were at most a handful of television shows in HD? And how often did that wonderful HD unit fail? Was the service contract at least 10% of the total purchase cost, if not more? Now the same basic television, about 5 years later, is around $600, and still HD television is not universal across the board? Thank goodness virtualization assimilation is not quite that slow!

Now, I am sure some are wondering what my strategy is? Well, let me explain it this way. Yes, I own a SmartCar. No, I did not get it right away, so I did not exhibit the Just-Must-Have approach. Although I have a close relative that did get a SmartCar very soon after they came to the United States, and yes, they experienced some issues about 1 week after they got it. Some will say, but the SmartCar has been around in Europe for years; true, but driving in Europe is not the same as in most of the United States. So, did I exhibit the Profit-Driven approach? Yes and no. Yes, in that I decided on and ordered my SmartCar when fuel costs were well over $4.00 US. No, in the sense that the SmartCar had been in the United States for about 18 months before I got mine, given that there was about a 6 month queue between order and receipt of a SmartCar. So I would say my approach was part Common-Sense, but not completely.

As for virtualization adoption, specific to .0 releases? I never recommend anything but the Common-Sense approach. This is regardless of the vendor, the situation, or the environment selected. Virtualization should be stable, safe, and consistent; if not, it generates more headaches than anyone should have to deal with, never mind the endless-seeming long days and very odd hours on the phone with vendor technical support, where we are all scratching our heads. True, I often get yelled at for this, and at times it has been at the expense of my personal standing or reputation! However, history has proven I have been right, and the very same individuals that absolutely hated my view on virtualization adoption, with reluctant, grudging admission, noted that I was, after all, correct. In the course of time, those that disagreed have been mollified if not thankful for my stubborn stance on late adoption of virtualization, when it comes to .0 release implementations or infrastructure migrations to same.

Some will ask, why this topic now? That is a good question. With the recent release of vSphere 1.0 (insert polite cough here), including vCenter 4.0, ESX/ESXi 4.0, etc.; moreover, Microsoft Windows 2008 R2, with some key features that support improved Hyper-V platform use and function, for example the Clustered Shared Volume (CSV) 1.0 feature coming into its own. Not to ignore RHEL 5.4 with KVM .83, .84, or maybe even .85, thus approaching KVM 1.0? Now seemed a reasonable time.

3 comments June 9th, 2009
