Archive for March, 2008

Is Software Product Certification on Virtualization Today A Given?

Know What Virtualization Is, But What Is Next? – Chapter 11

By now just about everyone who keeps track of what goes on in virtualization circles has heard about the Symantec Antivirus and VMware Virtual Machine issue, right? Even I am not sure I understand all the implications. But I can think of one question that is significant: how the heck did this happen? Even if the issue turns out to be a VMware issue and not a Symantec issue, or vice versa, or something else entirely, something just does not add up. Somewhere, somehow, this issue should have been caught. Let me be clear, I am not saying who is at fault; I am simply asking how this issue happened. I sincerely hope it is not a comedy-of-errors scenario. Furthermore, since this issue did happen, I expect some real changes coming to how software publishers do quality assurance testing and to what their respective clients will demand, yes, demand.

Do all software publishers test on virtualized platforms now? I cannot answer this question, but absolutely every software publisher should. Do your favorite software publishers test on virtualization platforms? Do you know the answer to this question? Ok, which ones? If not, do you have the right to expect it? Yes, of course you do. I don’t know what specific publishers do or do not do in reference to quality assurance testing, but I do know that with this recent issue between Symantec Antivirus and VMware, every vendor agreement in every major corporation is being reviewed and rewritten. It is something that corporations using enterprise solutions will demand, no? You bet they will. In fact, I will go a step beyond this basic requirement. I believe that corporations will demand testing schedules and explicit test plans, and will even develop customized testing models that they will expect, no, correction, will demand software publishers explicitly execute to enterprise-class expectations, including on virtualization platforms.

Now software publishers that refuse to do this, even hardware vendors that refuse to do this for BIOS and firmware updates, will see key enterprise clients walk. Yes, walk. There is no room to debate this, especially for financial, government, and other key industries. I can hear the inflexible vendors crying and moaning from all the way here, in front of my monitor as I am typing this text. Testing costs time and money, this is unreasonable, etc., etc. Bull. I can also see flexible vendors using their improved test models, which now explicitly include virtualization platforms, as part of their overall marketing efforts. Why not? As an enterprise customer, would you not want to know that a given software publisher is actually testing on the same virtualization platform that you are using today? Talk about common sense in product evaluation and selection. I can see any lack of testing on virtualization nailing a software publisher; in fact, we may even see a software publisher go bankrupt by ignoring leading virtualization platforms as viable targets for overall product certification.

Why did this situation happen? Speaking hypothetically? Well, at some point someone on the publisher side of the industry decided myopic vision was cost effective, of course. How was this justified? Sure, virtualization is so good, and so stable, and so consistent, that we, the trusted software publisher, just never needed to worry about testing on any virtualization platform. Oh, right, sure! How is that working out for you? If you look at this issue from the outside looking in, is the lack of testing on at least the most popular virtualization platforms more than just an honest oversight? Nuts. What publisher is not using virtualization, just like the rest of us in the industry, to save costs in lab or quality assurance infrastructure? How about the reverse scenario? A software publisher that only tests on virtualization, never on hardware? Now that would be insane as well, but given the level of virtualization today, it would not be a complete surprise, not to me.

As I said before, I am not sure what happened or how, in reference to the recent developments, or should I say difficulties, that Symantec and VMware encountered. But if this is not a very loud, obvious, and intense wake-up call to all software publishers that the significance of certification testing on virtualization platforms must not be overlooked, then what is? Having one or more enterprise customers smack the next software publisher that fails to learn from recent history in the proverbial head? No, not the proverbial head, but the profit margin; after all, loss of revenue is the most painful impact, right?

March 25th, 2008

Why Do Hypervisor Based Platforms Dominate Virtualization?

Know What Virtualization Is, But What Is Next? – Chapter 10

Why are hypervisor based platforms for virtualization dominating the industry? There are a number of answers that come to mind:

  • P2V (Physical To Virtual) Migration Ease
  • Avoiding DLL or .NET hell (Yes, .NET Hell)
  • Failure of Utility Computing?
  • Dominant Operating System Support Gap

Hypervisors are easy for technical personnel to understand at the 1000-foot level. No, they are not easy to manage, they are hard to understand at the 10-foot level, and they require extensive skill to architect, design, and integrate as enterprise platforms, never mind support. Of course technical management hates to acknowledge this, but it is true. So anything that makes some part of virtualization easy gets the focus, and in the pursuit of pain avoidance, dominates thinking. This becomes quite obvious when you consider the list of issues noted above. Let us explore each a bit more!

Physical-to-Virtual is a given. Moving from very old hardware to new virtualization-agnostic platforms is a pain to do unless you have dedicated 1Gb pipes, and almost painless if you actually have 10Gb connectivity. All the variants, P2I (Physical-to-Image), I2V, V2P, I2P, etc., are applicable; no significant changes are made, the operating system is kept intact, and more importantly the application installations are kept functional. When you have 100s or 1000s of migrations to get done, of course you want to keep it easy and straightforward. I cannot tell you the number of times I have heard business management say, just move my application, I don’t want to re-install or even update the operating system. Most of the time this is because they refuse to let the technical teams have the time to do it right, or worse, technical management has no clue how to do it consistently, often because they have laid off the knowledgeable staff that really knew the applications in question well.
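The bandwidth point above is easy to quantify. Here is a back-of-the-envelope sketch in Python; the image sizes, migration count, and the 0.7 link-efficiency factor are all assumptions for illustration, not measured figures:

```python
def transfer_hours(image_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough wall-clock hours to copy one disk image over a network link.

    image_gb   -- image size in gigabytes
    link_gbps  -- nominal link speed in gigabits per second
    efficiency -- fraction of nominal bandwidth actually achieved
                  (protocol overhead, contention); 0.7 is an assumption
    """
    gigabits = image_gb * 8
    seconds = gigabits / (link_gbps * efficiency)
    return seconds / 3600

# 500 serial migrations of hypothetical 40 GB server images:
per_image_1g = transfer_hours(40, 1.0)
per_image_10g = transfer_hours(40, 10.0)
print(f"1 Gb/s:  {500 * per_image_1g:6.1f} hours total")
print(f"10 Gb/s: {500 * per_image_10g:6.1f} hours total")
```

The exact numbers do not matter; the order-of-magnitude gap between the two links is what turns a migration project from a pain into something almost painless.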

Avoiding DLL hell, well, this does not need much explanation now, does it? But .NET hell does, yes, .NET hell does exist. .NET, as much as Microsoft would like us to believe otherwise, does almost nothing to fix the basic issues with DLLs. At best, .NET hides the same old issues in the details. At worst, it makes things harder to resolve. Microsoft has never resolved the DLL memory loading issues, never changed the memory mapping model for DLLs to establish true DLL interoperation between mismatched frameworks. I could go on, but anyone that has written serious code for the Windows platform understands these points. Microsoft’s slogan for .NET should be… The .NET Devil is in the DLL Details. Until mismatched DLL issues really disappear, the isolation benefits of hypervisor virtualization will help with the avoidance of pain.
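The underlying problem is easy to model: one shared load space cannot satisfy two components that pin different versions of the same library, while per-machine isolation trivially can. A minimal sketch, with hypothetical component and library names, not any real Windows or .NET resolver:

```python
def resolve(shared_space: dict, requirements: dict) -> list:
    """Return the components whose pinned library versions conflict
    with what is already loaded in a single shared space."""
    conflicts = []
    for component, pins in requirements.items():
        for lib, version in pins.items():
            loaded = shared_space.setdefault(lib, version)  # first loader wins
            if loaded != version:
                conflicts.append(component)
                break
    return conflicts

# Two apps on one OS instance, pinning different versions of 'crypto.dll':
apps = {
    "billing":   {"crypto.dll": "1.0"},
    "reporting": {"crypto.dll": "2.0"},
}
print(resolve({}, apps))                             # shared: reporting loses
print(resolve({}, {"billing": apps["billing"]}),
      resolve({}, {"reporting": apps["reporting"]}))  # isolated: no conflicts
```

Give each application its own virtual machine and each gets its own shared space, which is exactly the pain-avoidance hypervisor isolation buys.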

Failure of Utility Computing? What the heck is this? To put it in simple terms: application instances, application framework components, etc. But it is a bigger issue. When heterogeneous applications are integrated, and way beyond DLL hell scenarios, resource management, process management, communication management, and configuration management are all aspects of utility computing models. A scenario where many different applications run together, and may or may not be isolated by time or resource controls, is a very difficult nut to crack. Time is often referenced as CPU loading, and resources are often referenced as Memory, Network, and/or Disk IO; now where do these elements come into play? Virtualization of course deals with the same basic issues, hiding them as part of its isolation. True utility computing is the long-term future of virtualization, and it makes operating system publishers, like Microsoft, freak. This is because the total number of operating system instances running is significantly reduced with true application instancing and advanced resource and time control. The maturity of management tools for utility computing on Windows is horrible, whereas Solaris, with its Zones implementation, is powerful. Apple Computer had the basic concept in its core Macintosh operating system around 1989. Never mind the fact that quad-cores beg for application instancing and heterogeneous application integration. Why pay for 4 or 8 quad-core packages, and then run 16, 32, or even 48, horrors, 64, instances of various operating systems when you don’t have to do so?
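The resource controls described above do exist at the single-process level on Unix systems today; what utility computing needs is the same idea applied consistently across whole application groups, which is what Solaris Zones provides. A sketch of the per-process version using POSIX rlimits (Unix-only; the 1 GiB figure is an arbitrary example):

```python
import resource

# Cap this process's address space at 1 GiB. This is a hard cap:
# allocations beyond it fail outright instead of degrading neighbors.
limit = 1024 ** 3
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (limit, hard))

new_soft, _ = resource.getrlimit(resource.RLIMIT_AS)
print(f"address-space cap: {new_soft // (1024 * 1024)} MiB")

try:
    hog = bytearray(2 * 1024 ** 3)  # try to grab 2 GiB under a 1 GiB cap
except MemoryError:
    print("allocation denied by the cap")
```

The gap the paragraph complains about is that nothing stock on Windows at the time tied controls like this into a coherent, group-level utility computing model.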

Application frameworks often integrate better controls; Microsoft SQL Server, IIS, Oracle, and various other similar instancing models developed resource management within themselves, but the operating system was ignored. This is where the dominant operating system support gap comes into focus. Anyone that has tried to use Windows System Resource Manager knows exactly what I am saying; specifically, soft caps for resource control just do not work. It is quite a shame that Microsoft has not done more with proper application isolation at the operating system level. This is really a maturity issue; of course the methods and tools will improve, but never to the point that they impact operating system sales. Better solutions must come from publishers that do not publish operating systems. The same basic issue exists with storage vendors, in that as they improve thin-disk or imaging methods they actually reduce the total number of storage frames they sell; it is just not going to happen the way we all would like to see it. I am sure some will disagree, fine, but until I see Microsoft really create an application virtualization model that does anything beyond the basics, I will keep my current opinion.
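The soft-cap complaint is worth making concrete. A soft cap is only enforced when the resource is contended, so a runaway consumer can overshoot freely outside that window, while a hard cap clamps at allocation time. A toy model of the distinction, with made-up numbers and nothing resembling WSRM's actual algorithm:

```python
def allocate(requests: dict, cap: int, soft: bool = True, contended: bool = False) -> dict:
    """Grant CPU-share requests under a per-application cap.

    Soft cap: the cap is ignored unless the system is contended.
    Hard cap: the cap always applies, regardless of contention.
    """
    if soft and not contended:
        return dict(requests)  # nothing enforced: runaway apps overshoot freely
    return {app: min(want, cap) for app, want in requests.items()}

requests = {"runaway": 90, "batch": 20, "web": 15}
print(allocate(requests, cap=33, soft=True))   # runaway keeps its 90
print(allocate(requests, cap=33, soft=False))  # runaway clamped to 33
```

Because enforcement is reactive, the soft-cap scheme only acts after the damage is under way, which is exactly why soft caps feel like they "just do not work" for real isolation.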

I am sure many others will think of additional rationales, but those noted above are the ones most often encountered in my personal experience. Bottom line: every short-cut that coding-to-market drives wastes real money in total cost of ownership, whereas coding for the market reduces total cost of ownership. Virtualization based on operating system isolation, which is another way of saying hypervisor based virtualization, protects bad coding methods and techniques, shortened testing periods, junk quality assurance efforts, etc., which are core to the entire coding-to-market model. The guy that invented single-instance-concept unit testing? He should be whipped with a bamboo cane; it was an excuse that code shop management was just itching for! So much of the code we have today is poor-quality junk, and until this trend of dumping junk into the market is reversed, utility computing will never see the light of virtual day. Hypervisor based virtualization, with its inherent operating system and application isolation, is the only protection we have.

March 17th, 2008
