Archive for May, 2008

LifeCycle Wars? Gotta Be Kidding… More Important Issues Remain To Be Resolved!

Virtualization, Fine, Well Sort of? – Chapter 14

Gartner commented that everything will change in 2008 in reference to virtualization? Well, duh. Guess the old crystal ball needs new batteries? But to be honest and fair, Gartner was right in concept, but not in subject. Everything is changing in virtualization; it has every single year for the last five (5) or so years running. Virtualization and the associated tools, methods, and platforms have been, in my humble opinion, the most dynamic aspect of the information technology sector, no? Fine, dual-cores and quad-cores were expected, come on! Of course, what has changed and how it has changed is also a matter of debate, exhibited by various perspectives echoing across cyberspace. Including my views, which, having been quite outspoken, well, duh again, are no secret.

VMware, for example, has constantly improved the enterprise hypervisor-based platform, cough 3i, cough cough ESXi, product to a significant degree, but this has been completely predictable. It has added more capacity per virtual instance, made shared storage a core requirement a la VMotion if not Storage VMotion, failed to develop enterprise-scalable archival options, and taken almost forever to improve its management application suite. Yes, VirtualCenter has gotten new features and add-ins, but VirtualCenter as a framework, well, is still horrible. If you don’t believe me? Try using it at its quoted scaling capability. Better yet, try using it at half its quoted scaling recommendation. It does not do well. Even VMware acknowledges this. Not to mention inconsistent 64-bit support for the client? Which has been fixed! But only after a lot of heated customer feedback. This is not to say that VirtualCenter has not been improved, it has, but it, like ESXi, is still not enterprise class… yet. When it is a true enterprise-class solution… watch out! It could be a competition killer, cough, Microsoft, you listening? Did someone say System Center Virtual Machine Manager (SCVMM)? Again, I have commented on this before, so why do I mention it now? Well, because of VMware LifeCycle Manager of course! What? Did I lose anyone?

Yes, LifeCycle products are all the new rage, and if you widen the concept slightly, you get an entire range of related products beyond VMware LifeCycle Manager. BladeLogic Operations Manager, IBM TPM, Opsware, and even VMware Lab Manager is/was a LifeCycle product for a focused niche. The list goes on: ConfigureSoft Enterprise Configuration Manager, Encore, Vizioncore vCharter, and ToutVirtual, which has had an entire framework devoted to the LifeCycle concept for years, and I am sure I am forgetting a few others. But the list is sufficient for the point: all of these solutions support the basic features or concepts covering the total life cycle of virtualization containers, in our biased view, virtual instances of course, right? Let’s itemize the feature set at a high level a bit. Some key features for any life cycle product:

  • Provisioning (Charge-Forward?)
  • Retirement
  • Configuration Management
  • Security Management
  • Patching
  • Asset Management (Charge-Back?)
  • Physical-To-Virtual, Well Ok

Provisioning and retirement are obvious, no? The automated creation and destruction of instances makes sense. Once an instance exists it must be maintained, so functional configuration, security remediation, and patch management (fixing bugs, not just closing holes) all fall into place. Now, let’s see, what is left? Oh, yes, asset management, which, beyond inventory and capacity planning, tracking, and trending, brings us to, wonder of wonders, the charge-forward and charge-back model! Well, blah, blah, duh, once more. There are quite a few CPAs I know who really do hate charge models, not because the concept is not sound, but because establishing and maintaining such models is a pain, can you see blood? I even know one individual who equated the capitalization and depreciation costing hang-ups with virtual instances as… and I quote… spreading the peanut butter. Of course I asked, smooth or chunky… the reply was… and I quote again… I don’t know, super chunky? Quite a few big-scale clients of virtualization have even developed their own in-house LifeCycle solutions. Why? Because enterprise-class solutions took more than two (2) years to get to market in 2004 and/or 2005, and some products were/are incomplete, nothing but band-aiding VMware ESX as a Linux distribution into the same old gadgets, very un-cool. The one exception would be physical-to-virtual (P2V) solutions, which have remained tight and focused on modest improvement objectives rather than chasing feature-set expansion as the only valued goal.
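For what it is worth, here is a minimal sketch, in Python, of the charge-back math those CPAs dread. The rates, owners, and instance sizes are invented purely for illustration, and a real model would also have to fold in the capitalization and depreciation hang-ups mentioned above, smooth or chunky.

    # Illustrative charge-back sketch: rates and instances are invented for the example,
    # not taken from any real LifeCycle product.
    MONTHLY_RATES = {"vcpu": 25.00, "ram_gb": 10.00, "disk_gb": 0.50}  # hypothetical $/month

    instances = [
        {"name": "web01", "owner": "Marketing", "vcpu": 2, "ram_gb": 4, "disk_gb": 40},
        {"name": "db01",  "owner": "Finance",   "vcpu": 4, "ram_gb": 8, "disk_gb": 200},
    ]

    def monthly_charge(inst):
        """Charge the owner for the resources an instance has allocated (charge-back)."""
        return sum(inst[res] * rate for res, rate in MONTHLY_RATES.items())

    for inst in instances:
        print(f"{inst['owner']:<10} {inst['name']:<6} ${monthly_charge(inst):8.2f}/month")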

The sad reality is that with all the rush to deliver LifeCycle solutions, VMware included, if debatably late, the entire virtualization industry has not addressed some key issues. Many of these have been discussed in this same blog in the past, and in quite a few other blogs. Including, but not limited to… we do not, and this is true of just about every virtualization-toting publisher, have enterprise-class management tools from the market leaders; we do not have extensive options for archival, thin-disking, or disk-instancing (read-only shared disk images) from many of the major storage vendors; and the list goes on. Heck, VMware has even removed features over time rather than really improve them? The entire virtualization scope of the information technology industry is still developing new solutions, not improving existing solutions. No one should be surprised; Gartner said they should make new products and add tons of new features in order to survive, cough, Hypervisor Wars. So I guess everyone that develops hypervisors reads Gartner? Can’t count the number of times I have heard customers say… improve what you have before you add new features. Well, for the last time, duh. Seems like common sense, no? Not to sales and marketing gurus? Well, they read Gartner as well, right?

We have more clients, every day, going from 10s of virtual instances to 100s, or from 100s to 1000s, and to be sure, more than a few going from 1000s to 10,000s in this calendar year, no? And what will they need? Oh, let’s see… very stable hypervisors, check. Fast and consistent management tools that work at significant scale, ah, not quite. Archival solutions that scale, well, de-duplication is an initial step for that, so check, with a question mark. And for the 800-pound gorilla, utility computing leveraging application instancing, nope, this one is still not reality, well, except for Solaris Zones, or in the context of grid computing. Now, for crying out loud, is that not what we asked for in 2004, 2005, and 2006? And to think, Gartner told us in 2008 everything is going to change? Well, has it? Duh, oops, sorry, said it again. Honestly, I don’t see that virtualization has changed much at all; we still have the same basic problems we had years ago.
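Since de-duplication keeps coming up as the first step toward archival that scales, here is a minimal sketch of the idea at the block level. The fixed 4 KB chunk size and the use of SHA-1 are assumptions for illustration only; no particular storage vendor’s implementation is being described.

    import hashlib

    CHUNK_SIZE = 4096  # assumed fixed block size; real products often use variable-length chunks

    def dedup_store(image_paths):
        """Store each unique block once, keyed by its hash, with a per-image block map."""
        blocks, images = {}, {}
        for path in image_paths:
            refs = []
            with open(path, "rb") as f:
                while chunk := f.read(CHUNK_SIZE):
                    digest = hashlib.sha1(chunk).hexdigest()
                    blocks.setdefault(digest, chunk)  # identical blocks across images stored once
                    refs.append(digest)
            images[path] = refs
        return blocks, images

    # Usage against two hypothetical disk images:
    # blocks, images = dedup_store(["vm01-flat.vmdk", "vm02-flat.vmdk"])
    # print(len(blocks), "unique blocks vs.", sum(len(r) for r in images.values()), "total")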

Oh, Gartner, the new iBall (say Eye-Ball) still is not very accurate? But, remember, it has new features! To be fair, the newest iBall is an isolinear integrated, carbon-based globe; it glows in 16 million colors, at infinity-minus-one resolution! No more heavy lead crystal from Baccarat? Going to miss that, to be sure. No realistically improved functionality, cough, accuracy? Why am I not surprised, duh, oops, I said it again. It appears all the leading crystal ball makers read Gartner as well, and they have developed and retired three (3) generations of the popular predictive orbs, each supported for only one (1) calendar year, which is a shame. Purchase of soothsaying devices has to be a significant capitalization for organizations like Gartner, right? So where do we ship the batteries? Yes, batteries. Reading the datasheet, the latest iBall uses traditional chemical cells? Still? But I just wonder about one more thing… Like the new smart-cars, does the iBall get even one (1) additional hour of predictive premonitions for the same size chemical cells? How environmentally green is the newest iBall anyway?

May 28th, 2008

Do Your Virtual Machines Eat Pizza?

Know What Virtualization Is, But What Is Next? – Chapter 13

Of course your virtual machines do not eat pizza? Well, mine do. I know, I am sure, some of you believe that I hang around the cold vaults, sniffing the ozone that comes off transformers, the power-supply type, not the robotic type. There are a few individuals at VMware, Microsoft, IBM, Dell, HP, etc., who are certain that I have an intellectual challenge due to secret consumption of Halon. In their collective opinions, I have limited brain capacity. It is a fact, I have been told to my face that I am crazy, that my views are embryonic, and that I just don’t know the facts about the state of virtualization in the industry today. For all those that have believed me insane, the title of this article may, again from their collective viewpoints, prove the point, how asinine my perception of virtualization is. Well, so be it. Maybe my professional frustration has, at least, damaged my logical diagnostic aptitude.

But, regardless of reality, perspective, or fact, my virtual machines eat pizza. Or to be explicit, my virtual instances eat resource pizza pie. Of course they do. Disk, network, memory IO, and CPU cycles, everyone in virtualization, be it operating-system-isolation based, application-instance based, or something in the middle, knows this, pardon the cliché, holds these truths to be self-evident, and obvious. The entire rationale for virtualization is better consumption of resources? Over-subscription, which I am not a fan of in most cases, if not accurate assumption of the resources available. In the context of the title for today? Making sure every slice of pizza is eaten, every bite is enjoyed, that my clients are arguing about the last piece of pizza in the box, no pun intended. Rather than saying… go ahead, you can have the last slice, because if they are saying that? I did not do my homework right, because in virtualization there should never, never, ever, ever be a virtual instance, cough, client, which does not leave the table just a bit hungry. Yes, hungry, everyone has to be just a bit hungry. Why? When that is the case, I have achieved three-nines utilization, no?
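To put a number on leaving the table just a bit hungry, here is a minimal sketch of the check involved, with invented figures: add up what the instances on a host consume against what the host offers, and flag any host that falls well short of the target.

    # Invented host capacity, target, and instance figures, purely for illustration.
    HOST_CAPACITY = {"cpu_ghz": 32.0, "ram_gb": 64.0}
    TARGET = 0.90  # keep hosts busy; "three nines" (0.999) is the tongue-in-cheek ideal

    instances = [
        {"name": "app01", "cpu_ghz": 6.0,  "ram_gb": 12.0},
        {"name": "app02", "cpu_ghz": 8.0,  "ram_gb": 16.0},
        {"name": "db01",  "cpu_ghz": 10.0, "ram_gb": 24.0},
    ]

    for resource, capacity in HOST_CAPACITY.items():
        ratio = sum(i[resource] for i in instances) / capacity
        status = "hungry enough" if ratio >= TARGET else "slices left in the box"
        print(f"{resource}: {ratio:.0%} consumed -> {status}")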

Wait, hold up, and stop, what the heck do leaving the table hungry, pizza, and whatever you are saying have to do with virtualization? Well, it is obvious, right? Predictive modeling! What? You did not see that coming? Odd, I would have thought everyone would have. Yes, predictive modeling. If the dirty secret of operating system isolation in virtualization is poor, or even worse, bad code, then the stupidity of virtualization is the lack of extensive and exhaustive use of predictive modeling. This is not to say that this issue is not understood; it is, at least to some degree, by the great minds of technology on some high plane of cognitive insight. Predictive Analytics (PA) is already well established in some areas of endeavor. Every one of us has gained benefit from PA, every time we surf to a website and said web site suggests a different but related product? Amazon and Netflix leverage the hell out of the concept. The computing industry knows the power of PA; ComputerWorld commented on it in 2006, and CIO commented on the same concept in 2004. So I ask you, why has PA not turned virtualization on its head? We understand how to make pizza, cough, implement virtualization. But we don’t really know how to make virtualization resource utilization better in a predictive manner? If you know just how many slices of pizza you need to make, you know just how much infrastructure you really need to pre-provision, right? Sorry, I just twisted the metaphoric linear relationship. But I believe you see the point, no? True, there have been attempts to address the issue, BMC Software in 2005, and SearchDataCenter scratched the surface in 2007, referencing a few more players in the field of PA application to virtualization. However, I have yet to see PA change virtualization. Why is that? Microsoft System Center Virtual Machine Manager (SCVMM) and VirtualCenter (VC) do not have PA integrated? True, VC has HA and DRS, which, on the QT (Quick Tip), make recommendations on host-to-virtual-instance alignment, but that is historical trending based on existing infrastructure, not PA against pre-provisioning. PA should happen at initial P2V candidacy, as a what-if analysis before an entire virtualization infrastructure is evaluated, not just as virtual instances are introduced to established virtual clusters, hosts and/or sites.
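To make "know how many slices before you order" concrete, here is a minimal sketch of what PA at initial P2V candidacy could look like, using invented sample data: fit a simple growth trend to each physical candidate’s historical peak CPU, project it forward, and size the target hosts before any virtualization infrastructure exists. A real PA engine would use far richer inputs and models; this only illustrates the what-if step.

    # Invented monthly peak CPU (GHz) for physical P2V candidates.
    candidates = {
        "mail01": [2.1, 2.3, 2.2, 2.6, 2.8, 3.0],
        "erp01":  [5.0, 5.4, 5.9, 6.1, 6.8, 7.2],
        "file01": [1.0, 1.1, 1.0, 1.2, 1.3, 1.3],
    }
    HOST_CPU_GHZ = 24.0  # assumed capacity of one target host
    HEADROOM = 0.85      # never plan to fill a host past 85%

    def project(samples, months_ahead=12):
        """Least-squares linear trend, evaluated months_ahead past the last sample."""
        n = len(samples)
        x_mean, y_mean = (n - 1) / 2, sum(samples) / n
        slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples)) / \
                sum((x - x_mean) ** 2 for x in range(n))
        return y_mean + slope * ((n - 1 + months_ahead) - x_mean)

    projected = sum(project(s) for s in candidates.values())
    hosts = -(-projected // (HOST_CPU_GHZ * HEADROOM))  # ceiling division
    print(f"Projected peak demand: {projected:.1f} GHz -> pre-provision {int(hosts)} host(s)")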

So how do we kick-start PA use? By way of example, I realize PA use in virtualization is not trivial. So here is a suggestion. Every hardware manufacturer, to receive inclusion on any virtualization vendor’s respective HCL, must execute and maintain baseline performance data. This is already done, no? Sales teams need this or have this to some degree? It just needs to be standardized or normalized across all vendors. Every software publisher should do the same: take the baseline configuration of the systems via VMmark, Sandrasoft, or whatever resource analysis tool was used, and run their application in the same context. No, not every variant, but say high-, medium-, and low-scale host platforms as published by the hardware vendors. So every hardware vendor does three basic models and every software publisher does the three (3) largest hardware vendors, say HP, Dell, IBM. This is where the strategic alliances should make things happen, yes? I do not see nine (9) unit-test scenarios for each software publisher as difficult or abusive. Share the data on the web, in a standard format that can then be imported, and establish a baseline for PA for virtualization frameworks like VC and SCVMM, no? This datastore would grow and develop like an open source initiative?
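To be clear, no such standard format exists today; the record layout below is entirely invented, just to show how small each shared baseline entry could be. Nine of these records per software publisher, three hardware scales across HP, Dell, and IBM, would seed the datastore.

    import json

    # Hypothetical record layout for the shared baseline datastore suggested above.
    # Every field name and value here is invented for illustration.
    baseline_record = {
        "hardware_vendor": "HP",        # one of the three largest: HP, Dell, IBM
        "platform_scale": "medium",     # high | medium | low, as published by the vendor
        "benchmark_tool": "VMmark",     # or Sandrasoft, or whatever tool was actually used
        "software_publisher": "ExampleSoft",
        "application": "ExampleERP 4.2",
        "results": {
            "cpu_ghz_peak": 9.7,
            "memory_gb_peak": 14.2,
            "disk_iops_peak": 2300,
            "network_mbps_peak": 180,
        },
    }

    print(json.dumps(baseline_record, indent=2))  # shareable on the web, importable by VC or SCVMM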

Every single PA-integrated tool could mine this data to establish better modeling for virtualization! Aligned Strategy believes there is real money to be made in this scope. Does it not seem like common sense? PA needs a traditional hardware trending model to leverage for virtual-instance what-if analysis, right? Is not the greenest thing we can do for ourselves to figure out just how many slices of pizza our virtual instances need before we order the pizza? Dang it… while I have been trying to convince everyone to change the world of virtualization… some rotten so-and-so snagged the last slice of pepperoni! I wish I could have predicted that… well, that is the point of the article, after all.

May 14th, 2008