
VMware API Performance AWOL

Virtualization Critical Evaluation – Chapter 13

Now, having been a developer off and on in my information technology career, I have come across both good and bad IDEs, good and bad SDKs, etc., and I have seen more CLIs than most. Of course logical structure, ease of use, consistent design, and performance are important, and when they are lacking, developers tend to be rather vocal and critical. To be honest, developers can be downright nasty and brutal when something is subpar.
Of course the VMware API/SDK has its fans and its enemies. Using the vSphere API is a love and hate relationship for me as well, and using the PowerShell CLI or the Remote Perl CLI for vSphere feeds both the positive and the negative perception. Nor is my view a minority one among those I have queried on this subject. Every developer I have discussed the situation with who has any significant experience with the VMware API/SDK states, with routine if not eerie consistency, that the VMware API is, well, just odd. Yes, the term odd comes up often. Other comments are made as well: difficult, confusing, complex, horrible, powerful but crippled.

I would not call the VMware API or SDK the hardest I have ever used, but it is by far one of the most frustrating APIs I have experienced. I find that when using calls in the VMware API, it is often clear that something is missing in the logical design, that it takes multiple iterations of API calls to get the most obvious features functional, if not to gain access to basic information. Layers upon layers, objects upon objects. This raises a question I have heard voiced many times by others and have asked myself, something to the effect of… hey, just who is talking to the developers in the greater VMware developer community? Is the message not getting to the right people in VMware?
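To make the "layers upon layers" complaint concrete, here is a minimal sketch of what pulling something as basic as per-VM CPU usage looks like against the vSphere object model. It uses the open-source pyVmomi Python bindings purely as an illustration, not the Perl or PowerShell toolkits discussed above, and the host name and credentials are placeholders; the point is the traversal itself: service instance, then content, then a view manager, then a container view, and only then the nested summary objects.

```python
# Illustrative sketch only, assuming the pyVmomi bindings; host, user, and
# password are placeholders. Note how many layers sit between "connect" and
# a simple per-VM CPU number.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only shortcut: skip cert validation
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()                       # layer 1: service content
    view = content.viewManager.CreateContainerView(      # layer 2: view manager
        content.rootFolder, [vim.VirtualMachine], True)  # layer 3: container view
    for vm in view.view:                                 # layer 4: managed objects
        stats = vm.summary.quickStats                    # layer 5: nested summaries
        print(vm.summary.config.name, stats.overallCpuUsage, "MHz")
finally:
    Disconnect(si)
```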

For the sake of discussion, granted that API/SDK design evaluation can be subjective unless compared to other APIs/SDKs, the single most important design factor for the vSphere API should have been performance. If it was, it did not translate well into the CLIs, nor does it appear to when dealing with vCenter. VMware has trouble with this as well, the performance lost in translation that is, and acknowledges it to an extent, if indirectly. If you don’t believe me, look at the knowledge base article at the following URL:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003708.

Of key interest to me is the quote… The KB is currently in review. Please check back later for updated information. Thank you for your patience. What impression should result from this statement? The page states that KB article #1003708 was updated August 14, 2009, and it is now almost six months later. I really hope this was an oversight, given that the title of the article is Performance Improvements with Virtual Infrastructure API 2.5. Yes, of course this is 2.5 and not 4.0, but I have yet to find a similar document specific to the vSphere API. If it does exist, I would love for someone to show it to me, post the URL, something. I cannot believe that VMware is silent on API performance; that would just be, to be explicit… wrong, if not bad form.

Why am I concerned with VMware API performance? Because I am concerned about vCenter scaling, and have been for some time; it is not improving in any realistic sense from my perspective. Scaling is only as good as the performance, and to be blunt and frank, vCenter, since version 1.3, just has not been a performer. Given the feedback I have received, VirtualCenter a.k.a. vCenter has been dogged by performance issues since 1.3, both from an automation perspective, via the API/SDK, and in a GUI context. This is fact, not fiction. Moreover, as scale increases, regardless of the hardware thrown at the situation, vCenter has issues that will cripple its future benefit with each successive development and release of the vCenter component of the VMware virtual infrastructure, unless something radical changes.

The vast expanse of vCenter enhancement products, solutions, and frameworks does little to help the situation. In fact, these various add-ons further aggravate it. The solution, of course, is to change the architecture of vCenter, and at least consider changing the operating system platforms that support vCenter; be it Windows or Linux, each operating system carries scaling constraints rooted in its foundational design. Such limitations bind vCenter, in its current iteration, to a finite scale and scope that is enterprise in name only, never mind cloud-oriented scale. Some have suggested that a Linux-based vCenter server is one possible solution, or that the current vCenter architecture is under review. Well, these options need serious consideration; no, these options need to be explored and implemented. Enterprise customers, and cloud customers even more so, are not thrilled with vCenter and its quirky performance and inconsistent behavior at advertised, and thus approved/supported, greater scale.

Is it not ironic that the greatest strength of the vSphere environment, vCenter, is also, with considerable credibility from a rather diverse group of developers and users, its weakest link? Consider that the ultimate virtualization survivor will be the one that scales at enterprise scale at a minimum, and at realistic cloud scale, where hundreds of hosts under extensive load, with thousands of virtual instances doing real, stress-level work, perform better than expected. Cloud computing demands this, requires this. Unfortunate but true for VMware: the arguable industry leader appears to be struggling with these goals. HA and DRS needed real work to operate at any serious cloud scale, and the API/SDK is showing its limitations when driven as an automation appliance. It is clear vCenter cannot handle the multiple entities, or even the multiple sessions, demanded via the MOB, SOAP, etc., in a cloud computing context. So, the question is, will this be the key issue that allows KVM, RHEV, Oracle VM, or Hyper-V to nail VMware against the wall? The wall of obsolescence? VMware has painted a rather large target on itself, and sooner rather than later some competitor is going to score a hit dead center on it. Virtualization is moving to a Wal-Mart-style, or volume-based, model. PODs, clouds, etc. all require this, and the actual design of vCenter, and the associated API/SDK, at least from my perspective, is still a boutique-style solution: it does not perform great, it is not as easy to use as it should be, and VMware keeps adding features without redesigning the foundation.

Well, only time will tell. DeltaCloud and similar solutions from other providers are looking down their collective sights at that large painted target on VMware, and their respective ability to aim is improving all the time. I would not be surprised if someone has a replacement for vCenter in development, if not nearing release, such that it drives ESXi better than VMware does. What a radical idea. Would that shake the virtualization world? That someone other than VMware has unchained the potential of the VMware API, such that their solution outperforms vCenter? Wow, that would make quite a few VMware customers ecstatic! Talk about a state of bliss.

Add comment March 15th, 2010 Schorschi

Why Has Virtualization Not Encouraged Piracy?

Is it a lack of interest, or technical limitations?

Like many of you that surf the web, I find that there is no way to avoid three things on the internet. The first obvious thing is sexual content; it is just everywhere. Search for anything, the most innocent subject in the world, and someone has associated it with sex. Virtualization included; Virtual Girl, for example, is just one variant of the theme, if you care to notice it. The second thing you cannot avoid is advertising of some type, and often sex is what is being advertised, so they go hand in hand, ah, bad association, sorry. The remaining item is some type of notification that what is on screen is owned by someone, somehow, somewhere. Trademarks, copyright symbols, etc. abound. No matter how obvious it is that a given web site owns something, it is still stamped or branded beyond reason, from straightforward text to embedded, encrypted patterns that the human eye cannot process. The human need to own something that does not really exist is significant, just as the internet’s willingness to ignore legal ownership through piracy abounds.

Piracy on the internet is significant, or at least the offer to benefit from piracy is almost everywhere. This is not an argument about creative rights versus free access, so if that is your expectation, stop reading now. Piracy is stealing. Nor is this blog entry about sex, advertising, or sex and advertising as such, or even the greater problem of piracy, but rather about the observation that there is a significant lack of piracy within the virtualization space compared to the other forms of digital theft that take place now on the internet. It is an interesting question to ruminate on…

Virtual machines are transportable, compact if using thin-disking or thin-partitioning; they would seem to be ripe for the picking! With virtual machine players, such as VMware Player, one would think that piracy of appliances would be off the charts. Why do we not see some shady character on the street corner hawking appliances with the same frequency of replication and distribution as DVDs? Why do we not see various Warez or Appz sites stating over and over… Hey, don’t just pirate a game, pirate the entire system that runs the game!? Maybe we need a VM player that is just as easy to use as a DVD player? No? Are there some unique barriers to appliance pirates?

With the growth of VDI, and the enhancement of virtual machines to support more processor power, more memory, and even better graphics processor emulation, one would think that at the very least video game duplication would be threatened by virtual machines running functional video games, right? Well, maybe, but the personal-computer gaming market has been hit hard by other factors, including the dedicated console market. Of course PC-based gaming suffers from piracy like crazy. But is the classic console alone a sufficient barrier? Not to virtualization that I can see!

Digging a bit deeper, maybe it is the virtualization technology itself that is slowing the piracy of virtual appliances? Virtualization was not cheap and easy, no matter what anyone says, for the first five or so years it was gaining momentum in enterprise datacenters. There are pirated versions of VMware Workstation out there, but with VMware Player being free, why bother. There are video game console emulators that are gaining popularity, as coin-op emulators such as MAME have done, and some emulators have never seen commercial reality or public distribution, such as the fabled and rumored Connectix PS2 emulator that Sony may have purchased before many people ever saw it. Oh, wait, that was pirated for a short while, was it not? Oops!

True or not, PS2 or even PS3 emulation in a virtual appliance has yet to appear and take the world by storm. The same could be said of the Xbox or Xbox 360; what emulators exist just have not taken off with millions of copies downloaded. Is it because the lawyers are that good? That the legal system is just that good? This may change soon. There is just too much interest in making it happen, and I sense it is too tempting a target now that typical hardware is sufficient to handle it. Moreover, I suspect the Wii will be the last console to be hacked and emulated. Why? Face it, 14-year-old male hackers are not into the typical Wii game. Regardless of the marketing and advertising, 14-year-old males are into sex, not pretty fluffy cartoon characters, well, not unless they have boobs like the wife of Roger Rabbit! And don’t even tempt me to discuss biometric mechanical devices, which could be connected to a 3-dimensional Wii feedback-enabled controller; just the suggestion of that might give people the wrong idea. Never mind the movie Surrogates, or the various articles in the late 1950s that suggested the theme for the movie Surrogates. As soon as the interface is sufficient, it will happen.

Maybe the real issue has been the slow progress of computational power and graphic capacity over the last four or so years; for example, VDI supporting multiple display screens was slow to materialize, and mobile devices adapting to or leveraging virtualization need more visual real estate, so we need virtual monitors, or 3D projectors, to come up in quality and capability. First it was memory and processor power, then graphics adapter emulation, but those issues are for the most part history. Even good old KVM can do a fair job of playing a video game in a virtual instance if you work at it a bit, so the dedicated console space should not feel all that safe.

So the technical limitations of the past are gone, other than the virtual real estate issue. The typical desktop is now approaching quad-core as the default processor, and VT-x, VT-d, chipset support, etc. are there just waiting to be leveraged. I ask again, why has virtualization piracy not taken off, ballooning into a monster similar to BitTorrent, eDonkey, or other P2P solutions? Never mind F2F and darknet-style popularity for piracy of appliances. Only one barrier remains, that of interest! Yes, I said interest.
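As a small aside on those extensions, the sketch below is a Linux-only, assumption-laden illustration of how one might check whether a desktop CPU even advertises VT-x or AMD-V; the flag being present still says nothing about whether the BIOS has it enabled.

```python
# Minimal Linux-only sketch: look for the vmx (Intel VT-x) or svm (AMD-V)
# CPU flags in /proc/cpuinfo. Presence of the flag does not guarantee the
# feature is enabled in the BIOS/firmware.
def hardware_virt_flags(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"vmx": "vmx" in flags, "svm": "svm" in flags}

if __name__ == "__main__":
    caps = hardware_virt_flags()
    if caps["vmx"] or caps["svm"]:
        print("Hardware virtualization extensions advertised:", caps)
    else:
        print("No vmx/svm flags reported; check the BIOS or the CPU model.")
```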

Case in point on how interest, not technology, drives piracy: the last Star Trek movie was the most pirated movie in history, according to various BitTorrent subculture tracking columns, blogs, and experts, and only because years ago someone thought it would be interesting to create BitTorrent. Moreover, Avatar, and I am not talking about the kid that is the Last Airbender, but the Avatar movie set on Pandora that is breaking all the theater attendance records, a movie with unreal computational scope and depth achieved to establish eye candy almost beyond belief, will I suspect be the most pirated film ever. Even the 3D variant of the film will be pirated beyond rational expectation. Why? And I am sorry, Hollywood moguls, but it will be pirated in very high quality video and audio, not by some half-assed camcorder method where the sound is bad or, worse, out of sync, but by sophisticated professionals generating a superior-quality result, as has been done with many other films, DVD releases, etc. But I digress; why does this piracy happen? Because it is popular, because it is interesting, and self-interest is a very powerful motive. Or has everyone forgotten their basic microeconomics course?

Virtualization-based appliances are no different. Even embedded system apps, such as those on the iPhone, are at risk for extensive future piracy. As the greater demand goes up, so will the interest go up for pirates to do their thing, that pesky motive of self-interest! Has history not proven this out, over and over? Pirates always pirate things that are interesting, and if the piracy is profitable at some point? Well, that is just more incentive to rip off the man, right?

Add comment February 6th, 2010 Schorschi

Death of a Datacenter, Well Sort Of

What kind of year was 2009, anyways?

Death of a datacenter, well, sort of; maybe the space, in a realistic assessment, is more of an oversized server room than a datacenter in the proper sense. I realize I need to explain this. Picture this… an office in the darkest corner of a large building, where one entire wall of the office is glass, looking out into a large file server area. A server site that supported thousands of people for almost 15 years, but now this server room is empty, just a bunch of holes in the raised floor that stretches before you. You remember laying down many of those raised floor tiles, and the server racks over them. The growth of the site, as rows of racks grew over time. You remember the technology growing more powerful, the scale of capacity growing. But about 6 years ago, just as this file server room had been online for 8 years, 8 of the 18 years of a career in IT, virtualization appeared.

Little by little, the number of physical servers declines here and there, but the total volume of computational capacity increases, so the few empty racks, once full, don’t stand out. After all, there is work to be done. Warp speed. The minor changes in the number of staff, the changes in how the environment is managed, the staff reductions in the on-site team are not that obvious. The newest technology, even with virtualization, has not changed the support model in any extreme or overt manner. Like a ship in dry dock, refits and upgrades of all types and sizes seem endless and ever more complex.

Computing is changing, and the computing sites are changing. Remote device control, HP iLO, Dell RAC, IBM RSA, etc., blades, and other forms of topless or headless systems are growing alongside virtualization, so now the number of cabinets that are empty is obvious and maybe just a bit unnerving. The staffing level has declined, the roles have changed, and the command-center thousands of miles away, or the command-center in our building, is controlling more sites and locations than ever before. Doing more with less is not just a concept or slogan any more.

The years have passed, and all of the above has changed the concept of dedicated file server sites, of datacenters, of the IT industry. You realize that, for the first time, the number of days you transport into the office is fewer than the number of days you work from a remote site or location each week. Moreover, management, the command at the top, is encouraging this behavior rather than blocking or debating its value. But the file server room is still there, darker, quieter than ever, but still doing its computational job. The systems are fewer of course, but they are so powerful, so significant in impact, yet in such a small form factor, it is almost science fiction. The upgrades and changes have not touched the heart and soul of the site; the form and function change, but not the purpose.

Now you stand in that same office again, facing the entire wall that is nothing but glass, and you see a large empty room before you. Has it been that long? How many worlds, virtual worlds, visited? You turn to look at the office and it is empty as well. Sure, there is still a desk, a chair, maybe a bookshelf unit. But the soul of the office is gone; the energy of the late nights, the early mornings, the echoes of laughter where practical jokes abounded are just ghosts of times past. None of the original team remains but you; the crew that saved your ass countless times is long gone. You remember how the kids, on bring-your-children-to-work day each year, would walk through the massive columns of computing power, listening to the hum of the fans, the vibration of the equipment, the click of the disks in the arrays, the soft beeps or the rare click of keys in the distance. If you close your eyes, and let your imagination expand just a bit, you could see yourself in the engine room of a starship, and the office was the engineering operational center of the ship. After all, the bridge was the command-center many floors above, right?

But the reality is the engine room of this ship is empty now; warp coils and power relays gone, the heating/cooling system, the magnetic containment system, the atmosphere controls and fire suppression system gone. The core is gone. The ship is decommissioned, silent, just like the bridge, ah, I mean the command-center that was taken offline some time ago. You take off your badge, cough, your communication link, and place it on the control console, which has nothing on it but a layer of dust, dead, lifeless. No one said this would last forever, no one expected it would, but somehow it is sad that it has come to the end. The perception is that the end of an era is upon you. Without thinking about it, you snap your heels together, straighten your back, and your arm seems to move on its own; before you can act to do otherwise, you salute the glass wall facing where the core once existed. You pause for a moment longer, remembering friends lost and long gone. You think the words… Warp Speed.

Reality comes back, so you straighten your uniform and pick up your travel pack. Time has passed faster than you realized; you notice you are running late. The sadness that dominated your thoughts a few moments before is gone as you rush out of the room; a new, unique aspect of your career is beginning. You hear the hiss of the pressure doors closing for the last time as you leave. You know you will never return to this ship again. The soul of the ship is now gone; she is a cold dead hull, nothing but structural elements, components, and resources to be recycled soon. You think to yourself that you need to hurry now, that the last working transporter is on deck 12, and if Turbolift 4 is already offline, it is going to be a long climb from the engineering deck to deck 12.

As you run down the passageway on deck 12, yelling to the decommissioning crew in the way to make a hole, approaching the lift, you notice the designation on the wall next to the lift says… Elevator 4. You blink twice, shaking your head, but it still says… Elevator 4. Just a building, and a file server room, after all? Not a ship? What was I thinking!

Add comment December 29th, 2009 Schorschi

Is virtualization entering a Dark Age?

Cloud Computing is Pushing Virtualization into the Shadows

Clouding is the rage; everyone that is anyone is riding, or would it be better to say floating in, the clouds? I prefer to call it cloud bursting. Clouding is fine, but if it is a solution looking for a problem, then it is a horrible time and resources pit to fall into. No, this article is not about the good or bad of clouding, but about the impact of clouding on virtualization and the resources that design and support virtualization, and about what is missing in virtualization to support clouding to a degree. Moreover, of all the commercials about clouding that flood the airwaves now, IBM has one that just makes my teeth grind. The IBM commercial says, if I heard it right… “…Cloud is Simple…” Well, nothing about clouding is simple. Pods are simple, from a hardware perspective, but the clouding software is not. Google has had what? Ten (10) or more years developing cloud-oriented logic from a software perspective!

Virtualization is now a commodity, just one more tool or component in a greater synergistic effort. But is it really? None of the complexity of the computing environment is reduced; it is increased. Just as virtualization made computing more complex, so does clouding, adding layers of complexity, dragging virtualization, provisioning, automation, reporting, and, of course, decision making together. Thus effective and efficient management and control are more important than ever. This is where things get dark, and everything fades to long grey shadows, and environment or datacenter architects run for the shadows, like roaches for cracks in the floor molding. Why? Because clouding is not easy, it is not uniform, it is not consistent. Cloud decisions are career changers… people resign, give up, or get nailed by cloud solutions that do not live up to the hype, and no cloud solution, no matter how valued or vaunted, lives up to all the hype. Think I am wrong? Dig a bit deeper; the evidence is there that cloud design and implementation is hard work that some just cannot handle, or that others took many years to get right at some effective level. Years? Oh man, that is a word that management hates with a passion, up there with corporate taxes.

Worse, the cloud terminology is horrible. Who came up with the term Service? As in, an application in a cloud is a Service? Talk about selecting the worst way to communicate a concept to the end-user population. Could it have been made any less informative? An application in a cloud is just that, an application! End-users understand applications, not generic terms like Service. Come on, call a stone a stone, call an application an application, for crying out loud. Better yet, call applications in a cloud… Solutions! End-users think in terms of problems and solutions.

The ugly aspect of clouding, beyond the human impact, is that the complexity of virtualization is not being acknowledged, and so the talent and resources once dedicated to good virtualization solutions are bled down to the minimum, either let go or fed to the cloud chewing machine. This is fact, not fiction. The push to have automation and autonomous systems manage a virtualization environment is a great goal, but is it a technical reality? Having reviewed more than eight significant management, control, and reporting applications this year, including offerings from Surgient, Hyper9, ManageIQ, CapacityIQ, and Liquidware Labs, as well as MOAB and Platform ISF, the word that comes to mind is… disappointment. Not because most of these solutions have no merit in their own right, they do, but because all of them lack something that is critical to cloud bursting, something some of us have been asking for as customers of virtualization for more than four (4) years, in my case closer to six (6) years: Predictive Analysis.

Predictive Analysis, in reference to virtualization, is the ability to run What If analysis scenarios against an existing environment. Look before leaping, in the context of virtualization models, which clouding must also address or deal with. For example, given ten (10) virtual machines, what happens when one (1) more is added to this situation? Saying, in simple terms, whether the result of such a proposed move is good, bad, or ugly. Now what is the impact of adding five (5) new virtual instances to an existing pool of 1,000? Predictive Analysis is applicable to every single customer of cloud computing, or bursting, large and small.
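To make the idea concrete, here is a toy sketch of such a What If check, and it is only a sketch under loudly stated assumptions: the capacity and demand numbers are hypothetical, and simple averages stand in for the contention, peak, and overcommit modeling a real predictive engine would need.

```python
# Toy What If sketch, not a real predictive engine. All numbers below are
# hypothetical; a serious tool would model peaks, contention, and overcommit
# rather than flat averages.
def what_if(pool_cpu_mhz, pool_mem_mb, vms, avg_vm_cpu_mhz, avg_vm_mem_mb, add_vms):
    cpu_util = (vms + add_vms) * avg_vm_cpu_mhz / pool_cpu_mhz
    mem_util = (vms + add_vms) * avg_vm_mem_mb / pool_mem_mb
    worst = max(cpu_util, mem_util)
    verdict = "good" if worst < 0.70 else ("bad" if worst < 0.90 else "ugly")
    return round(cpu_util, 2), round(mem_util, 2), verdict

# Example: ten existing virtual machines, what happens when one more is added?
print(what_if(pool_cpu_mhz=24000, pool_mem_mb=65536,
              vms=10, avg_vm_cpu_mhz=1500, avg_vm_mem_mb=4096, add_vms=1))
```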

Never before has predictive modeling been so needed, especially in clouding. So where are the hundreds of products that do predictive analysis? I would have thought that P2V would have pushed this gap into the light, out of the dark shadows, but that did not happen, in part because P2V is not popular in some environments. So now cloud computing should do it, forcing predictive engines into factual reality. Talk about missing an obvious opportunity to establish competitive advantage! So, as virtualization fades into the background, into a dark age, becoming more black-box than ever, do predictive analytical tools supporting clouding own the light? Not yet, not yet.

Add comment December 16th, 2009 Schorschi

Traveled Around the World, Traveled Through Time

Know What Virtualization Is, But What Is Next? – Chapter 17

A couple of days ago, I was in Cairo and the Valley of the Kings, in Egypt. I moved from Taxco, Mexico on to Hong Kong hours later, and last night as well as early this morning I was in Beirut, Lebanon. Last week I was in Bangkok, Thailand, and Penang, Malaysia. Never mind various locations in Canada and the United States. As I write this blog entry I have just left Tokyo, Japan. How is this possible? Well, I was going through several thousand 35mm slides my maternal grandparents took as they traveled around the world in 1966, and moving forward in time some 45 or so years until now, 2009. In 1966, which as it happens was just after my 1st birthday, the world was a different place; the pictures I have seen make this clear. This walk through time, around the world, is more than interesting, it is quite personal, because my maternal grandmother, now almost 92, is losing her short-term memory, and starting to lose some of her long-term memory. Extreme age is a curse at times, and not always a benefit. So now is the time to review the slides and make sure what my Grandmother remembers is not lost to the continuum of space and time.

At this point, I am sure someone is asking… and this has something to do with virtualization? It does. What will we remember of virtualization in 4 or more years? What will the future of computing look like to each generation beyond ours? In just the last 6 years, virtualization has become the driving force in computing. Hands down, cloud computing would be near impossible without virtualization. But, unlike Gartner, I do not have a crystal ball, and I don’t have extensive resources to research the trends, the patterns, or the unique indicators that twist and turn computing destiny like stellar matter on the event horizon of a black hole. What I do have is a bit of common sense and some basic knowledge of the information technology industry. There are a few key concepts that computing has continued to establish and re-establish over the last 40 years or so. Will these concepts hold true 40 years from now?

Time versus Space. This is a fact: computing still struggles with a way to resolve this conflict. Technology hides the issue, but the principle never changes. In computing this is memory versus disk, and even as disk as a metaphor changes, from mechanical to solid state, the issue does not. Well, operating system architecture may change and may eliminate the conflict. How? Imagine an operating system, complete and robust, that lives only in memory. ESXi stateless, with its direct memory load, is a step in this direction. No, solid-state disks are disqualified; the use of disk IO will not survive. Cell phone operating system designs using SIM cards are an incomplete but parallel concept; the SIM card is seen as memory to an extent, an extension of memory space. If the entire operating system lives only in memory, disk is a drag that can be eliminated, and not just the device, but the API that drives disks can be abandoned. Will the idea of a giant, huge, ever-present data-core exist in the future? I believe so.

Direct versus Indirect Interfaces. Keyboards and mice were and are indirect cybernetic interfaces. Which will apply to virtualization in 40 years? Ignoring the soul and the metaphysical aspects of human existence, what is the mind? A parallel computing system, sure, a database engine, short-term and long-term memory, caching, etc., yes to all, so improving the connections to these is a cornerstone now that will build future methods. Consider the cybernetic interfaces now in development for war veterans; there is no realistic wet interfacing to the mind, only variants of electro-chemically driven apparatus, all indirect methods. So cybernetic systems, at least today, are driven by proxy only, such as toes standing in for a hand, an arm, or fingers that can no longer be controlled by the mind via the direct interface to the nervous system that existed before injury or loss of limb. But in 40 years, direct interfaces will eclipse this limitation. One possible result of wet interfaces, or direct integration of virtual space with the human nervous system, may be to overcome cerebral palsy. What, did I lose some of you? Imagine the potential of such direct interfacing. Case in point, cerebral palsy is brain damage, where the voluntary motor system is unable to function as designed. In computing terms, cerebral palsy is corrupted firmware, because the memory location where it resides has failed; the mind cannot drive the peripherals, meaning the arms and legs, in a correct manner, and the result is all the side effects the condition creates: misaligned muscle strength, tendons too weak or too strong to work as designed, etc. However, a direct interface technology could create a virtual mirror of the damaged part of the brain, interface the rest of the functional brain to the new mirror, then flash the mirror; thus the brain, via external reference, can bypass the fault, cerebral palsy never establishes its impact, and normal muscle and tendon development takes place. Don’t stop with just a cure, no, a fix for cerebral palsy? What would 40 or more years allow us to achieve? Almost any memory-oriented chronic issue in the human mind could be eliminated or improved.

But consider virtualization in five (5) dimensions, beyond application versus operating system isolation frameworks. Abstract the mind; focus on the sensory aspects of the mind that drive the five senses, touch, taste, smell, sight, sound. With cybernetic enhancement and direct interface improvements, could we overcome the limits of memory recall, or even memory imprinting? What will we be able to do in the future? Will we encounter a limitation of the mind? The old space versus time issue comes back again. One solution would be to abstract human memory into a virtualized space, not expanding consciousness but just the access and retrieval functions for information, which would seem possible, no? Can the human mind learn to access external memory? So back to the indirect versus direct interface design again. Time versus space becomes an issue unless external resources can be leveraged by the mind. Get the feeling that this is circular logic? Imagine a world where dementia and Alzheimer’s, or any of the various recall-impacting diseases or chronic conditions, are addressed not by drugs or other electro-chemical alchemistic methods, but by information technology solutions. Ok, so maybe it is science fiction, but will it always be so? I hope not in 40 or more years, as I approach 85 years of age! VDI may one day stand for Virtual Direct Interface to the human mind?

If only walking through time was as easy as walking around the world. If only walking through time was like flipping through slides taken decades ago. Consider this: in 1966 I am sure some could only dream of what we can do today with computing technology; virtualization would have appeared as a mystical and vague concept. So in the decades to come, what miracles will the future hold that we can only imagine in simplistic terms now?

Add comment December 3rd, 2009 Schorschi

How Do You Say Good Bye?

So The Old Virtualization Platform Is Just Not Doing It for You Still (Part 2 of 2)

I suggest reading the blog entry, When Is It Time to Say Good Bye? before continuing to read this blog entry. It is not required but recommended.

Now that the decision has been made to change virtualization platforms… What is important? What should not have been forgotten or lost? What must be reaffirmed to be successful? There are three (3) tasks key to saying good bye to the declining virtualization platform, and hello to the emerging virtualization platform. All are easy to itemize, two (2) are easy to complete, and one (1) is not painless, but should be a known path, providing the lessons learned in the past are not forgotten, and the skills developed in the past are not abandoned.

  • Tell the original virtualization vendor… Goodbye! It was nice knowing you, you saved us money, thanks, but don’t let the door hit you in the ass on the way out!
  • Tell the new virtualization vendor… You are wonderful, but unless you are cheap, do everything we want immediately, you are gone, and you have been warned. Oh, and we signed a multiyear contract, but don’t expect it to be renewed, dude, unless your platform is better than expected.
  • Find all the documentation and people that implemented the declining virtualization platform, and get the ones worth leveraging working on the emerging virtualization platform sooner rather than later. What? They were laid off? Reassigned? Or otherwise relegated to a dark corner, where they do nothing but drool and speak in a weird dead language no one has heard for 100s if not 1000s of years? Say it is not so!

Now of course, there is considerable humor implied in the above characterizations. Being professional precludes some of the semantic implications, so if anyone thinks said characterizations are realistic, my honest response is… What are you smoking, dude?

What is important? An emerging virtualization platform is a new platform, and although experience is important, don’t let it bite you in the ass because you fail to realize that a new platform is just that… a new platform. All the typical stuff, technical mumbo-jumbo, no need to dive deep into that here, but the new platform should have better, or at least reasonable, performance in context to the declining solution. If not, the scale will change in a significant manner: vertical because better hardware is needed, horizontal if more virtual instances are needed to deliver the same performance to the end-users. This impacts cost of course. The cost avoidance model will change to some degree no matter what is done, so be prepared for that situation. Everything from physical environmental factors to logical processing factors will change; don’t let this surprise anyone, including the customers. Again, this is a new platform, respect it as such. Engineering by Fact is a good concept to keep in mind. So is Management by Fact, so once the emerging solution is designed, tested, and implemented, expect surprises and differences based on actual results and situation. Unloading one hypervisor and loading another, even when stateless makes it simple, is just the tip of the iceberg. Management will want to believe otherwise; don’t let that happen.

What should not have been forgotten or lost? Some will disagree with this point, saying that things are not forgotten; well, yes they are. The skills that were needed to establish a viable and robust virtualization platform are not identical to those that maintain a virtualization platform over time. This is not a classic engineering versus operational scope argument. This is a talent versus skill debate. Virtualization, as many of you may recall, took insight and a bit of guesswork to get off the ground. Yes, it was new and no one really knew what the upper limits were, etc., etc. Well, once again, that mindset is needed: walk before running, and when running, watch out for potholes. Scaled testing in a lab is never all-inclusive, but the temptation will be to rely on the past. That past experience will color the results gained in the lab; that is one of many potholes. Application designers and developers are not going to find the emerging virtualization platform easy to understand or deal with, so build this confusion and frustration into the master plan. Remember how some issues came out of nowhere when the declining virtualization platform was implemented? It surprised more often than not, right?

What must be reaffirmed to be successful? This is the ugly question. Why? Because what is already virtualized is just that, virtualized, and will be migrated in some fashion, be it virtual-to-virtual, or OVF, or whatever; beyond the performance and response differences, and the resulting marginal costing delta, big or small, it is not the low-hanging fruit it once was. Those days are gone. If this migration is the first from the very first virtualization platform, this is going to be a culture shock event. Massive cost avoidance, beyond the savings (hope it is savings) on virtualization vendor licensing fees, is or has been accrued and tabulated in seconds, so that easy gainer is toast. Long-term success now is based on incremental cost savings per host, per virtual instance, per flexibility of the environment as managed, cough, cloud, or such, providing services and solutions on demand, with ease of availability.

How can ease of availability be enhanced, improved? A work-load-management solution, a deployment automation solution, and/or the use of a stateless model may or may not be opportunities for cost savings in the emerging virtualization platform. This is not a simple feat; if such components are or were already core to the declining virtualization solution, additional cost avoidance out of the cloud vapor, as such, may not be possible. In fact, adapting to the emerging virtualization platform might even increase expense sooner rather than later, ouch, taking time to net savings over time, ouch. Again, plan for this and control expectations; the emerging virtualization solution will need to mature and balance out within the organization, just as the declining virtualization solution did or required time to do, changing the culture of the organization, again. As my Mother, who was a manager for more than 30 years in a Fortune 10 firm, was fond of reminding me… The only constant is change. To which my reply always was… Why does management always seem to forget that all change takes time? Do it!

Add comment October 29th, 2009 Schorschi

When Is It Time to Say Good Bye?

So The Old Virtualization Platform Is Just Not Doing It for You Still (Part 1 of 2)

This time of year a lot of hard decisions are made in enterprise firms; I am sure many will identify with this statement. The New Year is approaching, the budget is due, the bonus, if any, is on the line, and now is the time to pull that rabbit out of the hat. There are a number of reasons to keep or leave your past virtualization platform, and this article will explore a few points on why a platform should or should not be continued, with the premise that it is time to say goodbye. The basic logic is evaluative analysis, something any of us in the computing industry know or should know well. The two key questions are:

  • Is the solution effective?
  • Is the solution efficient?

The basic objective is to save expense, total expense; this includes factors beyond the virtualization platform, and is reflective of the above questions of effectiveness and efficiency:

  • Can the platform be revised, improved, or otherwise developed?
  • What does management want, really want?
  • What is the complexity of the effort?
  • What is the time line and scope of the effort?
  • Can the customer base survive the transition?

The answers to these questions will be environment specific, again for the sake of illustrative explanation, each question will be explored in conceptual context only.

Is the solution effective? If the solution has been around for a few years, the expected answer would be yes. However, early in virtualization experimentation by a number of firms, mistakes were made, and true cost avoidance or savings may not have been as expected or hoped. Often virtualization platforms are kept conservative and structured with caution, but over time, expectations of continual economies of scale, demanding more for less from the original design, cloud the results if not the perception of effectiveness.

Is the solution efficient? Just because initial cost avoidance was good, or better than expected, does not mean the virtualization platform was well managed over time. This is an unfortunate but common problem for many firms that have significant virtualization. Staff changes or organizational reorganizations often break efficiencies gained as the virtualization platform matured, and just when the solution should be a smooth operating entity, things go wrong. Both management and engineers just cannot seem to leave something that works alone.

Can the platform be revised, improved, or otherwise developed? This should be an obvious yes. If not, then something is crippling the solution, or someone has made some horrible decisions about the virtualization solution. Management is often impatient for results; a short-sighted mindset toward virtualization is often a key factor in the failure of the solution, or in a lack of rational expectations. It takes years for a virtualization platform to mature, regardless of the vendor. The vendor needs to be stable, consistent, and responsive; if not, walking away from the solution is more than possible. A struggling vendor is a key indicator that years later nothing but pain will result. The solution should work at reasonable scale, and a maturing feature set should never mask foundational or core functional issues. Fixing bugs and improving stability cannot take a back seat to new feature development.

What does management want, really want? Never overlook the fact that management may just not like the solution in place; the reasons for this are endless, but they often come down to two factors, or issues. The first issue is cost: management hates paying for anything twice, so what was acceptable at the initial deployment of virtualization is often a problem a few years later. VMware has this issue today; VMware costs go up while customers are expecting costs to go down. The feature set grows, but most customers do not use all the features, or worse, are forced to purchase many features just to get a few key needs addressed. This scenario just ticks management off, and is often the reason that a given solution is kicked to the curb even when successful. The second issue is competition: the best solution in the world, and I have discussed this issue in the past, often does not gain the greatest market share over time, Sony Betamax versus VHS, Apple Macintosh versus PC clones, etc. True, many will point out the iPod, but in fact the iPod dominates only because it has no real or true competition; once there is something that is better, or even close to the iPod? Guess what! For example, the iPhone is already facing this issue with the latest generation of cell phones from other vendors, which are approaching the iPhone’s class of service and range of features.

What is the complexity of the effort? Or in other words, is it easy or hard to walk away from the existing virtualization platform? Today, moving from VMware VI3, e.g. VirtualCenter and ESX, to say KVM or Xen is not painless, but it is less painful every day. KVM and Virt-Manager make it easier to leave VMware vSphere now that KVM/Virt-Manager support OVF via Virt-Inst, and Open OVF is expanding to support Hyper-V. In fact, once OVF supports references to virtual machine disks and does not embed virtual machine disk data, the transportability of virtual instances will be almost seamless, really closer to a classic cold migration, since the common shared storage is leveraged.
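For the curious, here is a small, strictly illustrative sketch of what "references, not embedded disk data" looks like in practice; it assumes the standard DMTF OVF 1.0 envelope schema and a placeholder descriptor name, and simply lists the external disk files the descriptor points at.

```python
# Illustrative sketch: list the external virtual disk files an OVF 1.0
# descriptor references. The descriptor file name is a placeholder; the
# namespace follows the DMTF OVF 1.0 envelope schema.
import xml.etree.ElementTree as ET

OVF = "{http://schemas.dmtf.org/ovf/envelope/1}"

def referenced_disks(ovf_path):
    root = ET.parse(ovf_path).getroot()        # <Envelope>
    refs = root.find(OVF + "References")       # <References> holds <File> entries
    if refs is None:
        return []
    return [
        {"id": f.get(OVF + "id"), "href": f.get(OVF + "href"), "size": f.get(OVF + "size")}
        for f in refs.findall(OVF + "File")    # each <File href="..."> is an external disk
    ]

if __name__ == "__main__":
    for disk in referenced_disks("appliance.ovf"):
        print(disk)
```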

What is the time line and scope of the effort? Well, this question is not fair; it is a trick question of course. If the complexity of the effort is near painless and/or management has already decided it is time to go, then does the time and scope of the migration to a new solution really matter? Yes and no. Yes, in the sense that a complex migration may be delayed. No, in that once the action plan is defined and executed, the writing is on the wall. It is interesting that Microsoft has not been splashing all over the world how they are stealing customers hand-over-fist from VMware. Why? Well, Hyper-V until 2008 R2 with CSV (Cluster Shared Volumes) was not a true challenge for VMware VI3, never mind vSphere. Of course, KVM is still maturing. Oracle is stuck trying to figure out what to do with SPARC, so Zones never made it to the sandbox. But once the pain of migration is resolved, watch out! Microsoft and others learned a number of things from killing off Novell. Microsoft Windows NT server, even with faults, was a general application server that was easy to manage and use, oh, and it did file serving as well. Microsoft made sure moving off Novell was as painless as possible, with CSNW (Client Services for Netware) and GSNW (Gateway Services for Netware) for example. Novell 4 was a superior solution to Windows NT in serving files and using LDAP, but still it was painful to go from Windows to Novell, compared to Novell to Windows.

Can the customer base survive the transition? Aw, shucks, the end-users, who cares about them anyways? This is the most obvious sleeper issue. What? Wait! What the heck is an obvious sleeper issue? I gave this one away in the answer above… CSNW, for example, let a desktop think it was working with a Novell server when it was really a Microsoft Windows NT server. It was about as transparent and seamless as possible considering the radical differences between Novell and Windows NT. CSNW made the end-user experience, if everything was done right, as painless as possible. Now consider virtual instances, which are several steps away from hardware, and even from platform specifics, combined with V2V (Virtual-to-Virtual) tools and methods. Or even easier, a simple reboot, because KVM can support VMDK files. Sure, KVM VMDK support may not deliver the best performance, but on the standard pain scale it is more like an itch to be scratched than a needle prick. OVF is another option as well. Unless someone is just plain goofy, customer impact should not be a factor.
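As a hedged illustration of that V2V step, and only an illustration with placeholder paths, the sketch below wraps the stock qemu-img convert command to turn a VMDK into qcow2 for KVM; QEMU/KVM can also boot many VMDKs as-is, so conversion mainly sidesteps the performance itch mentioned above.

```python
# Minimal V2V sketch: convert a VMware VMDK into qcow2 for KVM using qemu-img.
# Paths are placeholders; qemu-img must be installed and on the PATH.
import subprocess

def vmdk_to_qcow2(src_vmdk, dst_qcow2):
    # qemu-img convert -f <input format> -O <output format> <source> <destination>
    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2", src_vmdk, dst_qcow2],
        check=True,
    )

if __name__ == "__main__":
    vmdk_to_qcow2("guest.vmdk", "guest.qcow2")
```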

As for the key question of this article… When Is It Time To Say Goodbye? When the answer to all of the questions above is… No big deal. Sure, many will say that the management demand issue trumps most if not all of them? Well, I disagree. Virtualization platforms that are run well, work well, and avoid cost will and do take years to implement, and take years to retire, at an enterprise-scale deployment with 100s if not 1000s of hypervisor hosts, and 10,000s if not 100,000s of virtual instances. Moreover, there is one superordinate concept that will take most if not all of the remaining pain out of an enterprise-scale migration to a different virtualization platform on demand, wait for it, wait for it… Stateless! The emerging acceptance and implementation of stateless computing concepts, at the hypervisor level as well as at the guest OS level in virtual instances, is or was the last technical foundation stone holding back virtualization platform mass migrations from taking place with little or no pain. Is 2010 going to be insane or what? Will it be the year of mass virtual migrations? I think so? Do you?

2 comments October 8th, 2009 Schorschi

Clouds, Vapor and Other Things That Don’t Quite Exist Yet?

Virtualization Critical Evaluation, Chapter 16

VMworld 2009 left me with doubts and concerns. No, not because there is something wrong with vSphere, or such, but because for the first time I got the impression, the perception, that the core of the latest VMworld event was more flash than substance. I have never seen vendors so aggressive in trying to impress the hell out of the big whales, the true enterprise customers. For example, given three (3) evenings, well, four (4) evenings if you attend the TAM day events, I know of some Fortune 10 representatives that had 10 or 15 invitations to dinner, or even just casual discussions over an open bar, etc., stacked upon each other. VMware was of course mixing as well, with the best of them, doing its best to affirm customer loyalty and commitment.

EMC seemed subdued; NetApp seemed dominant, more tuned in to the customers? IBM and HP were doing a lot of Look at Me, Look at What We Have Now broadcasting to anyone that would listen. Are my perceptions wrong? Many of the smaller vendors were pulling all the tricks possible to get exposure: girls in nurse uniforms, girls in skin-tight silver body suits, the art of the hook, or eye-candy methods to get geeks to stop at booths was in high form this year. I for one should not be complaining, it made for an interesting exhibit experience! Of course every VMworld has had something unique to look at, for those of us that are white males anyways! Please, a drawing for a Wii? How about a PS3 and an Xbox 360 together, delivered by the girls in the nurse uniforms? Now that would be a crowd maker! What a raffle that would be!

But this year just seemed to push the edge beyond a reasonable point for flash with no substance. There was a lack of style and taste more obvious than in years past. Maybe it was the lack of vendors that buck VMware to a degree that really got my attention? The few counter-culture souls that had Got Xen? shirts put a smile on my face. Now, many of you will say I hate Xen; not so, I just think, and have said before, that Xen has no significant path for growth potential now that Hyper-V R2 is a true threat to VMware vSphere, and KVM is growing to reasonable maturity because a few Work-Load-Management vendors officially announced support for KVM, even if their actual product support is vapor or limited. No one that attended VMworld 2009 was taking on VMware straight on; everyone was in step with VMware, and this is something I did not want to see. Per my perception, everyone was VMware friendly to the point that they dripped VMware spiel from official VMware press releases, thus making for a rather dull conference. VMware should accept all challenges, come one come all, even at VMworld, letting the VMware product suites defend their superior position, as such, no? Where was the Citrix booth to challenge VMware? Where was Microsoft? Heck, where was Xen with a huge tower of a booth saying, Xen is better than VMware, Dang it! It is the VMworld conference, not the VMware World conference, right?

Maybe this is what I missed, the great impact concept from anyone? The great Oh My God idea from VMware, as at VMworld events of the past? I am sorry, but beyond the cloud concept, which is still weak and quasi-consistent in definition, there is nothing but a desert of vapor or near-vapor solutions. For example, provisioning does not integrate painlessly with work-load-management, and work-load-management does not drive different hypervisors with HA & DRS or even VMotion-like functions in a vendor-agnostic fashion. Where is the painless transport between Hyper-V, vSphere and KVM? True, development is being done to establish work-load-management solutions to drive cloud use and automation, but the options are limited and incomplete so far, lacking robust maturity.

I am not being critical without cause; vendors have had 18 months, per my calculations, to establish cloud strategies and solutions, at least across Xen, VMware and Hyper-V, and the claim that everyone was waiting for the vSphere API just does not gain traction with those I have discussed this topic with so far. It seems everyone is learning all the bad traits from Microsoft: not doing original work, but purchasing solutions that need to be worked over with considerable care to be integrated, and then taking a long time, through versions 2.0, 3.0, etc., to achieve any stable, scaled result. The last thing we all need is a group of cloud management solutions that are all virtually (no pun intended) identical, silos among themselves, and all carrying the same gaps.

Everything at VMworld 2009 that could be considered 3rd party was little more than an attempt to improve on something VMware has already done. And yet, VMware is showing its own cracks in its ivory tower of virtualization, with some ugly bugs and/or design constraints around scaling and scope, cough, did someone say HA? The big comment I heard several times was that things should work as advertised now; that got a lot of attention from customers and those walking the halls. Original ideas were few and far between among all the add-ons and tweaks. However, it would serve VMware better to improve what exists rather than just add new features that VMware marketing thinks will sell to smaller and smaller niches; is chasing market share profitable over the long term?

But a key question still exists: after we all have a cloud work-load-management solution to drive our own variant of ESXi stateless nodes, driving vCenter (and related toys) as an automation hub, not as a management console, what is next? Is it time to replace the batteries in the old crystal ball? Maybe, because the dream of a floating, global datacenter that follows time zone changes, with dynamic application-on-demand scaling and loading, remains a dream for most of us that don’t have extensive engineering teams or deep information technology resources. Oh, a virtualization cloud, right. Is it just that, a vaporous dream? Clouds are mostly vapor, so why should I be surprised?

2 comments September 15th, 2009 Schorschi

RHEV: Dark Horse or New Dynasty?

Questions about the impact of RHEV

Well, I am off to the far side of the planet in a few days, and will not have access to electronic communication; this is by design, not by circumstance. Moreover, given the above, this entry will be the last before VMworld 2009. With those items addressed, on to the topic at hand. RHEV, Red Hat Enterprise Virtualization, is an interesting prospect, from one perspective 6 or more years behind VMware, 2 years, cough, behind Microsoft, and everyone else in between. Is RHEV going to be a Linux-only hosting hypervisor? Well, by design no, but in reality, maybe? What of application virtualization, which is growing, even before massive corporate clouds are commonplace?

Who is going to jump to RHEV to run Windows? It is clear that RedHat wants to make RHEV as inviting to the Windows community as possible, with RHEV Manager layered on IIS and .Net based. Of course RedHat will release a Linux variant of RHEV Manager that will run on, say, Apache. But with the beta of RHEV out to key entities, RHEV Manager is .Net based. Interesting, madness or genius? Not sure I could qualify or even quantify the answer!

My initial experience with the RHEV beta has been a mixed bag of minor successes and failures; as with any new platform, some things just do not work or react as expected. The interface is interesting, and anyone that sees it will, I believe, be reminded a bit of the VMware MUI of the past; it did that for me when I first saw RHEV Manager. At a minimum I expect the interface will change over time, and with many enterprise clients using their own favorite life-cycle and/or work-load manager solutions, direct use of RHEV Manager may matter less than such consoles did in the past.

The question that keeps bubbling up to the surface of my perceptions is the same as the title above. RHEV, Dark Horse or New Dynasty? With RHEV now a reality, can KVM survive better as an open source platform than Xen has in the shadow of Citrix ownership? As I have said before, I think so, and now that I have spent some time with RHEV, I continue to believe that Xen is under threat; RHEV has just clouded up the water around Hyper-V and vSphere, bad pun intended. Thus, here are my questions about RHEV that I will be interested in, and that I believe others will be considering as well, between now and when RHEV is released…

  • How many Linux-based environments are only using Xen, KVM, or VMware because RedHat had not yet released a solution of its own?
  • How many environments will use Hyper-V versus VMware regardless of RHEV?
  • Does RHEV do Windows better than VMware and even Microsoft? Now that will be a very interesting question! RHEV and Core seem similar, but are they?
  • How well will NetApp, EMC, etc. support RHEV?
  • Will corporate security teams accept RHEV as a locked down, or even stateless appliance, in the same fashion as ESXi?
  • I am not a fan of oVirt, at least not yet; will RedHat walk away from libvirt or oVirt over time?
  • KVM on traditional Linux may be a real competitor to RHEV, given that the parent Linux partition may be the key platform, with KVM virtual instances just used to leverage that last 20% of a server that management thinks is going to waste (a quick way to check this is sketched just after this list).
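
On that last point, a quick sanity check requires nothing more than the libvirt bindings already present on a stock RedHat box. The sketch below is mine, hypothetical and read-only, and the 20% figure above is speculation rather than measurement; it simply compares what the host has against what the running KVM guests have been allocated, which is the kind of number that decides whether KVM stays a sidecar on a traditional Linux server or becomes the platform.

    import libvirt  # assumes the libvirt Python bindings on a plain Linux KVM host

    conn = libvirt.openReadOnly("qemu:///system")
    # getInfo() returns host model, memory (MB), CPUs, MHz, NUMA nodes, sockets, cores, threads
    model, host_mem_mb, host_cpus, mhz, nodes, sockets, cores, threads = conn.getInfo()

    guest_vcpus = 0
    guest_mem_mb = 0
    for dom_id in conn.listDomainsID():
        state, max_mem_kb, mem_kb, vcpus, cpu_time = conn.lookupByID(dom_id).info()
        guest_vcpus += vcpus
        guest_mem_mb += max_mem_kb // 1024

    print("Host    : %d CPUs, %d MB RAM" % (host_cpus, host_mem_mb))
    print("Guests  : %d vCPUs, %d MB RAM allocated" % (guest_vcpus, guest_mem_mb))
    print("Headroom: %.0f%% of RAM left to the parent Linux workload"
          % (100.0 * (host_mem_mb - guest_mem_mb) / host_mem_mb))
    conn.close()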

Well, questions, and more questions? Believe it or not, I like this! I think all of us were getting way too comfortable with the world of virtualization as we know it, or knew it? Well, at least I was. Maybe the best aspect of RHEV is not that it exists, but that between KVM and RHEV, both Microsoft and VMware should be motivated to get off their collective back ends and innovate the next generation of virtualization, not purchase it. Oh, and did I mention Microsoft and VMware will have to reduce their cost models as well? I am sure RedHat has not missed the fact that the days of expensive virtualization platforms, no matter how much is saved by server consolidation or higher adoption rates for virtualization versus traditional hardware, are gone forever.

Add comment August 25th, 2009 Schorschi

VMworld 2009: Beyond Romancing the Core

Expectations and Ideas about VMworld 2009 based on VMworld Events of the Past

VMworld 2009? What will it be like? Will it be exciting or boring? Before I tackle that question, I believe a brief review of VMworld over the last 5 years is in order; this is not exhaustive, but more impressionistic… Oh, and brush up on your Shakespeare… The history of virtualization, viewed through VMworld events, is of course rather dramatic.

I remember VMworld 2004 and 2005, when the virtualization space was still a bit outlaw, a bit radical, and something you did that was socially unacceptable. Almost something you hid under the bed so your parents would not find it, right? The greed of virtualization, if that is an accurate term, had yet to strangle the wow factor down to something realistic. If you knew virtualization, you walked on water; if you did not know virtualization, you were just normal, if normal is a word that can be used for people that work in the information technology subculture. What was the big deal, beyond VMotion and VirtualCenter? The future promise of more shared storage options I remember being a key item of interest, whispers of iSCSI, and scale and scope improvements. How would you classify this era, Henry V, or Romeo and Juliet? True, VMworld 2004 was bold, and 2005 was strong. A Midsummer Night's Dream might be applicable, given the unreal feeling VMworld had in the beginning.

VMworld 2006 and 2007 were the dawn of feature set expansion and the growth of new elements of virtualization; AMD and Intel were getting into the swing of things, with the groundwork by Intel for VT-x, VT-c, and VT-d, and similar work along the same lines by AMD. In some respects this was the golden era of the hypervisor as the central focus of virtualization. Sure, management tools and such came along in step, but the hypervisor was the key to all the plans and efforts. Behind the closed doors of the engineering research departments of various firms, both sane and insane ideas were being evaluated, and out sprang lab management, life-cycle management, and tens if not hundreds of in-between solution concepts, but was everyone missing something? Or was it just that cost-savings-focused customers were not as accepting of anything labeled virtualization as they had been just a year or two before? I remember walking around the exhibits in 2007 thinking, well, this does not seem very exciting. HA and DRS were fine, but incremental steps, not crazy radical. This is the era of virtualization that I would describe as The Taming of the Shrew.

VMworld 2008 had a few high points, but I was focused on the specific ideas that were important to me and those I support and work with, for example, better utilization of storage, faster and better virtual instance backups, and better management of the environment at enterprise scale. VMworld was full of things that were ideas yet to be realized. A notable exception was PowerCLI, or what would become PowerCLI; that specific breakout session had some of the honest, old-fashioned energy that VMworld 2004 and 2005 seemed to be buzzing with. I remember also that there seemed to be a very large number of people at VMworld 2008 that were doing virtualization for the first time. I am sure this is the case at all VMworld events, but for some reason in 2008 it seemed more obvious to me. Maybe it was because I now saw VMworld as an experienced alumnus? I was even part of a presentation at VMworld 2008, so that gave me a different perspective, as a limited speaker. Moreover, I was underwhelmed by the VMware VM FT feature. As nice as VM FT is, I just was not that impressed, and to be fair I am not sure why; am I now jaded by virtualization? VMware SRM, now a reality, seemed late to market. Some would argue this characterization, but for me, this is the era I would classify as Much Ado About Nothing.

What do I expect from 2009? Right now I am not sure. Since 2007 I have been waiting for VMware to grab the concept of virtual containers and make it their own. You can only get so much done with master images, gold images, etc. You can only do so much tweaking of ESXi, which needed some tweaking. VMware has instead gone to the clouds, literally in the virtualization sense. Yes, everyone is in love with clouds these days; I am not saying this is right or wrong, but I am saying it is not romancing the core, as the focus of virtualization once was. I believe VMware has missed a significant opportunity with virtualization containers, and the opportunity is long gone. True, VMware does not, and did not, own an operating system in the classic sense; thus for VMware to get into the container scope, they would have had to, and will have to, deal with the operating system devils. Even the once-dominant Oracle had to purchase an operating system to get a container model. I will always remember with a strong sense of nostalgia my early career experience with SPARC hardware. Alas, poor Solaris, I knew thee well. Sun Microsystems, may flights of computing angels take thee to thy rest. …Of course, my heartfelt apologies to the great Bard. VMworld 2009 may be closer to Hamlet than I would want. For now, I would guess The Merchant of Venice is applicable. After all, the director and actors are responsible for presenting Shylock such that the character is viewed with disdain or sympathy by the audience. If you don't get the inferences here, well… read more Shakespeare.

Maybe the guys at Gartner, who have been notable over the last year or so for their lack of pontificating about virtualization, have finally gotten the batteries in the old crystal ball replaced? RedHat, if not IBM as well, is focused on guiding KVM, and with the RHEV beta a reality, the next logical step for RedHat is not operating system isolation but containerization. Hyper-V, as it matures, will become a containerization model as well, I believe; where does that leave VMware? VMware… Look like the innocent flower, but be the serpent under it. Five years ago it was done; now is the time again to take on RedHat and Microsoft straight on, head on… Double, double toil and trouble; Fire burn and cauldron bubble. Now, where did I leave that copy of Macbeth?

Add comment July 29th, 2009 Schorschi
