Posts filed under 'A Proper Virtual World'

Death of a Datacenter, Well Sort Of

What kind of year was 2009, anyway?

Death of a datacenter, well, sort of; in a realistic assessment the space is more of an oversized server room than a datacenter in the proper sense. I realize I need to explain this. Picture this… an office in the darkest corner of a large building, where one entire wall of the office is glass, looking out into a large file server area. A server site that supported thousands of people for almost 15 years, but now this server room is empty, just a bunch of holes in the raised floor that stretches before you. You remember laying down many of those raised floor tiles, and the server racks over them. The growth of the site, as rows of racks grew over time. You remember the technology growing more powerful, the scale of capacity growing. But about 6 years ago, just as this file server room had been online for 8 years, 8 of the 18 years of a career in IT, virtualization appeared.

Little by little, the number of physical servers does decline here and there, but the total volume of computational capacity increases, so the few empty racks, once full, don’t stand out. After all, there is work to be done. Warp speed. The minor changes in the number of staff, the changes in how the environment is managed, the staff reductions in the on-site team, are not that obvious. The newest technology, even with virtualization, has not changed the support model in an extreme or overt manner. Like a ship in dry dock, refits and upgrades of all types and sizes seem endless and ever more complex.

Computing is changing, and the computing sites are changing with it. Remote device control (HP iLO, Dell DRAC, IBM RSA, etc.), blades, and other forms of topless or headless systems are growing alongside virtualization, so now the number of empty cabinets is obvious and maybe just a bit unnerving. The staffing level has declined, the roles have changed, and the command-center thousands of miles away, or the command-center in our building, is controlling more sites and locations than ever before. Doing more with less is not just a concept or slogan any more.

The years have passed, and all of the above has changed the concept of dedicated file server sites, of datacenters, of the IT industry. You realize that the number of days you transport into the office each week is now fewer than the number of days you work from a remote site or location. Moreover, management, the command at the top, is encouraging this behavior, rather than blocking or debating its value. But the file server room is still there, darker, quieter than ever, but still doing its computational job. The systems are fewer of course, but they are so powerful, so significant in impact, and in such a small form factor, it is almost science fiction. The upgrades and changes have not touched the heart and soul of the site; the form and function change, but not the purpose.

Now you stand in that same office again, facing the wall that is nothing but glass, and you see a large empty room before you. Has it been that long? How many worlds, virtual worlds, visited? You turn to look at the office and it is empty as well. Sure, there is still a desk, a chair, maybe a bookshelf unit. But the soul of the office is gone; the energy of the late nights and early mornings, the echoes of laughter where practical jokes abounded, are just ghosts of times past. None of the original team remains but you; the crew that saved your ass countless times is long gone. You remember how the kids, on bring-your-children-to-work day each year, would walk through the massive columns of computing power, listening to the hum of the fans, the vibration of the equipment, the click of the disks in the arrays, the soft beeps or the rare click of keys in the distance. If you close your eyes, and let your imagination expand just a bit, you could see yourself in the engine room of a starship, and the office was the engineering operational center of the ship. After all, the bridge was the command-center many floors above, right?

But the reality is the engine room of this ship is empty now; warp coils and power relays gone, the heating/cooling system, the magnetic containment system, the atmosphere controls and fire suppression system gone. The core is gone. The ship is decommissioned, silent, just like the bridge, ah, I mean the command-center, that was taken offline some time ago. You take off your badge, cough, your communication link, and place it on the control console, which has nothing on it but a layer of dust, dead, lifeless. No one said this would last forever, no one expected it would, but somehow it is sad now that it has come to the end. The perception is that the end of an era is upon you. Without thinking about it, you snap your heels together, straighten your back, and your arm seems to move on its own; before you can act to do otherwise, you salute the glass wall facing where the core once existed. You pause for a moment longer, remembering friends lost and long gone. You think the words… Warp Speed.

Reality comes back, so you straighten your uniform and pick up your travel pack. Time has passed faster than you realized, and you notice you are running late. The sadness that dominated your thoughts a few moments before is gone as you rush out of the room; a new, unique aspect of your career is beginning. You hear the hiss of the pressure doors closing for the last time as you leave. You know you will never return to this ship again. The soul of the ship is gone; she is now a cold dead hull, nothing but structural elements, components, and resources to be recycled soon. You think to yourself that you need to hurry, that the last working transporter is on deck 12, and if Turbolift 4 is already offline, it is going to be a long climb from the engineering deck to deck 12.

As you run down the passageway on deck 12, yelling at the decommissioning crew in the way to make a hole, approaching the lift, you notice the designation on the wall next to the lift says… Elevator 4. You blink twice, shaking your head, but it still says… Elevator 4. Just a building and a file server room after all? Not a ship? What was I thinking!

Add comment December 29th, 2009

Is virtualization entering a Dark Age?

Cloud Computing is Pushing Virtualization into the Shadows

Clouding is the rage; everyone that is anyone is riding, or would it be better to say floating in, the clouds? I prefer to call it cloud bursting. Clouding is fine, but if it is a solution looking for a problem, then it is a horrible time and resource pit to fall into. No, this article is not about the good or bad of clouding, but about the impact of clouding on virtualization and the resources that design and support virtualization, and what is missing in virtualization to support clouding to a degree. Moreover, of all the commercials about clouding that flood the airwaves now… IBM has one that just makes my teeth grind… the IBM commercial says, if I heard it right… “…Cloud is Simple…” Well, nothing about clouding is simple. Pods are simple, from a hardware perspective; but the clouding software is not. Google has had what? Ten (10) or more years developing cloud-oriented logic from a software perspective!

Virtualization is now a commodity, just one more tool or component in a greater synergistic effort. But is it really? None of the complexity of the computing environment is reduced; it is increased. Just as virtualization made computing more complex, so does clouding, adding layers of complexity, dragging together virtualization, provisioning, automation, reporting, and decision making, of course decision making. Thus effective and efficient management and control are more important than ever. This is where things get dark, and everything fades to long grey shadows, and environment or datacenter architects run for the shadows, like roaches for cracks in the floor molding. Why? Because clouding is not easy, it is not uniform, it is not consistent. Cloud decisions are career changers… people resign, give up, or get nailed, by cloud solutions that do not live up to the hype, and no cloud solution, no matter how valued or vaunted, lives up to all the hype. Think I am wrong? Dig a bit deeper; the evidence is there that cloud design and implementation is hard work that some just cannot handle, or that others took many years to get right at some effective level. Years? Oh man, that is a word that management hates with a passion, up there with corporate taxes.

Worse, the cloud terminology is horrible. Who came up with the term Service? As in, an application in a cloud is a Service? Talk about selecting the worst way to communicate a concept to the end-user population. Could it have been made any less informative? An application in a cloud is just that, an application! End-users understand applications, not generic terms like Service. Come on, call a stone a stone, call an application an application, for crying out loud. Better yet, call applications in a cloud… Solutions! End-users think in terms of problems and solutions.

The ugly aspect of clouding, beyond the human impact, is that the complexity of virtualization is not being acknowledged, and so talent and resources once dedicated to good virtualization solutions are bled down to the minimum, either let go or fed to the cloud chewing machine. This is fact, not fiction. The push to have automation and autonomous systems manage a virtualization environment is a great goal, but is it a technical reality? I have reviewed more than 8 significant management, control, and reporting applications this year, including the Surgient, Hyper9, ManageIQ, CapacityIQ, and Liquidware Labs offers, among others, as well as MOAB and Platform ISF. The word that comes to mind is… disappointment. Not because most of these solutions have no merit in their own right, they do, but because all of them lack something that is critical to cloud bursting, something some of us have been asking for as customers of virtualization for more than four (4) years, in my case closer to six (6) years: Predictive Analysis.

Predictive Analysis, in reference to virtualization, is the ability to run What If analysis scenarios against an existing environment. Look before leaping, in the context of virtualization models, which clouding must also address or deal with. For example, given ten (10) virtual machines, what happens when one (1) more is added to this situation? Telling you, in simple terms, whether the result of such a proposed move is good, bad, or ugly. Now what is the impact of adding five (5) new virtual instances to an existing pool of 1,000? Predictive Analysis is applicable to every single customer of cloud computing, or bursting, large and small.
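To make the idea concrete, here is a minimal sketch of the kind of what-if check I mean, assuming nothing more than averaged CPU and memory figures pulled from whatever reporting tool is already in place. The function name, figures, and thresholds are all hypothetical; a real predictive engine would work from measured distributions and failover policies, not flat averages.

```python
# Minimal what-if sketch: given a pool's capacity and current usage, estimate
# whether adding N more "average" virtual instances is good, bad, or ugly.
# All names, figures, and thresholds are hypothetical.

def what_if(pool_cpu_ghz, pool_ram_gb, used_cpu_ghz, used_ram_gb,
            avg_vm_cpu_ghz, avg_vm_ram_gb, new_vms):
    """Return a rough verdict for adding new_vms average-sized instances."""
    cpu_pct = (used_cpu_ghz + new_vms * avg_vm_cpu_ghz) / pool_cpu_ghz
    ram_pct = (used_ram_gb + new_vms * avg_vm_ram_gb) / pool_ram_gb
    worst = max(cpu_pct, ram_pct)
    if worst < 0.70:
        verdict = "good"   # plenty of headroom left
    elif worst < 0.90:
        verdict = "bad"    # it fits, but little room for spikes or failover
    else:
        verdict = "ugly"   # overcommitted; expect contention
    return verdict, round(cpu_pct, 2), round(ram_pct, 2)

# Ten virtual machines today; what happens when one more is added?
print(what_if(pool_cpu_ghz=64, pool_ram_gb=256,
              used_cpu_ghz=40, used_ram_gb=180,
              avg_vm_cpu_ghz=4, avg_vm_ram_gb=18, new_vms=1))
```

Nothing in that sketch is hard; the hard part, and the part still missing from the tools I reviewed, is doing it against live, measured data at cloud scale.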

Never before has predictive modeling been so needed, especially in clouding. So where are the 100s of products that do predictive analysis? I would have thought that P2V would have pushed this gap into the light, out of the dark shadows, but that did not happen, in part because P2V is not popular in some environments. So now cloud computing should do it, forcing predictive engines into factual reality. Talk about missing an obvious opportunity to establish competitive advantage! So, as virtualization fades into the background, into a dark age, becoming more black-box than ever, do predictive analytical tools supporting clouding own the light? Not yet, not yet.

Add comment December 16th, 2009

Traveled Around the World, Traveled Through Time

Know What Virtualization Is, But What Is Next? – Chapter 17

A couple of days ago, I was in Cairo and the Valley of the Kings, in Egypt. I moved from Taxco, Mexico on to Hong Kong hours later, and last night as well as early this morning I was in Beirut, Lebanon. Last week I was in Bangkok, Thailand, and Penang, Malaysia. Never mind various locations in Canada and the United States. As I write this blog entry I have just left Tokyo, Japan. How is this possible? Well, I was going through several thousand 35mm slides my maternal grandparents took as they traveled around the world in 1966, and moving forward in time some 43 or so years until now, 2009. In 1966, which as it happens was just after my 1st birthday, the world was a different place; the pictures I have seen make this clear. This walk through time, around the world, is more than interesting, it is quite personal, because my maternal grandmother, now almost 92, is losing her short-term memory, and starting to lose some of her long-term memory. Extreme age is a curse at times, and not always a benefit. So now is the time to review the slides and make sure what my Grandmother remembers is not lost to the continuum of space and time.

At this point, I am sure someone is asking… and this has something to do with virtualization? It does. What will we remember of virtualization in 40 or more years? What will the future of computing look like to each generation beyond us? In just the last 6 years, virtualization has become the driving force in computing. Hands down, cloud computing would be near impossible without virtualization. But, unlike Gartner, I do not have a crystal ball, and I don’t have extensive resources to research the trends, the patterns, or the unique indicators that twist and turn computing destiny, like stellar matter on the event horizon of a black hole. What I do have is a bit of common sense, and some basic knowledge of the information technology industry. There are a few key concepts that computing has continued to establish, and re-establish, over the last 40 years or so. Will these concepts hold true 40 years from now?

Time versus Space. This is a fact: computing still struggles with a way to resolve this conflict. Technology hides the issue, but the principle never changes. In computing this is memory versus disk, and even as disk as a metaphor changes, from mechanical to solid state, the issue does not. Well, operating system architecture may change and may eliminate the conflict. How? Imagine an operating system, complete and robust, that lives only in memory. ESXi stateless, with its direct memory load, is a step in this direction. No, solid-state disks are disqualified; the use of disk IO will not survive. Cell phone operating system designs using SIM cards are an incomplete but parallel concept, in that the SIM card is seen as memory to an extent, an extension of memory space. If the entire operating system lives only in memory, disk is a drag that can be eliminated, not just the device; the API that drives disks can be abandoned as well. Will the idea of a giant, huge, ever-present data-core exist in the future? I believe so.

Direct versus Indirect Interfaces. Keyboards and mice were, and are, indirect cybernetic interfaces. Which will apply to virtualization in 40 years? Ignoring the soul and the metaphysical aspects of human existence, what is the mind? A parallel computing system, sure; a database engine, short-term and long-term memory, caching, etc., yes to all. So improving the connections to these is a cornerstone now that will build future methods. Consider cybernetic interfaces now in development for war veterans; there is no realistic wet interfacing to the mind, only variants of electro-chemically driven apparatus, all indirect methods. So cybernetic systems, at least today, are driven by proxy only, such as toes standing in for a hand, arm, or even fingers that can no longer be controlled by the mind via the direct interface to the nervous system that existed before injury or loss of limb. But in 40 years, direct interfaces will eclipse this limitation. One possible result of wet interfaces, or direct integration of virtual space with the human nervous system, may be overcoming cerebral palsy. What? Did I lose some of you? Imagine the potential of such direct interfacing. Case in point: cerebral palsy is brain damage, where the voluntary motor system is unable to function as designed. In computing terms, cerebral palsy is corrupted firmware, because the memory location where it resides has failed; the mind cannot drive the peripherals, meaning the arms and legs, in a correct manner, and the result is all the side effects the condition creates, misaligned muscle strength, tendons too weak or too strong to work as designed, etc. However, a direct interface technology could create a virtual mirror of the damaged part of the brain, interface the rest of the functional brain to the new mirror, then flash the mirror; thus the brain, via external reference, can bypass the fault, cerebral palsy never establishes its impact, and normal muscle and tendon development takes place. Don’t stop with just a cure, no, a fix for cerebral palsy? What would 40 or more years allow us to achieve? Almost any memory-oriented chronic issue in the human mind could be eliminated or improved.

But consider virtualization in five (5) dimensions, beyond application versus operating system isolation frameworks. Abstract the mind; focus on the sensory aspects of the mind that drive the five senses, touch, taste, smell, sight, sound. With cybernetic enhancement and direct interface improvements, could we overcome the limits of memory recall, or even memory imprinting? What will we be able to do in the future? Will we encounter a limitation of the mind? The old space versus time issue comes back again. One solution would be to abstract human memory into a virtualized space, not expanding consciousness but just the access and retrieval functions for information, which would seem possible, no? Can the human mind learn to access external memory? So we are back to the indirect versus direct interface design again. Time versus space becomes an issue unless external resources can be leveraged by the mind. Get the feeling that this is circular logic? Imagine a world where dementia and Alzheimer’s, or any of the various recall-impacting diseases or chronic conditions, are addressed not by drugs or other electro-chemical alchemistic methods, but by information technology solutions. Ok, so maybe it is science fiction, but will it always be so? I hope not, in 40 or more years, as I approach 85 years of age! VDI may one day stand for Virtual Direct Interface to the human mind?

If only walking through time was as easy as walking around the world. If only walking through time was like flipping through slides taken decades ago. Consider this: in 1966 I am sure some could only dream of what we can do today with computing technology; virtualization would have appeared as a mystical and vague concept. So in the decades to come, what miracles will the future hold that we can only imagine in simplistic terms now?

Add comment December 3rd, 2009

How Do You Say Good Bye?

So The Old Virtualization Platform Is Just Not Doing It for You Still (Part 2 of 2)

I suggest reading the blog entry When Is It Time to Say Good Bye? before continuing with this one. It is not required, but it is recommended.

Now that the decision has been made to change virtualization platforms… What is important? What should not have been forgotten or lost? What must be reaffirmed to be successful? There are three (3) tasks key to saying goodbye to the declining virtualization platform, and hello to the emerging virtualization platform. All are easy to itemize, two (2) are easy to complete, and one (1) is not painless, but should be a known path, provided the lessons learned in the past are not forgotten and the skills developed in the past are not abandoned.

  • Tell the original virtualization vendor… Goodbye! It was nice knowing you, you saved us money, thanks, but don’t let the door hit you in the ass on the way out!
  • Tell the new virtualization vendor… You are wonderful, but unless you are cheap and do everything we want immediately, you are gone, and you have been warned. Oh, and we signed a multiyear contract, but don’t expect it to be renewed, dude, unless your platform is better than expected.
  • Find all the documentation and people that implemented the declining virtualization platform, and get the ones worth leveraging working on the emerging virtualization platform sooner rather than later. What? They were laid off? Reassigned? Or otherwise relegated to a dark corner, where they do nothing but drool and speak in a weird dead language no one has heard for 100s if not 1000s of years? Say it is not so!

Now of course, there is considerable humor implied in the above characterizations. Being professional precludes some of the semantic implications, so if anyone thinks said characterizations are realistic, my honest response is… What are you smoking, dude?

What is important? The emerging virtualization platform is a new platform, and although experience is important, don’t let it bite you in the ass because you don’t realize that a new platform is just that… a new platform. All the typical stuff, the technical mumbo-jumbo, needs no deep dive here: the new platform should have better, or at least reasonable, performance in context of the declining solution. If not, the scale will change in a significant manner, vertical because better hardware is needed, horizontal if more virtual instances are needed to deliver the same performance to the end-users. This impacts cost of course. The cost avoidance model will change to some degree no matter what is done, so be prepared for that situation. Everything from physical environmental factors to logical processing factors will change; don’t let this surprise anyone, including the customers. Again, this is a new platform, respect it as such. Engineering by Fact is a good concept to keep in mind. So is Management by Fact, so once the emerging solution is designed, tested, and implemented, expect surprises and differences based on actual results and the actual situation. Unloading one hypervisor and loading another, even when a stateless model makes it look simple, is just the tip of the iceberg. Management will want to believe otherwise; don’t let that happen.

What should not have been forgotten or lost? Some will disagree with this point, saying that things are not forgotten. Well, yes they are. The skills that were needed to establish a viable and robust virtualization platform are not identical to those that maintain a virtualization platform over time. This is not a classic engineering versus operational scope argument. This is a talent versus skill debate. Virtualization, as many of you may recall, took insight and a bit of guesswork to get off the ground. Yes, it was new and no one really knew what the upper limits were, etc., etc. Well, once again, that mindset is needed: walk before running, and when running watch out for potholes. Scaled testing in a lab is never all-inclusive, but the temptation will be to rely on the past. That past experience will color the results that are gained in the lab; that is one of many potholes. Application designers and developers are not going to find the emerging virtualization platform easy to understand or deal with, so build this confusion and frustration into the master plan. Remember how some issues came out of nowhere when the declining virtualization platform was implemented? It surprised more often than not, right?

What must be reaffirmed to be successful? This is the ugly question. Why? Because what is already virtualized is just that, virtualized, and will be migrated in some fashion, be it virtual-to-virtual, or OVF, or whatever. Beyond the performance and response differences, and the resulting marginal costing delta big or small, it is not the low-hanging fruit it once was. Those days are gone. If this migration is the first away from the very first virtualization platform, this is going to be a culture shock event. Massive cost avoidance, beyond the savings (hope it is savings) in virtualization vendor licensing fees, has already been accrued and tabulated, so that easy gainer is toast. Long-term success now is based on incremental cost savings per host, per virtual instance, and per unit of flexibility in the environment as managed, cough, cloud, or such, providing services and solutions on demand, with ease of availability. A rough per-instance comparison, sketched after this paragraph, shows how thin that incremental margin can be.
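To put a number on "incremental," here is a trivial back-of-the-envelope sketch of per-instance cost over a planning window. Every figure in it is invented purely for illustration; the only point is that once the consolidation windfall is gone, the difference between platforms is a modest per-instance margin, not a windfall.

```python
# Back-of-the-envelope sketch of incremental cost per virtual instance.
# Every figure here is invented purely for illustration.

def cost_per_vm_year(host_capex, host_opex_yr, license_yr, vms_per_host, years=3):
    """Rough fully loaded cost per virtual instance per year over a window."""
    total = host_capex + years * (host_opex_yr + license_yr)
    return total / (vms_per_host * years)

declining = cost_per_vm_year(host_capex=20000, host_opex_yr=4000,
                             license_yr=6000, vms_per_host=25)
emerging = cost_per_vm_year(host_capex=20000, host_opex_yr=4000,
                            license_yr=2000, vms_per_host=22)
print(f"declining: {declining:,.0f}/VM/yr  emerging: {emerging:,.0f}/VM/yr  "
      f"delta: {declining - emerging:,.0f}")
```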

How can ease of availability be enhanced and improved? A work-load-management solution, a deployment automation solution, and/or the use of a stateless model solution may or may not be opportunities for cost savings in the emerging virtualization platform. This is not a simple feat; if such components are or were already core to the declining virtualization solution, additional cost avoidance out of the cloud vapor, as such, may not be possible. In fact, adapting to the emerging virtualization platform might even increase expense sooner rather than later, ouch, taking time to net savings over time, ouch. Again, plan for this and control expectations: the emerging virtualization solution will need to mature and balance out within the organization, just as the declining virtualization solution did or required time to do, changing the culture of the organization, again. As my Mother, who was a manager for more than 30 years in a Fortune 10 firm, was fond of reminding me… The only constant is change. To which my reply always was… Why does management always seem to forget that all change takes time? Do it!

Add comment October 29th, 2009

When Is It Time to Say Good Bye?

So The Old Virtualization Platform Is Just Not Doing It for You Still (Part 1 of 2)

This time of year a lot of hard decisions are made in enterprise firms; I am sure many will identify with this statement. The New Year is approaching, the budget is due, the bonus, if any, is on the line, and now is the time to pull that rabbit out of the hat. There are a number of reasons to keep or leave your current virtualization platform, and this article will explore a few points on why a platform should or should not be continued, with the premise that it is time to say goodbye. The basic logic is evaluative analysis, something any of us in the computing industry know, or should know, well. The two key questions are:

  • Is the solution effective?
  • Is the solution efficient?

The basic objective is to save expense, total expense; this includes factors beyond the virtualization platform, and is reflective of the above questions of effectiveness and efficiency:

  • Can the platform be revised, improved, or otherwise developed?
  • What does management want, really want?
  • What is the complexity of the effort?
  • What is the time line and scope of the effort?
  • Can the customer base survive the transition?

The answers to these questions will be environment specific, again for the sake of illustrative explanation, each question will be explored in conceptual context only.

Is the solution effective? If the solution has been around for a few years, the expected answer would be yes. However, early in virtualization experimentation by a number of firms, mistakes were made, and true cost avoidance or savings may not have been as expected or hoped. Often virtualization platforms are kept conservative and structured with caution, but over time, expectations of continual economies of scale demand more for less from the original design, clouding the results if not the perception of effectiveness.

Is the solution efficient? Just because initial cost avoidance was good, or better than expected, does not mean the virtualization platform was well managed over time. This is an unfortunate but common problem for many firms that have significant virtualization. Staff changes or organizational reorganizations often break efficiencies gained as the virtualization platform matured, and just when the solution should be a smooth operating entity, things go wrong. Both management and engineers just cannot seem to leave something that works alone.

Can the platform be revised, improved, or otherwise developed? This should be an obvious yes. If not, then something is crippling the solution, or someone has made some horrible decisions about the virtualization solution. Management is often impatient for results; a short-sighted mindset toward virtualization is often a key factor in the failure of the solution or the lack of rational expectations. It takes years for a virtualization platform to mature, regardless of the vendor. The vendor needs to be stable, consistent, and responsive; if not, walking away from the solution is more than possible. A struggling vendor is a key indicator that years later nothing but pain will result. The solution should work at reasonable scale, and a maturing feature set should never mask foundational or core functional issues. Fixing bugs and improving stability cannot take a back seat to new feature development.

What does management want, really want? Never overlook the fact that management may just not like the solution in place; the reasons for this are endless, but they often come down to two factors, or issues. The first issue is cost. Management hates paying for anything twice, so what was acceptable at the initial deployment of virtualization is often a problem a few years later. VMware has this issue today: VMware costs go up, while customers are expecting costs to go down. The feature set grows, but most customers do not use all the features, or worse, are forced to purchase many features just to get a few key needs addressed. This scenario just ticks management off, and is often the reason that a given solution is kicked to the curb even when successful. The second issue is competition. The best solution in the world, and I have discussed this issue in the past, often does not gain the greatest market share over time: Sony Betamax versus VHS, Apple Macintosh versus PC clones, etc. True, many will point out the iPod, but in fact the iPod dominates only because it has no real or true competition; once there is something that is better, or even close to the iPod? Guess what! For example, the iPhone is already facing this issue with the latest generation of cell phones from other vendors, which are approaching the iPhone's class of service and range of features.

What is the complexity of the effort? Or in other words, is it easy or hard to walk away from the existing virtualization platform? Today, moving from VMware VI3, e.g. VirtualCenter and ESX, to say KVM or Xen is not painless, but it is less painful every day. KVM and Virt-Manager make it easier to leave VMware vSphere now that KVM/Virt-Manager support OVF via virtinst, and Open-OVF is expanding to support Hyper-V. In fact, once OVF supports references to virtual machine disks rather than embedding virtual machine disk data, the transportability of virtual instances will be almost seamless, really closer to a classic cold migration, since the common shared storage is leveraged.
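The mechanical heart of such a move can be surprisingly small. Below is a hedged sketch of just the disk-conversion step, using qemu-img to turn a VMDK into qcow2 for KVM; the paths are placeholders, and a real V2V effort still has to handle the libvirt domain definition, guest drivers, and network re-mapping.

```python
# Sketch of the disk-conversion step of a V2V move toward KVM.
# Paths are placeholders; qemu-img must be installed on the host.
import subprocess

def vmdk_to_qcow2(src_vmdk, dst_qcow2):
    """Convert a VMware VMDK disk image to qcow2 for use with KVM."""
    subprocess.check_call([
        "qemu-img", "convert",
        "-f", "vmdk",    # source format
        "-O", "qcow2",   # output format
        src_vmdk, dst_qcow2,
    ])

vmdk_to_qcow2("/exports/web01-flat.vmdk", "/var/lib/libvirt/images/web01.qcow2")
# The guest still needs a domain definition (virt-install or virsh define),
# and Windows guests may need storage and network driver adjustments.
```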

What is the time line and scope of the effort? Well, this question is not fair; it is a trick question of course. If the complexity of the effort is near painless and/or management has already decided it is time to go, then does the time and scope of the migration to a new solution really matter? Yes and no. Yes, in the sense that a complex migration may delay the migration. No, in that once the action plan is defined and executed, the writing is on the wall. It is interesting that Microsoft has not been splashing all over the world how they are stealing customers hand-over-fist from VMware. Why? Well, Hyper-V, until 2008 R2 with CSV (Cluster Shared Volumes), was not a true challenge for VMware VI3, never mind vSphere. Of course, KVM is still maturing. Oracle is stuck trying to figure out what to do with SPARC, so Zones never made it to the sandbox. But once the pain of migration is resolved, watch out! Microsoft and others learned a number of things from killing off Novell. Microsoft Windows NT Server, even with its faults, was a general application server that was easy to manage and use, oh, and it did file serving as well. Microsoft made sure moving off Novell was as painless as possible, with CSNW (Client Services for NetWare) and GSNW (Gateway Services for NetWare) for example. NetWare 4 was a superior solution to Windows NT in serving files and directory services, but still it was painful to go from Windows to Novell, compared to Novell to Windows.

Can the customer base survive the transition? Aw, shucks, the end-users, who cares about them anyway? This is the most obvious sleeper issue. What? Wait! What the heck is an obvious sleeper issue? I gave this one away in the answer above… CSNW, for example, let a desktop think it was working with a Novell server when it was really a Microsoft Windows NT server. It was about as transparent and seamless as possible considering the radical differences between Novell and Windows NT. CSNW made the end-user experience, if everything was done right, as painless as possible. Now consider virtual instances, which are several steps away from hardware and even platform specifics, combined with V2V (Virtual-to-Virtual) tools and methods. Or even easier, a simple reboot, because KVM can support VMDK files. Sure, KVM VMDK support may not offer the best performance, but on the standard pain scale it is more like an itch to be scratched than a needle prick. OVF is another option as well. Unless someone is just plain goofy, customer impact should not be a factor.

As for the key question of this article… When is it time to say goodbye? When the answer to all of the questions above is… No big deal. Sure, many will say that the management demand issue trumps most if not all? Well, I disagree. Virtualization platforms that are run well, work well, and avoid cost take years to implement, and take years to retire, at an enterprise-scale deployment with 100s if not 1000s of hypervisor hosts and 10,000s if not 100,000s of virtual instances. Moreover, there is one superordinate concept that will take most if not all of the remaining pain out of an enterprise-scale migration between virtualization platforms on demand, wait for it, wait for it… Stateless! The emerging acceptance and implementation of stateless computing concepts, at the hypervisor level as well as at the guest OS level in virtual instances, is or was the last technical foundation stone holding back mass virtualization platform migrations from taking place with little or no pain. Is 2010 going to be insane or what? Will it be the year of mass virtual migrations? I think so? Do you?

2 comments October 8th, 2009

Clouds, Vapor and Other Things That Don’t Quite Exist Yet?

Virtualization Critical Evaluation, Chapter 16

VMworld 2009 left me with doubts and concerns. No, not because there is something wrong with vSphere, or such, but because for the first time I got the impression, the perception, that the core of the latest VMworld event was more flash than substance. I have never seen vendors so aggressive in trying to impress the hell out of the big whales, the true enterprise customers. For example, given three (3) evenings, well, four (4) evenings if you attend the TAM day events, I know of some Fortune 10 representatives that had 10 or 15 invitations to dinner, or even just casual discussions over an open bar, etc., stacked upon each other. VMware was of course mixing as well, with the best of them, doing its best to affirm customer loyalty and commitment.

EMC seemed subdued; NetApp seemed dominant, more tuned in to the customers? IBM and HP were doing a lot of Look at Me, Look at What We Have Now broadcasting to anyone that would listen. Are my perceptions wrong? Many of the smaller vendors were pulling all the tricks possible to get exposure: girls in nurse uniforms, girls in skin-tight silver body suits, the art of the hook, or eye-candy methods to get geeks to stop at booths was in high form this year. I for one should not be complaining, it made for an interesting exhibit experience! Of course every VMworld has had something unique to look at, for those of us that are white males anyways! Please, a drawing for a Wii? How about a PS3 and an XBOX 360 together, delivered by the girls in the nurse uniforms? Now that would be a crowd maker! What a raffle that would be!

But this year just seemed to push the edge beyond a reasonable point for flash but no substance. There was a lack of style and taste more obvious than in years past. Maybe it was the lack of vendors that buck VMware to a degree that really got my attention? The few counter-culture souls that had Got Xen? shirts put a smile on my face. Now, many of you will say I hate Xen. Not so, I just think, and have said before, that Xen has no significant path for growth potential now that Hyper-V R2 is a true threat to VMware vSphere, and KVM is growing to reasonable maturity because a few Work-Load-Management vendors officially announced support for KVM, even if their actual product support is vapor or limited. No one that attended VMworld 2009 was taking on VMware straight on; everyone was in step with VMware, and this is something I did not want to see. Per my perception, everyone was VMware friendly to the point that they dripped VMware spiel from official VMware press releases, thus making for a rather dull conference. VMware should accept all challenges, come one come all, even at VMworld, letting the VMware product suites defend their superior position, as such, no? Where was the Citrix booth to challenge VMware? Where was Microsoft? Heck, where was Xen with a huge tower of a booth saying, Xen is better than VMware, Dang it! It is the VMworld conference, not the VMware World conference, right?

Maybe this is what I missed, the great impact concept from anyone? The great Oh My God idea from VMware, of the kind seen at VMworld events of the past? I am sorry, but beyond the cloud concept, which is still weak and only quasi-consistent in definition, there is nothing but a desert of vapor or near-vapor solutions. For example, provisioning does not integrate painlessly with work-load-management, and work-load-management does not drive different hypervisors with HA & DRS or even VMotion-like functions in a vendor-agnostic fashion. Where is the painless transport between Hyper-V, vSphere and KVM? True, development is being done to establish work-load-management solutions to drive cloud use and automation, but the options are limited and incomplete so far, lacking robust maturity.

I am not being critical without cause; vendors have had 18 months, per my calculations, to establish cloud strategies and solutions, at the least between Xen, VMware and Hyper-V, and the claim that everyone was waiting for the vSphere API just does not gain traction with those I have discussed this topic with so far. It seems everyone is learning all the bad traits from Microsoft: not doing original work, but purchasing solutions that need to be reworked with considerable care to be integrated, and then taking a long time, through version 2.0, 3.0, etc., to achieve any stable, scaled result. The last thing we all need is a group of cloud management solutions that are all virtually (no pun intended) identical, silos among themselves, and all with the same gaps.

Everything at VMworld 2009 that could be considered 3rd party was little more than an attempt to improve on something VMware has already done. And yet, VMware is showing its own cracks in its ivory tower of virtualization, with some ugly bugs and/or design constraints around scaling and scope, cough, did someone say HA? The big comment I heard several times was that things should work as advertised now; that got a lot of attention from customers and those walking the halls. Original ideas were few and far between among all the add-ons and tweaks. However, it would serve VMware better to improve what exists rather than just add new features that VMware marketing thinks will sell to smaller and smaller niches. Is chasing market share profitable over the long term?

But a key question still exists: after we all have a cloud work-load-management solution, driving our own variant of ESXi stateless nodes, driving vCenter (and related toys) as an automation hub, not as a management console, what is next? Is it time to replace the batteries in the old crystal ball? Maybe, because the dream of a floating, global datacenter that follows time zone changes, with dynamic application on-demand scaling and loading, remains a dream for most of us that don’t have extensive engineering teams or deep information technology resources. Oh, a virtualization cloud, right. Is it just that, a vaporous dream? Clouds are mostly vapor, so why should I be surprised?

2 comments September 15th, 2009

RHEV: Dark Horse or New Dynasty?

Questions about the impact of RHEV

Well, I am off to the far side of the planet in a few days, and will not have access to electronic communication; this is by design, not by circumstance. Given that, this entry in the blog will be the last before VMworld 2009. With these topics addressed, on to the topic at hand. RHEV, Red Hat Enterprise Virtualization, is an interesting prospect, from one perspective 6 or more years behind VMware, 2 years, cough, behind Microsoft, and everyone else in between. Is RHEV going to be a Linux-only hosting hypervisor? Well, by design no, but in reality, maybe. And what of application virtualization, which is growing even before massive corporate clouds are commonplace?

Who is going to jump to RHEV to run Windows? It is clear that RedHat wants to make RHEV as inviting to the Windows community as possible, with RHEV Manager layered on IIS and .Net based. Of course RedHat will release a Linux variant of RHEV Manager that will run on, say, Apache. But with the beta of RHEV out to key entities, RHEV Manager is .Net based. Interesting, madness or genius? Not sure I could qualify or even quantify the answer!

My initial experience with the RHEV beta has been a mixed bag of minor successes and failures; as with any new platform, some things just do not work or react as expected. The interface is interesting; anyone that sees it will, I believe, be reminded a bit of the VMware MUI of the past. It did that for me when I first saw RHEV Manager. At a minimum I expect the interface will change over time, and with many enterprise clients using their own favorite life-cycle and/or work-load manager solutions, direct use of RHEV Manager may be less significant than such tools were in the past.

The question that keeps bubbling up to the surface of my perceptions is the same as the title above. RHEV, dark horse or new dynasty? Can KVM survive better than Xen has as an open source platform in the shadow of Citrix ownership, now that RHEV is reality? As I have said before, I think so, and now that I have spent some time with RHEV, I continue to believe that Xen is under threat; RHEV has just clouded up the water around Hyper-V and vSphere, bad pun intended. Thus, here are my questions about RHEV that I will be interested in, and that I believe others will be considering as well, between now and when RHEV is released…

  • How many Linux based environments are only using Xen or KVM or VMware because RedHat had not as yet released a solution of their own?
  • How many environments will use Hyper-V versus VMware regardless of RHEV?
  • Does RHEV do Windows better than VMware and even Microsoft? Now that will be a very interesting question! RHEV and Core seem similar, but are they?
  • How well will NetApp, EMC, etc. support RHEV?
  • Will corporate security teams accept RHEV as a locked down, or even stateless appliance, in the same fashion as ESXi?
  • Not being a fan of oVirt, at least not a fan yet, will RedHat walk away from libvirt or oVirt over time?
  • KVM on traditional Linux may be a real competitor to RHEV, given that the parent Linux partition may be the key platform and KVM virtual instances are just used to leverage that last 20% of a server that management thinks is going to waste.

Well, questions, and more questions? Believe it or not, I like this! I think all of us were getting way too comfortable with the world of virtualization as we know it, or knew it? Well, at least I was. Maybe the best aspect of RHEV is not that it exists, but that between KVM and RHEV, both Microsoft and VMware should be motivated to get off their collective back ends and innovate the next generation of virtualization, not purchase it. Oh, and did I mention Microsoft and VMware will have to reduce their cost models as well? I am sure RedHat has not missed the fact that the days of expensive virtualization platforms, no matter how much is saved by server consolidation or higher adoption rates of virtualization versus traditional hardware, are gone forever.

Add comment August 25th, 2009

VMworld 2009: Beyond Romancing the Core

Expectations and Ideas about VMworld 2009 based on VMworld Events of the Past

VMworld 2009? What will it be like? Will it be exciting or boring? Before I tackle that question, I believe a brief review of VMworld over the last 5 years is in order; this is not exhaustive, but more impressionistic… Oh, and brush up on your Shakespeare… The history of virtualization, if viewed through VMworld events, is of course rather dramatic.

I remember VMworld 2004 and 2005; the virtualization space was still a bit outlaw, a bit radical, and something you did that was socially unacceptable. Almost hidden under the bed, so your parents would not find it, right? The greed of virtualization, if that is an accurate term, had yet to strangle the wow factor down to something realistic. If you knew virtualization, you walked on water; if you did not know virtualization, you were just normal, if normal is a word that can be used for people that work in the information technology subculture. What was the big deal, beyond VMotion and VirtualCenter? The future promise of more shared storage options I remember being a key item of interest, whispers of iSCSI, and scale and scope improvements. How would you classify this era, Henry V, or Romeo and Juliet? True, VMworld 2004 was bold, and 2005 was strong. A Midsummer Night’s Dream might be applicable, given the unreal feeling VMworld had in the beginning.

VMworld 2006 and 2007 were the dawn of feature set expansion and the growth of new elements to virtualization; AMD and Intel were getting into the swing of things, with the groundwork by Intel for VT-x, VT-c, and VT-d, and similar work along the same lines by AMD. In some respects this was the golden era of the hypervisor as the central focus of virtualization. Sure, management tools and such came along in step, but the hypervisor was the key to all the plans and efforts. Behind the closed doors of the engineering research departments of various firms, both sane and insane ideas were being evaluated; out sprang lab management, life-cycle management, and 10s if not 100s of in-between solution concepts. But was everyone missing something? Or was it just that cost-saving-focused customers were not as accepting of anything labeled virtualization as they had been just a year or two before? I remember walking around the exhibits in 2007 thinking, well, this does not seem very exciting. HA and DRS were fine, but incremental steps, not crazy radical. This is the era of virtualization that I would describe as The Taming of the Shrew.

VMworld 2008 had a few high points, but I was focused on the specific ideas that were important to me and those I support and work with, for example, better utilization of storage, faster and greater virtual instance backups, and better management of the environment at an enterprise scale. VMworld was full of things that were ideas yet to be realized. A notable exception was PowerCLI, or what would become PowerCLI; that specific breakout session had some of the real, honest, old-fashioned energy that VMworld 2004 and 2005 seemed to buzz with. I remember also that there seemed to be a very large number of people at VMworld 2008 that were doing virtualization for the first time. I am sure this is the case at all VMworld events, but for some reason in 2008 it seemed more obvious to me. Maybe it was because I saw VMworld now as an experienced alumnus? I was even part of a presentation at VMworld 2008, so that gave me a different perspective as a limited speaker. Moreover, I was underwhelmed by the VMware VM FT feature. As nice as VM FT is, I just was not that impressed, and to be fair I am not sure why; am I now jaded by virtualization? VMware SRM, now reality, seemed late to market. Some would argue with this characterization, but for me, this is the era I would classify as Much Ado About Nothing.

What do I expect from 2009? Right now I am not sure. Since 2007 I have been waiting for VMware to grab the concept of virtual containers and make it their own. You can only get so much done with master images, gold images, etc. You can only do so much with tweaking ESXi, which needed some tweaking. VMware has instead gone to the clouds, literally in the virtualization sense. Yes, everyone is in love with clouds these days. I am not saying this is right or wrong, but I am saying it is not romancing the core, as the focus of virtualization once was. I believe VMware has missed a significant opportunity with virtualization containers. And the opportunity is long gone. True, VMware does not, did not, own an operating system in the classic sense; thus for VMware to get into the container scope, they may have had to, will have to, deal with the operating system devils. Even the once-dominant Oracle had to purchase an operating system to get a container model. I will always remember, with a strong sense of nostalgia, my early career experience with SPARC hardware. Alas poor Solaris, I knew thee well. Sun Microsystems, may flights of computing angels take thee to thy rest. …Of course, my heartfelt apologies to the great Bard. VMworld 2009 may be closer to Hamlet than I would want. For now, I would guess The Merchant of Venice is applicable. After all, the director and actors are responsible for presenting Shylock such that the character is viewed with disdain or sympathy by the audience. If you don’t get the inferences here, well… read more Shakespeare.

Maybe the guys at Gartner, who have been significant in their lack of pontificating about virtualization over the last year or so, have finally gotten the batteries in the old crystal ball replaced? RedHat, if not IBM as well, is focused on guiding KVM, and with the RHEV beta a reality, the next logical step for RedHat is not operating system isolation but containerization. Hyper-V as it matures will become a containerization model as well, I believe. Where does that leave VMware? VMware… Look like the innocent flower, but be the serpent under it. Five years ago it was done; now is the time again to take on RedHat and Microsoft straight on, head on… Double, double toil and trouble; Fire burn and cauldron bubble. Now, where did I leave that copy of Macbeth?

Add comment July 29th, 2009

What Now, The Year Is Almost Over!

Virtualization Critical Comparison – Chapter 08

A few days ago, I had to call a friend of mine to discuss Hyper-V. This friend of mine is a smart guy, a player, a mover, and a shaker; at the least, when he voices an opinion about virtualization, vendors, a lot of vendors, listen. It was once said that when my friend and I agree, there is not a vendor on the planet that does not listen. True, it does not mean we get our respective wish or way with the vendor, but they do take serious note. Well, when we don’t agree, that is when things get interesting. And on Hyper-V versus KVM we do not agree; we see the points of each other's respective perspectives, but we do not agree.

In short, Hyper-V is coming along well with the updates and changes in Windows 2008 R2, including CSV. However, as I have also said in this blog, KVM is coming along as well. With Xen and VMware on the sidelines, for different reasons, this is starting to feel like the finals of a World Cup Football (Soccer) tournament. Why? Well, everyone in the computing industry is watching, everyone. Regardless of whether they admit it or not, the finals are here. What? Wait, wait. I hear the calls and yells from the fans in the stadium, saying… What final 4? Parallels, Iron Works, etc. are all players. True, but did they make it to the finals this year? No. Virtualization container concepts took it in the teeth this year; this was the year of the mature virtualization platforms gearing up for cloud computing, the enterprise scale, and, yes, the hypervisor operating-system-isolation based big strikers.

This year is almost over, and eyes are turning to 2010. As KVM gains acceptance with the bigger life-cycle application vendors, Surgient, ManageIQ, and a few others that slip my mind at the moment, a big limitation with KVM disappears. Whereas Hyper-V paired with SCVMM could be seen as at a disadvantage: Microsoft has not yet, that I know of, written a truly virtualization-agnostic management framework. Not when SCVMM has to drive vCenter, and vCenter has to drive ESX? The libvirt community is pushing but can only go so fast, although RedHat, as well as IBM, is going to change that to a reasonable degree as they drive more resources and gain results that support KVM, not Hyper-V. So where does that leave my metaphor for World Cup Virtualization, cough, Soccer, wheeze, Football?
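The libvirt point is easy to illustrate: the same few read-only calls work whether the connection URI points at local KVM or, through the ESX driver, at an ESX host, which is exactly the vendor-agnostic layer I keep wishing the big management suites exposed. A minimal sketch, assuming the libvirt Python bindings are installed; the URIs are placeholders.

```python
# Minimal read-only inventory sketch with the libvirt Python bindings.
# The same code works against KVM (qemu://) or ESX (esx://) URIs.
import libvirt

def list_domains(uri):
    """Return (running, defined-but-stopped) domain names for a given URI."""
    conn = libvirt.openReadOnly(uri)
    try:
        running = [conn.lookupByID(i).name() for i in conn.listDomainsID()]
        stopped = conn.listDefinedDomains()
        return running, stopped
    finally:
        conn.close()

# Placeholder URIs; the ESX driver will prompt for credentials.
for uri in ("qemu:///system", "esx://vmhost.example.com/?no_verify=1"):
    print(uri, list_domains(uri))
```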

After KVM and Hyper-V smash into each other, since they are approximately on the same timeline/roadmap for parity in feature set, the winner of this clash will focus on VMware. Yes, yes, I know that regardless of what happens, neither KVM nor Hyper-V will really disappear, but someone has to be the true challenger to VMware. Oracle is still lost trying to figure out how to market Sun Microsystems technology; it is sad but true that the precursor to true virtual containers in the modern age of virtualization, Solaris Zones (Containers), is cursed to oblivion. Sorry Oracle, but I have real trouble finding anyone that wants to pay a premium for Oracle and be locked into a narrow virtualization platform. Cost is a factor, and interoperability is also a significant factor, based on the informal questions I have posed to many of my friends in virtualization.

Xen, as much as I like the platform, is a spectator right now as well. Citrix has its niche, and will continue at some level, but does anyone really think Xen will compete well with Hyper-V or KVM given the resources that Microsoft can bring to the arena of virtualization, or what RedHat can bring combined with IBM? Here is a radical idea… Parallels and Citrix should merge! The result would be a mature virtualization container model combined with the now dominant application virtualization solution. That would be interesting, but the downside is that it would cannibalize Xen. After all, I do believe that operating system isolation cannot dominate the industry for much longer. The pressure to implement cloud computing with the absolute minimum disk footprint on SAN or NAS? Never mind the push to reduce processor packages per server node, and increase cores per package? That means, at least to me, lots of smaller, leaner servers hosting applications, databases, and very lean virtualization. Now where has that idea come from… Oh yes, PODS of course.

KVM fits this concept now: you run Linux applications on the server, and when the given server has cycles free you run a few virtual instances, just a few. Radical, but not unheard of; there have been quite a few operating systems that supported primitive application partitions while the parent partition/system did significant work. Hyper-V cannot do this effectively, nor can VMware ESX nodes on vSphere. Think I am crazy? Talk to your respective architecture teams; they are trying to figure out how to get the last 5 or 10 percent of utilization out of dedicated application servers, as a dynamic resource to handle overflow capacity needs, in the cloud, right? A rough sketch of that overflow idea follows.
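As a rough sketch of that overflow idea, assume the guests are already defined in libvirt and use nothing fancier than the host load average as the gate; the domain names and threshold are made up for illustration, not a recommendation.

```python
# Sketch: start predefined overflow guests only when the host has idle cycles.
# Domain names and the load ceiling are illustrative placeholders.
import os
import libvirt

OVERFLOW_GUESTS = ["overflow-web-1", "overflow-web-2"]  # hypothetical names
LOAD_CEILING = 0.60 * os.cpu_count()  # leave headroom for the native workload

def start_overflow_if_idle():
    load1, _, _ = os.getloadavg()
    if load1 >= LOAD_CEILING:
        return  # the native Linux workload needs the box; do nothing
    conn = libvirt.open("qemu:///system")
    try:
        for name in OVERFLOW_GUESTS:
            dom = conn.lookupByName(name)
            if not dom.isActive():
                dom.create()  # boot the defined-but-stopped guest
    finally:
        conn.close()

start_overflow_if_idle()
```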

Add comment June 30th, 2009

Learning to Hate VOIP

The Lack of a Virtualization End-User Rant?

In my opinion, I am learning to hate VOIP (Voice over IP). As an end-user of various technologies, mobile, virtual, media based, what have you, I have come to hold some technologies in considerable contempt, while others I have come to appreciate as innovative or even conducive to lifelong blissful use. So, as I noted above, one technology that I have learned not to enjoy is VOIP. VOIP is not evil; it is just, given my experience, not quite right. Good old analog phones were consistent, stable, and the sound quality was reasonable compared to VOIP. Thus I have learned to view VOIP as horrible but necessary, meaning that as much as I don’t like using VOIP, I refuse to pay for analog, which was at times, given my location and service providers, as much as three (3) times more expensive than any VOIP. My experience with VOIP is across many vendors, in many situations; so it is not a situation where a single provider or vendor did a poor job, but one where they all are less than great, or should I say, less than good old analog.

So what does this have to do with virtualization? An interesting parallel that has not happened, as yet, between VOIP and virtualization. For example, I hear people gripe about VOIP on a routine basis; the latest VOIP joke is the top topic at the beginning of conference calls, or there is the loud comment intended to be soft-spoken when someone on a conference call is the latest victim of the echoing pop, static fuzzing, then dead line issue that VOIP seems to suffer from more often than I would like. Where is the parallel to virtualization? Why are we not buried in a cascade of angry end-users deploring the negative aspects of virtualization? I always think of the angry mob in the original black-and-white Frankenstein movie, where the local villagers are out for blood, with pitchforks and flaming torches, to avenge the good Doctor at the expense of the monster, when the idea of a virtualization end-user revolt comes to mind. I wonder if other technical resource people feel the same? Sure, there is griping about virtualization, but it has not reached critical mass. I don’t see a flood of Google hits discussing the reversal of virtualization in Fortune 50 companies, whereas the volume of bad VOIP experience is slowly growing. So why does VOIP seem to have an image issue, while virtualization does not?

The easy answer one could submit is that the management above said to the technology teams… Virtualization is going to happen or else, make it work, or we will find someone that can. Similar in tone, management above said to the end-user population… Choice is not an option, use virtualization, deal with it, or we will find someone else that can. This is real, everyone; I have been in meetings and conference calls where these comments were made. But then again, this is the easy answer, and although true to a reasonable degree, it is not the full rationale for why we have never seen an anti-virtualization revolt of any significance.

The hard answer is that everyone, technical support and end-user alike, knew virtualization makes sense; we had too much capacity and scale, and computing resources were not used effectively. But was this a fault of the technology, or of the architects, project managers, and design engineers? Again this is true, but is it the entire answer? Management is convinced there is waste in computing, this is the new religion, and unfortunately this waste, qualified more than quantified, is often extracted by the reduction of FTE (Full-Time Equivalent) man-hours. People cost too much. So enter the 100s of life-cycle vendors and publishers, saying we can do more with less, meaning their application or solution suite eliminates people. Back in my MBA course days, I remember a professor smiling while saying… 80% of your business cost is people, most of the time. Is that the real answer? Reduce head count? For nearly 10 years, processor scaling has grown in the vertical direction, faster, more IOP-capable processors, but the cost curve has been nearly flat. So there was no logical reason not to chase the best computing platform, in reference to processors and related components, because after all no project manager wants to be responsible for implementing a slow system, right? And of course project x, y, z eliminates FTE on the end-user side! Bingo, we end up with extensive capacity, at the same time that networking and storage system performance has made only modest gains. In short, the computing industry has never done well with the idea of doing with less at the beginning versus at the end.

The slick, smooth answer is that virtualization is cool, feels kewl, and makes management happy, no matter how effective it is. Tangible cost avoidance is a result of a good virtualization strategy and even better tactical implementation and support. But it can go wrong. What, ineffective virtualization? Oh, yeah, do a bit of research; mucking up virtualization is not hard, it happens. The emphasis becomes cost saving, not customer quality, and/or the technical team that had the skills and knowledge to support a strong virtualization deployment is eliminated or out-sourced, so the customer experience is impacted. All true, all has happened, just no one wants to acknowledge this. Imagine what a poor implementation of cloud computing is going to be like? Ha!

So, back to VOIP. Yes, VOIP is doing more with less, which is a fact? No, it is providing some features at the expense of quality, with less cost. The quality is less, the performance is less, the stability and consistency are less, but, important to remember, the cost is less. Virtualization is the same idea, and even cloud computing is the same basic mindset, doing more with less. What, wait, I can hear many yelling… You are wrong! Virtualization is doing more with what you already have. But is that true? I question it. Yes, having servers idling during the middle of the night was unused capacity. Yes, being able to re-provision systems on the fly, in a stateless model, is nice, but is it really more effective or efficient? Would it not be more efficient to just purchase less equipment and lease less capacity up front, and achieve the same result without all the expensive virtualization components, tools, and added complexity? Oh, but this means the architecture and engineering have to get it right in the beginning? Yes!

Don’t be fooled, the end-users are not thrilled with virtualization so far; they have been pushed, threatened, and kicked into accepting the following perspective… It is a just-good-enough computing resource model. Virtualization, no matter how good the tools and procedures, requires smart, dynamic, and quality support; without this, expect to have a sinkhole in your respective technical support organization that absorbs issues and problems but never solves them all, so confusion and then frustration builds. With VOIP it may be more visible, because VOIP is an average consumer-level product. Just consider an end-user revolt against cloud computing based on virtualization? It may be more painful when it happens than, say, the VOIP revolt. Is VOIP the canary in the coal mine? The warning indicator of how virtualization will be regarded in the future? Maybe, or will we all accept less overall, believing we are getting more for less?

3 comments June 16th, 2009


