Archive for August, 2007

ToutVirtual Chosen for Network World’s List of “10 Virtualization Companies to Watch”

Carlsbad, California – August 30, 2007 – ToutVirtual, an emerging leader in management software for virtual computing infrastructures, today announced it has garnered its place in Network World magazine’s list of “10 Virtualization Companies to Watch”. On the heels of its freeware launches over the past two years, the company has now firmly staked its ground in the virtualization market with its recently launched VirtualIQ Pro product.

Continue Reading August 30th, 2007

Virtual Instance Performance Context

Virtualization, Fine, Well Sort Of? – Chapter 03

Virtual machine performance is one of those topics that is both art and science. My grandmother the English Professor is yelling at me already… not "is" but "are" both art and science. But I am making a point: virtual instance performance has a statistical, mathematical aspect, but numbers alone do not always produce the expected results. You have to know the context and judge its significance. That is the art aspect, and it is absolutely critical when you are trying to predict performance before you act. I cannot tell you the number of times I have looked at virtual instances and just knew why one was not performing well, and the only option that made sense was to move it to another host. Only to find that once I moved the given virtual instance to another host, things got worse; or, following my gut, I moved a different instance instead, and things got better unexpectedly, in a way the mathematics just could not illustrate based on the performance metrics under my nose. Sometimes the math does not make sense; sometimes the math alone is not enough.

There are several different ways to define the context of performance. I am going to try, and I do mean try, to establish a framework for analysis rather than just a bunch of rules, since any rule defined here will be disproved by someone, somewhere, in their specific situation. Also, for the sake of discussion, the terms Virtual Instance and Container are interchangeable unless otherwise noted. The performance relationships within a virtualization context are:

  • Intra Virtual Instance Performance
  • Peer-to-Peer or Inter Virtual Instance Performance
  • Host to Instance Performance
  • Host to Host Performance
  • Host to Cluster Performance

Intra Virtual Instance Performance – This is what you could call the false performance of a virtual instance. For example, Windows OS fans will recognize PerfMon (Windows Performance Monitor), and it does a good or great job, depending on your opinion of monitoring in Windows; it tends to be more accurate when monitoring external to its target, on traditional hardware, but opinions vary. However, any tool that relies on its own context can be completely misled when monitoring from, or in, a virtual instance. Virtual instances do not run in real-time. Did everyone get that? They do not run in real-time, but they think they do! Thus, I cannot count the times operational teams or developers have watched PerfMon show outrageously poor performance while the virtual host denotes that the individual virtual instance is not suffering at all; it is poor code or poor application design that is the issue. In short, do not ever rely on intra virtual instance performance in empirical terms. Intra performance should be a qualitative analysis, not a quantitative analysis. Is the end-user experience good enough? That is qualitative analysis.
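
To make this concrete, here is a minimal sketch in Python, with made-up sample numbers rather than any real PerfMon or hypervisor API, of the qualitative judgement I am describing: compare what the instance believes against what the host reports for it, and trust the host.

    # Hypothetical samples: what the guest reports about itself versus what
    # the virtual host reports about that same guest (e.g. CPU ready time).
    guest_sample = {"cpu_pct": 95.0}                       # the instance is "screaming"
    host_sample = {"cpu_pct": 22.0, "cpu_ready_pct": 1.5}  # the host sees no suffering

    def diagnose(guest, host, ready_threshold=5.0):
        """Qualitative call: treat guest numbers as a hint, host numbers as real."""
        if guest["cpu_pct"] > 90 and host["cpu_ready_pct"] < ready_threshold:
            # Guest claims starvation, host says it is getting its CPU time:
            # suspect poor code or application design, not the infrastructure.
            return "look at the code, not the host"
        if host["cpu_ready_pct"] >= ready_threshold:
            # The host itself says the instance is waiting for CPU: real contention.
            return "look at contention or placement"
        return "no obvious resource problem"

    print(diagnose(guest_sample, host_sample))  # -> look at the code, not the host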

Peer-to-Peer or Inter Virtual Instance Performance – This is the foundation of what quantitative performance analysis is possible for virtual instances. This is performance reported by the virtual host; it is real-time, balanced by the context of the host. The virtualization industry has struggled with this model from a predictive modeling perspective. Our kind and wonderful management, those nurturing souls, are demanding a predictive model for introducing a virtual instance into an existing environment or host, but this takes extensive brain-power, such that no virtualization platform publisher has yet cracked this nut; they keep scratching at its surface. Automated models integrated into Distributed Resource Scheduler (DRS), or even features that indirectly use performance trending, like High Availability (HA), that attempt to load balance or distribute instances across several hosts for the best performance and stability, all use historical data. Historical data is worthless if you change the context: the very act of adding a virtual instance to an existing group of instances changes all the variables of the equation. In fact, because there is no linear or logarithmic relationship between virtual instances, it is all a house of cards. If you think you can accurately predict the impact of adding a new virtual instance to an existing pool of instances, patent it, write a book, make millions, I wish you the best. You will have solved one of the true problems in virtualization today. Every physical-to-virtual tool publisher out there will hire you in a microsecond, because they see this problem as the search for the Holy Grail; their technology is just begging for accurate predictive modeling.
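
A toy illustration, with entirely hypothetical numbers and a made-up contention penalty, of why the additive use of historical data breaks down the moment the context changes:

    # Naive prediction: add the newcomer's historical utilization to the rest.
    existing = [20.0, 25.0, 30.0]   # trended CPU % of instances already on the host
    newcomer = 15.0                 # trended CPU % of the instance being introduced

    naive_prediction = sum(existing) + newcomer   # 90.0, looks comfortably under 100

    def with_contention(loads, penalty=0.004):
        # Hypothetical model: every pair of instances adds a small scheduling,
        # cache, and I/O contention cost, so the relationship is not linear.
        pairs = len(loads) * (len(loads) - 1) / 2
        return sum(loads) * (1 + penalty * pairs)

    print(naive_prediction)                        # 90.0
    print(with_contention(existing + [newcomer]))  # 92.16, and the penalty itself
                                                   # shifted when the newcomer arrived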

Host to Instance Performance – This context is quantitative on the host side, but qualitative on the virtual instance side, since intra instance performance as noted above applies. Since virtual hosts attempt to provide the very highest degree of resources possible to all virtual instances, this really is a question of overhead. Container models should always have an edge over hypervisor models in this context. The weaker the host hardware, the more significant the impact the host processing overhead has on the virtual instances. This is where things like disk (DASD) performance, network latency, memory latency, and backplane or bus sub-system performance are critical. For example, if you have 4 dual-core CPUs, but you purchased a cheap main-board with a slow front-side bus, or worse yet, very slow cheap memory, have you put very fast squirrels on rather slow wheels?
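
Some back-of-the-envelope arithmetic, with hypothetical figures, for the squirrels-on-wheels problem: the shared bus, not the core count, sets the ceiling.

    cores = 8                     # 4 dual-core CPUs
    per_core_demand_gbs = 2.0     # memory bandwidth each busy core would like, GB/s
    bus_bandwidth_gbs = 6.4       # what a cheap main-board/front-side bus delivers, GB/s

    total_demand = cores * per_core_demand_gbs   # 16.0 GB/s wanted
    per_core_supply = bus_bandwidth_gbs / cores  # 0.8 GB/s actually available

    print(f"demand {total_demand} GB/s vs supply {bus_bandwidth_gbs} GB/s")
    print(f"each fast squirrel gets {per_core_supply} GB/s of wheel")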

Host to Host Performance – This is where the industry is starting to make some waves that really do positively impact the performance of virtual instances. Automation of this type of analysis is gaining preference, taking some of the guesswork out of the situation and making host to host performance analysis more deterministic. This is possible because such automation is fairly simple given sufficient trending data, being a variant of the classic traveling-salesman or shortest-route/shortest-time statistical modeling methods. If you think of the resources per host as a limited quantity, analogous to time or distance, in relation to virtual instances, then you can use trending data to optimize which instances are floating on which hosts. In fact, if you think about it in simple terms, what do you do when you have a misbehaving virtual instance? You often move it to another host, where you believe it will either perform better, or have less impact on the other instances that may exist on the host you are migrating towards.
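
A simplified sketch of the idea, and not any vendor's algorithm: treat each host's capacity as the limited resource, take the trended average demand per instance, and greedily place the hungriest instances on whichever host has the most headroom. Host names, instance names, and numbers are all hypothetical.

    hosts = {"host-a": 100.0, "host-b": 100.0}                            # capacity units
    trended_demand = {"vm1": 40.0, "vm2": 35.0, "vm3": 30.0, "vm4": 20.0}

    placement = {}
    used = {h: 0.0 for h in hosts}

    # Hungriest first, always onto the host with the lowest utilization ratio.
    for vm, demand in sorted(trended_demand.items(), key=lambda kv: -kv[1]):
        target = min(used, key=lambda h: used[h] / hosts[h])
        placement[vm] = target
        used[target] += demand

    print(placement)  # {'vm1': 'host-a', 'vm2': 'host-b', 'vm3': 'host-b', 'vm4': 'host-a'}
    print(used)       # {'host-a': 60.0, 'host-b': 65.0}

It is the same instinct you follow by hand when you migrate a misbehaving instance; the automation just does it with trending data instead of a gut feeling.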

Host to Cluster Performance – This context is host to host performance, but the analysis is done across groups of hosts treated as a logical entity. The analysis is very similar to host to host performance evaluation.

Of course, proper virtual instance performance analysis is dependent on context. Consider the PerfMon example above: it looks like good data, and if we were not talking about virtual instances, it would in all likelihood be accepted as valid data. However, it is not acceptable data inside a virtual instance.

A number of scientists in the 20th century commented on this same issue of context as it applies to observation and measurement, in various ways. Let's see… Albert Einstein, in the Theory of General Relativity, states that the observer defines the context of what is going on. For example, if you are watching an object fall, and you are not falling, you see something falling. However, if you are also falling at the same rate and in the same direction as the object, and you do not realize you are falling, you may simply see the object sitting next to you. Of course, the most descriptive theory applicable to virtual instances and monitoring their performance is the Heisenberg Uncertainty Principle and its frequent confusion with the Observer Effect, wherein the actual act of observing, or in our case measuring performance, impacts the performance being measured. Heisenberg understood the Observer Effect, but his uncertainty principle was really describing measurement uncertainty: some uncertainty exists in every measurement, and it grows when inaccurate measuring methods are used, or when something unknown is impacting the measurement process. For those that read this blog often, you may be reminded of something I have said in the past… Know What You Don't Know… the very phrase implies that you have to acknowledge context for any analysis that is done. In part II, I will discuss the more mechanical side of ensuring virtual instance performance, now that we have the nasty context topic out of the way.

3 comments August 29th, 2007

Reducing Total Cost of Ownership?

Virtualization?  What The Heck Is That? – Chapter 03

This has to be one of the most loaded questions in computing today. Why is cost of ownership an issue? Why is it now driving all kinds of revisions in the industry? Well, global warming, the increasing cost of cooling systems, and other environmental factors aside, processors have gone insane, like we need more cores? Not to mention real estate and physical infrastructure costs, both network and storage connectivity expense, and the list goes on. It all boils down to one issue, which is core, bad pun intended, to virtualization, and one we have already discussed in this blog… effective and efficient utilization of resources.

Virtualization puts to use resources that are not otherwise being used in an efficacious way. Why does this situation exist? Blame developers, yes developers, and blame the management of the developer groups even more. Everyone is in a rush to market, such that we, the customers, get nothing but junk.

Virtualization, or rather virtual instances, isolate bad code and protect virtual instances from each other. This is true. But why does it matter? Because if good code dominated the computing industry, co-hosting would have killed virtualization, hands down! In fact, it may yet do so at some point in the future.

Now, just about everyone reading this article is saying… What is this bozo talking about this time? I expected to read about reducing Total Cost of Ownership (TCO), not hear yet another tirade on developer coding issues. Can we please get back to the title? Well, I will answer that question… in short, I have actually been on topic the entire time. Good code does the following:

  • Saves money by reducing developer coding cycles and lines of code generated
  • Gets products to market faster with shorter alpha, beta, and release candidate cycles
  • Provides for happier customers
  • Allows for more extensive leverage of Co-Hosting
  • Reduces total ownership cost across the support infrastructure, with fewer customers griping?

Of course, virtualization does a few things that co-hosting does not, including:

  • Encapsulation of virtual instances, making them easy to migrate and easier to back up and recover
  • As noted above, isolates virtual instances from other virtual instances to the greatest extent possible
  • Customers think they are isolated, and so believe they have better performance… bah!

However, virtualization incurs many of the same expenses as traditional hardware, and aggravates a few issues, for example:

  • Virtual instances do not reduce software costs; application and operating system licensing is still incurred per instance, whereas co-hosting reduces application and operating system expense
  • Virtualization only partially offsets or avoids the infrastructure cost compared to traditional hardware
  • Virtualization generates a completely new expense, the virtualization software licensing itself, and it is not insignificant
  • It does reduce infrastructure cost, cutting network and storage connectivity costs in total
  • Virtualization shares a cost risk with co-hosting that management tends to discount, or worse, ignore: the all-the-eggs-in-one-basket issue, of course! Lose one virtual host, and you lose all of its virtual instances.

The strengths of virtualization can be trumped by co-hosting on two key issues: application/operating system expense, and virtualization software platform cost. For example, three of the most obvious co-hosting models just happen to be very common solutions in computing today:

  • Microsoft Internet Information Server (IIS)
  • Citrix Application Server
  • Microsoft SQL Server (SQL)

These same environments are designed to be co-hosting oriented, and they downright blow gigabyte chunks when they are embedded in virtual instances compared to straight co-hosting. For those keeping score, yes, yes, virtualization saves money if it is done right, but if co-hosting is done right, it should save more. All that is needed is good code. So, we have come full circle, yes? If we had good code, we could do better or more co-hosting. Taking it a logical step further, design better code so co-hosting is not even needed? What a radical concept! And, well, then we would not need virtualization at all either!
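
To make the licensing point concrete, here is the arithmetic with entirely hypothetical price tags; plug in your own figures. The structural difference is that co-hosting shares one operating system license across all the applications, while virtualization buys one per instance and then adds the virtualization platform license on top.

    apps = 10
    os_license = 800.0           # per operating system instance (hypothetical)
    hypervisor_license = 3000.0  # per virtualization host (hypothetical)
    hardware = 8000.0            # one physical server either way (hypothetical)

    cohosting_cost = hardware + os_license                                   # 8,800
    virtualization_cost = hardware + apps * os_license + hypervisor_license  # 19,000

    print(f"co-hosting:     {cohosting_cost:,.0f}")
    print(f"virtualization: {virtualization_cost:,.0f}")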

Add comment August 22nd, 2007

Major Tom, Cough, VMware The Count Goes On!

I realized, while listening to a Peter Schilling extended remix of the song Major Tom on my iPod, that it is a true metaphor for VMware, including its past, present, and potential future. Why am I saying this? Let me enumerate. VMware launched a great idea, and per the song… standing there alone… so was Major Tom launched into space. VMware was praised as the foundation of a new era of computing, just as Major Tom was named a hero for risking his life in space.

Continue Reading 7 comments August 15th, 2007

Short Distance Disaster Recovery?

Many of you are thinking, I hope, that you know what short distance disaster recovery is. The concept should be familiar, even if the term short distance disaster recovery is not. A few typical scenarios would be:

Continue Reading 2 comments August 8th, 2007

Know What You Don’t Know?

Know what you don’t know. Any experienced computer technology expert knows this simple rule; if they don’t, fire them now, seriously. While you are at it, if the company CPAs do not know this as well, fire them too, and I will explain later why. This know-what-you-don’t-know concept is the basic principle of the scientific method. In other words, if you don’t know what questions to ask in solving a problem, you have no possible hope of finding a viable solution. How does this prove applicable to virtualization?

Continue Reading 3 comments August 1st, 2007

