Posts filed under 'Virtual System Management'

Intelligent Workload Management: Myth or Mystery?

Virtualization Critical Evaluation – Chapter 15

Ever wonder why various firms have published only parts of their wondrous cloud designs?  The provisioning and infrastructure aspects seem obvious, no?  PXE, DHCP, TFTP, HTTPS, VLANs, PODs, switches, racks, chassis, blades, servers, remote lights-out or out-of-band control devices, stateless operating systems, floating applications, node-aware applications delivered as services, grid application solutions as primitive forms of cloud services?  The list goes on and on, but in reality nothing discussed here is new.  None of it represents a significant leap in design or concept.

What is not discussed is the workload-management aspect of the cloud.  Over the last few years a small number of patents have been filed that discuss concepts and unique implications of resource dynamics and autonomous computing infrastructures.  Some describe types of intelligence applied to resource demand, availability needs, and so on.  This would be the foundation of workload management for a cloud, to be sure.  I leave it to the reader to do some basic research on what has been published from a patent perspective, but beyond hints at how workload management might be implemented from a logical design perspective, there is very little information on how it is implemented at a functional or tactical level.  Why workload management should be implemented has been discussed at various points since the initial days of virtualization theory, or virtualization reality as we know it today.

Moreover, a number of firms have approached aspects of workload management, usually from a control-and-reporting perspective that grows into a historical-trending, capacity-forecasting methodology.  Very few entities have tackled the predictive-analysis aspects of workload management.  No, VMware DRS is not predictive analysis, nor does VMware Capacity IQ offer such.  This is not a stab at VMware, but an acknowledgement that even the 800-pound gorilla of virtualization has trouble with the subject.  Predictive analysis is non-trivial, complex, and difficult to do well; a magic wand might be needed, and as far as I am concerned, is strongly recommended.  No disrespect to the Wizards Guild intended.
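
To make the distinction concrete, below is a minimal sketch of the historical-trending, capacity-forecasting style of workload management just described: fit a line to past utilization and extrapolate forward.  The weekly utilization series and the 85% ceiling are invented for illustration, and true predictive analysis would have to go well beyond extending a trend line.

    # Historical-trending capacity forecast (Python 3.10+ for linear_regression).
    # The utilization series and the ceiling are invented for illustration.
    from statistics import linear_regression

    weekly_cpu_util = [52, 54, 57, 55, 60, 62, 61, 65, 67, 70]  # percent, per week

    slope, intercept = linear_regression(list(range(len(weekly_cpu_util))),
                                         weekly_cpu_util)

    CAPACITY_CEILING = 85.0  # percent at which new capacity should already be in place
    weeks_left = (CAPACITY_CEILING - weekly_cpu_util[-1]) / slope

    print(f"observed trend: +{slope:.1f}% CPU per week")
    print(f"projected weeks until the {CAPACITY_CEILING:.0f}% ceiling: {weeks_left:.1f}")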

If you want to give yourself a headache, Google workload management and see what appears, then Google predictive analysis, and then try to intersect the two topics. Some of the most interesting solutions I have seen never appear in the results the joint search returns.  It appears as though the two topics are mutually exclusive?  Ha!  No way.  Note, we are not discussing business analytics or business intelligence here, but cloud computing.  Business or application analytics are rules of logic that drive solution behavior at the application level, ignoring the infrastructure the application runs on, with the expectation that said infrastructure is homogeneous or quite uniform.  The grid technologies, cluster technologies, and virtualized high-availability models of today are taken for granted by business application analytics, and are quite static compared to what a heterogeneous cloud should be able to do now or in the very near future, no?

So where does that leave us?  With more questions than answers?  Yet again?  Some might expect, or decry, that I have not rattled off 5 or 10 products, itemizing the good, the bad, or even the ugly aspects of each, rating or even ranting about whatever.  Not this time.  This is a topic that any virtualization architect with any integrity should learn the old-fashioned way… do your own research.  That is the only way to appreciate the complexity of the subject, the difficulty of design, and the stress of achieving an implementation.  Intelligent workload management is still a mystery, but no longer a myth; the functional components exist and can be leveraged.  However, getting square pegs into round holes requires some original thinking and unique effort.

May 11th, 2010

ToutVirtual Announces VirtualIQ Pro for XenServer

VirtualIQ suite of products now adds support for leading virtualization platform from Citrix

Carlsbad, Calif. – February 23, 2009 – ToutVirtual, an emerging leader in virtualization intelligence, optimization and performance-management software for virtual computing infrastructures, today announced that its VirtualIQ suite of products now also supports all editions of Citrix® XenServer. With certification under Citrix Ready, VirtualIQ is now certified to manage XenServer environments and heterogeneous environments that include XenServer.

The VirtualIQ suite of products is designed to support virtual server room operations through the three stages of virtualization – the design, deploy, and deliver stages – helping users make correct decisions for virtualization optimization along the way. For instance, in the design stage, the issues are physical-to-virtual (P-V) migration, calculating ROI, platform selection, how to decide which applications get consolidated on which hosts, resource optimization, and so forth. In the deploy stage, the challenges are managing available server capacity, controlling virtual server sprawl, performance optimization, and resource dependency, just to name a few. In the deliver stage, the hurdles are managing heterogeneous virtual environments, service-delivery optimization, policy-based actions like spinning up a new virtual machine, and more.

The suite of products allows users to compare how various virtualization platforms, such as XenServer, perform running different applications, such as XenApp and XenDesktop, and then provides visibility and policy-based control in managing the XenServer-based environment. The VirtualIQ suite of products supports multiple virtualization platforms – not just Citrix – for an apples-to-apples comparison and provides all essential decision-making data in a single, integrated web console that is simple to install and use. ToutVirtual is partnering with Citrix under the Citrix Ready Program, which showcases and recommends solutions demonstrating compatibility with Citrix Application Delivery Infrastructure.

VirtualIQ products provide Citrix customers key benefits including:

“We are pleased to work with ToutVirtual in defining best practices for managing virtualization processes,” said Simon Crosby, chief technology officer for the Virtualization and Management Division, Citrix Systems. “ToutVirtual’s VirtualIQ products enable our XenServer customers to better assess and optimize applications for their environments, assisting them in making the right planning decisions.”

“ToutVirtual is pleased to strengthen its relationship with Citrix,” said Vipul Pabari, chief technology officer for ToutVirtual. “VirtualIQ with XenServer enables Citrix channel partners and users to optimize each stage of virtualization. Best practices highlighting automation are a necessity in managing virtual infrastructures, which is why we developed policy-based ‘Crawl-Walk-Run’ automation methodology. Whether the user is just getting started in the design phase, or further along in deployment, or delivering advanced services, the VirtualIQ suite of products is simple and cost effective.”

Pricing, Availability and Platforms Supported

More information about the VirtualIQ suite of products is available at:
http://www.toutvirtual.com/downloads/downloads.php

Channel partners interested in joining the ToutVirtual reseller program should go to
http://www.toutvirtual.com/company/reseller.php?ctype=new

February 23rd, 2009

ToutVirtual Announces VirtualIQ Pro 3

Single, platform-agnostic console provides integrated policy engine for all virtualization platforms
Carlsbad, California – January 13, 2009 – ToutVirtual, an emerging leader in virtualization intelligence, optimization and performance-management software for virtual computing infrastructures, today announced the availability of VirtualIQ Pro 3 (VirtualIQ Pro). VirtualIQ Pro is a management and automation program designed to support customers in every stage of virtualization deployment. The wide range of features helps users from the planning phase to established virtual server rooms running hundreds of virtual machines (VMs). VirtualIQ Pro is available as a software solution, and as a quick-start for new users, ToutVirtual is offering a fully featured, free version of VirtualIQ Pro which supports up to 5 CPU sockets or 25 virtual machines. To download a free copy of ToutVirtual’s VirtualIQ Pro, please visit http://www.toutvirtual.com/downloads/downloads.php

VirtualIQ Pro, with a new user interface, offers the tools to manage the unique challenges that IT faces at each stage. The integrated functionality of VirtualIQ Pro provides server virtualization assessment, asset management, performance management, capacity management, and reporting in one product that is simple to install, deploy and use. VirtualIQ Pro offers a unique value proposition since the same console is capable of supporting Citrix, Microsoft, Oracle, VMware, and open-source Xen virtualization platforms.

“ToutVirtual’s VirtualIQ Pro console addresses a very important need in the support of heterogeneous virtualization environments,” says Andi Mann, research director at Enterprise Management Associates. “In an easy-to-use, functional and affordable interface, VirtualIQ Pro helps users get greater value from virtualization deployments by integrating, in a single console, several core disciplines like capacity planning, load balancing and automated VM management.”

“ToutVirtual’s products have helped us manage our physical and virtual IT infrastructure,” says Rakesh Shah, vice president of IT operations for eGain, Inc., the leading provider of multi-channel customer service and knowledge management software. “We have a heterogeneous IT environment with different virtual platforms and various stages of virtualization. ToutVirtual’s VirtualIQ Pro has provided us with a unified console to control server capacity, manage incidents, create and execute policies and ultimately optimize our virtualization environment.”
 
Users face different challenges in each phase of their virtualization deployments – design, deploy, and deliver. For instance, in the design stage, the issues are physical-to-virtual (P-V) migration, calculating ROI, platform selection, how to decide which applications get consolidated on which hosts, resource optimization, and so forth. In the deploy stage, the challenges are managing available server capacity, controlling virtual server sprawl, performance optimization, and resource dependency just to name a few. In the deliver stage, the hurdles are managing heterogeneous virtual environments, service-delivery optimization, policy-based actions like spinning a new virtual machine, and more. VirtualIQ Pro provides a single, integrated console to optimize each of these scenarios.

“The proliferation of virtualization platforms and the ever growing number of virtual applications creates a strong demand for a single, platform- and hypervisor-agnostic management and automation console that can address all the customer’s needs – whether the customer is in the planning stage or already in the deployment stage,” says Vipul Pabari, chief technology officer of ToutVirtual. “With VirtualIQ Pro we are responding to this demand with an affordable, flexible, complete and easy-to-use management and automation tool.”

VirtualIQ Pro 3 comes with the following new features:

  • Physical and virtual asset and inventory discovery – provides audit trail of additions, moves and removal of assets. Supports both non-virtualized and virtualized physical server assets, virtual machine assets, and resource pools.
  • Single, integrated console – simplifies IT operations to manage physical servers and virtual machines. Supports multiple management functions such as performance management, capacity planning, capacity maximization, capacity forecasting, and visibility reports.
  • Physical to virtual migration (P-to-V) analyzer – allows IT to discover server virtualization candidates and perform multiple “what-if” workload placement analyses for virtual machine (VM) density planning. Optional green IT analysis and virtualization return-on-investment (ROI) forecasting are available.
  • Virtualization analytics – provides IT quick incident management to identify where resource contention or bottlenecks are within the dynamic virtual infrastructure. Supports Host-Host, Host-VM, Host-HostGroup, Inter-VM and Intra-VM resource dependency analysis.
  • Hypervisor-agnostic policy engine for virtualization automation – allows IT to simplify and lower the total cost of ownership of their virtual infrastructure. Supports “crawl-walk-run” automation modes, event-triggered automation, and time-triggered (scheduled) automation. New virtualization actions include move VM (cold migration) and live VM migration.
  • Multiple hypervisor (Type-I and Type-II) support – heterogeneous coverage of the market’s top virtualization platforms.
  • Secure virtualization with role-based access – allows IT departments to define different user roles within the web portal and to customize rights. Supports multiple roles, multiple authentication models: stand-alone, Active Directory and OpenLDAP.

Pricing, Availability and Platforms Supported

VirtualIQ Pro 3 is available now. The VirtualIQ Pro software supports an unlimited number of CPU sockets and virtual machines. Several pricing levels for VirtualIQ Pro are available based on CPU socket and VM support. As a quick-start or test drive, VirtualIQ Pro for up to 5 CPU sockets or 25 virtual machines is available for free download. Pricing starts at $199 per CPU socket per year. Additional volume pricing is available by contacting ToutVirtual directly. To download a free copy of ToutVirtual’s VirtualIQ Pro, please visit http://www.toutvirtual.com/downloads/downloads.php

VirtualIQ Pro is hypervisor agnostic and can be installed on either Windows or Linux OS and supports the following virtual platforms:

  • VMware ESX, VMware ESXi, VMware Server on Windows, VMware Server on Linux, VMware GSX Server on Windows, VMware GSX Server on Linux
  • Citrix XenServer
  • Microsoft Windows Server 2008 Hyper-V, Microsoft Virtual Server 2005 R2
  • Xen on Novell SUSE Enterprise 10
  • Oracle VM

January 16th, 2009

Virtual Instance Performance Revisited

Virtualization, Fine, Well Sort Of? – Chapter 08

This is a revisited article, not because of a correction or change of view, but to advance a topic that I have always intended to revisit and never seemed to have the time for, until now. A loyal reader of this blog reminded me of this fact recently, so I am honor-bound to resolve the gap, or lack of continued discussion, on this topic. Oh, I am speaking of virtual instance performance of course, as the title notes. Unfortunately, virtual instance performance is a complex topic that gets into the swampy weeds, full of tangles and hidden snags, faster than water and dirt make mud.

In part I of this topic, I discussed the context of performance, starting with Peer-to-Peer or Inter Virtual Instance Performance, which is what the host infrastructure reports. I will not rehash that here, but it is important to note that only the host infrastructure can accurately report virtual instance performance. Also in part I, I referenced Host to Instance Performance, Host to Host Performance, and Host to Cluster Performance. I will summarize each concept briefly; for more detail, refer to part I. Host to Instance Performance is the overhead of the host, or the host's impact on performance: what do your hardware and hypervisor cost in performance terms? Host to Host Performance is which host executes which virtual instances best, all things beyond individual instance deltas being equal. Host to Cluster Performance is one step short of cloud or grid computing modeling, focusing on which host in a given cluster is the most efficient for a known set of virtual instances. This matters when you consider data center globalization, where hosts should be consistent, and so should clusters of hosts, across different datacenters, for example.

Now, if you are tracking all these this-versus-that models above, you will realize that one model is missing. Which is it? Give up? Cluster to Cluster Performance! There is a good reason for this; I neglected it in part I, my bad. As lifecycle and management tools have improved over the last year or so, this has become a viable and significant performance model, especially when you have heterogeneous hypervisor-based environments. Consider VMware versus Hyper-V, or Xen versus VMware, or Xen versus Hyper-V. Obviously, if you offer class-of-service tiers in your virtualization, you need to be able to compare different virtualization infrastructures in real time, with little or no explicit normalization. I, for one, hate normalization; it is often abused and biased toward a specific or narrow criteria set, so it devalues the analysis and results. But I digress. Cluster to Cluster Performance is beyond the scope of this specific article and will be discussed in the future. Did someone say Virtual Instance Performance Part III?

But the title is Virtual Instance Performance Revisited, and so all performance evaluation starts and ends with the virtual instance. This is the cornerstone of virtualization, be it application-instance, virtualization-container, or operating-system-isolation based. The vast majority of tools available for virtualization performance evaluation focus on the virtual instance, of course, since the goal is always to have the fastest instances possible given the constraints of the associated infrastructure. That last comment begs the question: what constraints? These are discussed extensively by virtualization gurus over and over: processor context switching or processor cycle loading, memory IO, disk IO, and network IO. It is quite common for hardware vendors to focus on only one or two of these constraints and publish misleading or flat-out inaccurate statistics declaring they have the best or fastest virtual instances in the known universe, only on their respective hardware of course. Bah humbug! They even normalize their results against their competitors to prove the point that they have the best hardware. Bah humbug, again!

Unless you evaluate virtual instances under severe load for all four (4) constraints, and inclusive of all of them, you are not doing your clients or yourself right. A classic garbage-in, gospel-out (GIGO) scenario if there ever was one. Virtualization abuses hardware; it always does, and this is by design. After all, virtualization is attempting to fully utilize resources that are often unused or wasted, no? So the single most important issue with virtual instance performance evaluation is the selection of the tools, note I said tools, plural, to do the evaluation. Ah ha! Bet you did not see, or read, that one coming, now did you? VMware VMmark, vCompute, IOMeter, etc. all have their weak points; you must understand these limits or issues before you design your evaluation criteria and methodology. Consider this: if your virtual instance performance testing only looks at processor loading and memory IO, are your clients not going to be unhappy when their network IO and disk IO results are horrible? Did you analyze your environment right? Did you evaluate your proposed environment right? If the virtual instance performance evaluation is skewed, then your entire environment performance evaluation at Host or even Cluster scope will be horrible.
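
Since I just scolded everyone about covering all four constraints at once, below is a minimal sketch of a guest-side load harness that stresses processor, memory, disk, and network simultaneously. It is an illustration only, not a replacement for purpose-built tools like IOMeter or VMmark; the durations, block sizes, and the loopback UDP target are all arbitrary assumptions.

    # Minimal four-constraint load harness: CPU, memory IO, disk IO, network IO.
    # Sizes, duration, and the UDP target are arbitrary assumptions.
    import os
    import socket
    import time
    from multiprocessing import Process

    DURATION = 60                     # seconds per run (assumption)
    DISK_FILE = "stress.tmp"
    NET_TARGET = ("127.0.0.1", 9)     # replace with a real traffic sink for network IO

    def stress_cpu(stop_at):
        x = 0
        while time.time() < stop_at:
            x += sum(i * i for i in range(10_000))      # busy arithmetic loop

    def stress_memory(stop_at):
        while time.time() < stop_at:
            block = bytearray(64 * 1024 * 1024)          # allocate 64 MB
            block[::4096] = b"x" * (len(block) // 4096)  # touch every page

    def stress_disk(stop_at):
        chunk = os.urandom(4 * 1024 * 1024)              # 4 MB writes, flushed to disk
        while time.time() < stop_at:
            with open(DISK_FILE, "wb") as f:
                for _ in range(16):
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())
        os.remove(DISK_FILE)

    def stress_network(stop_at):
        payload = b"x" * 60_000
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while time.time() < stop_at:
            sock.sendto(payload, NET_TARGET)             # fire-and-forget UDP stream

    if __name__ == "__main__":
        stop_at = time.time() + DURATION
        workers = [Process(target=fn, args=(stop_at,))
                   for fn in (stress_cpu, stress_memory, stress_disk, stress_network)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()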

Now it is time to get into the weeds, and get mud in between the toes. Now that we know that we must test all constraints explicitly and inclusively, and we must test at the virtual instance scope before all else, what do we do? The virtualization gurus will argue over this, but below is what works for me.

  1. Establish a control, a performance-history baseline. If you are testing virtual instances on a new hypervisor or a new hardware vendor, run the exact same test on an environment you already understand. If you have HP and are testing Dell, don’t normalize your results; just make sure you understand that HP and Dell are different, and make sound inferences based on the raw results. If you can test on 3 or more hardware vendors at the same time, or have historical data gathered with the same tools and methods, you don’t need to normalize the data. Normalization is for management and others who do not know how to analyze resultant data.
  2. Processor and memory differences, including changes in cache speed and buffer sizes, are often a shifted-scale comparison, so normalization is not needed. This is also true of power-consumption curves. This is rational and logical, since network and disk subsystems should remain consistent for a longer period; by definition, the number of factors to be compared can be reduced if the subsystems remain consistent, including, of all things, the PCI bus architecture per host. Production performance data always trumps lab data. So if you have HP and Dell in production and are evaluating IBM, use the production data as the baseline or control, then test HP and Dell with the newer tools or methods, or processors and memory, etc., and then, and only then, test IBM. Bingo! No normalization is required. I can just hear the slick, stylized marketing types for all the various vendors crying over their iced mocha lattes when they find out I always reject normalization-based evaluations by default.
  3. Always run the same test, in the same environment, at the same time, with the same characteristics. This is just basic common sense. However, don’t be surprised when you see something that does not make sense. Iterations are key to the entire evaluation effort. Remember that basic statistical analysis requires a sample size of 30 or more before standard deviation and variance figures mean much (a minimal sketch of this kind of raw comparison appears just after this list). Every change you make changes the experiment, and performance evaluation is an experiment. Think scientific method at all times when doing any performance evaluation, be it in the lab or otherwise.
  4. Make sure you understand where and when you can introduce error into the results. The only way to do this is through peer review; getting more eyes on the proposed test plan is the significant objective. Everyone sees the same process from a different perspective, whereas tunnel vision is evaluative death. Sometimes eating crow at the beginning is better than getting heartburn while coughing up feathers at the end of an evaluation effort.
  5. Control expectations. Data often goes around the world faster than the executive summary. Expect that someone, somewhere, will take the evaluation tools and methods, as well as the results, out of context. Results will be challenged; be prepared for it. Don’t defend results, only explain how they are generated and analyzed. Vendors hate this, and often forget it when they sponsor or quote so-called independent analysis, presenting the resultant explanations as the authoritative final qualitative statement when the raw data objectively discounts them or obviously points to other conclusions. Normalization often hides the true results.
  6. The developer of the given virtualization environment is the start of the process, not the end. Do not rely on the developer’s tool set, nor on what a given vendor demands as the only acceptable tool for analysis. Of course the vendor has tuned the given tool or methodology to illustrate the strengths of the platform in question. Would it not be a wonderful world if HP performance tools worked on Dell and IBM, and Dell tools likewise worked on IBM and HP, etc., etc.? It would make for some interesting evaluations, no? Or if fabric tools worked on FCoE infrastructure, and iSCSI tools worked on FC infrastructure? Sounds insane? Not so. Generic tool sets exist, independent tools exist; use them. Even if every vendor in the world has used VMmark, VMmark means nothing to Hyper-V.
  7. Repeat, repeat, and repeat, changing only one thing at a time; for example, only change the loading of one constraint at a time, be it processor loading, memory IO, disk IO, or network IO. Use the same dataset or streamed sequence for each test. Never change the dataset or streamed sequence between iterative tests for a given factor. Complete an entire set of tests before mucking with variables beyond the planned test set. This could be considered a repeat of the point above about running the evaluation in a consistent manner, but it is so important that if it is a repeat, so be it.
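
And since more than one of the points above leans on raw numbers, here is the minimal sketch promised in point 3: run the same test at least 30 times per platform and report raw means and standard deviations side by side, with no normalization. The vendor labels and throughput numbers are placeholders.

    # Raw, un-normalized comparison over 30+ iterations per platform.
    # Vendor labels and throughput values are placeholders.
    from statistics import mean, stdev

    MIN_SAMPLES = 30   # the rule-of-thumb sample size mentioned in point 3

    def summarize(label, samples):
        if len(samples) < MIN_SAMPLES:
            raise ValueError(f"{label}: need at least {MIN_SAMPLES} runs, got {len(samples)}")
        return {"platform": label,
                "runs": len(samples),
                "mean": round(mean(samples), 2),
                "stdev": round(stdev(samples), 2)}

    # Throughput (e.g. transactions/sec) from the same test plan on two platforms.
    results = {
        "vendor_a_baseline":  [412 + (i % 7) for i in range(30)],
        "vendor_b_candidate": [397 + (i % 11) for i in range(30)],
    }

    for label, samples in results.items():
        print(summarize(label, samples))
    # Report both rows as-is and reason from the raw deltas; do not rescale
    # ("normalize") one platform's numbers against the other's.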

Well, at this point, I am sure someone is yelling… but he has not told us anything useful yet? What the heck?! Not so. It is true that I have not spelled out an explicit evaluation methodology, as a do-this, do-that, then-do-this scenario. To do that, cough, would be to create a bias that should, no, must be avoided. But to be fair, I will summarize things a bit and recommend a best-practices approach.

  • Analyze the virtualization environment, focusing on the virtual instances first. Look at processor loading and memory, network, and disk IO loading; create and execute tests that stress all constraints as applicable to your expected needs, and well beyond them. If the majority of your virtual instances are encoding unique video data, expect lots of disk IO; if your virtual instances are web servers, expect lots of network IO, etc. Be smart in your performance-evaluation design.
  • Remember, virtual instances are the cornerstone of all evaluation. Hosts and Clusters have their own performance characteristics, but those are impacted by, or resultants of, the virtual instances. Dynamic resource sharing, high availability, etc. are wonderful features, but they mean nothing if individual and grouped virtual instance performance is not understood. The goal is to have most of the virtual instances perform well, most of the time, nothing more. The number of instances, the number of hosts, the number of clusters, even the number of virtualized datacenters, if it comes to that scale or scope of evaluation, will be obvious and straightforward if the methodology and tools used are sound according to the virtual instance modeling.
  • Performance evaluation is a living, breathing animal and should be viewed as dynamic and experience-based, no pun intended. Nothing in virtualization is static, so allow and expect the methods and tools to be flexible and adaptive to the effort at hand. This is not to say that change is good for the sake of change; only change tools and methods when it makes sense to do so. Never change technique in the middle of an evaluation effort. To do so is statistical suicide for the resulting evaluation.

December 10th, 2008

ToutVirtual Announces Support for Microsoft Windows Server 2008 Hyper-V Virtualization

Carlsbad, Calif. – October 27, 2008 – ToutVirtual, an emerging leader in optimization and performance-management software for virtual computing infrastructures, today announced that its VirtualIQ suite of products now also supports Microsoft Windows Server 2008 Hyper-V virtualization.

The VirtualIQ suite of products is designed to support virtual server room operations through the three stages of virtualization – the design, deploy, and deliver stages – helping users make correct decisions for virtualization optimization along the way. The suite of products allows users to compare how various virtualization platforms, such as Hyper-V, perform running different applications and then provides visibility and policy-based control in managing the Hyper-V-based environment. The VirtualIQ suite of products supports multiple virtualization platforms for an apples-to-apples comparison and provides all essential decision-making data in a single, integrated web console that is simple to install and use.

ToutVirtual is partnering with Microsoft as an Independent Software Vendor (ISV). Users of Microsoft Windows Server 2008 Hyper-V can get more information about ToutVirtual VirtualIQ at: http://www.microsoft.com/virtualization/partner-profile.mspx?id=81

“We are pleased that ToutVirtual is delivering support for Windows Server 2008 Hyper-V,” said Jim Schwartz, Director for Virtualization Solutions at Microsoft Corp. “ToutVirtual’s VirtualIQ products help our customers running Hyper-V assess and optimize applications for their environments, assisting them in making the right planning decisions.”

“ToutVirtual is excited to strengthen its relationship with Microsoft with the support of Windows Server 2008 Hyper-V,” said Vipul Pabari, chief technology officer for ToutVirtual. “Hyper-V users can use our products to compare the performance of applications on multiple virtualization platforms. Whether the user is just getting started in the design phase or further along in deployment or delivering advanced services, VirtualIQ suite of products is simple and cost effective.”

Pricing, Availability and Platforms Supported
More information about the VirtualIQ suite of products is available at:
http://www.toutvirtual.com/downloads/downloads.php

About ToutVirtual
ToutVirtual, Inc. is an emerging leader in virtualization system optimization software to manage and automate virtual computing processes and ease the transition from design to deployment. VirtualIQ, the company’s flagship product suite, allows organizations to obtain a holistic view and control their virtual infrastructure including servers, applications, storage, and clients independently of the underlying virtual computing platform. Unlike other companies whose products are vendor specific, platform specific, or network tier specific, ToutVirtual software operates across multiple platforms and is multi-tier to prolong product life, protect IT investments, and maximize ROI. Additional information about the company and its products is available at http://www.toutvirtual.com

ToutVirtual and VirtualIQ are registered trademarks or trademarks of ToutVirtual, Inc. All other marks and names mentioned herein may be trademarks of their respective companies.
###

October 27th, 2008

The Risk of Virtualization

Virtualization, Fine, Well Sort Of? – Chapter 05

Let us consider the following situation… You walk into a conference room; before you is a group of your key clients, lines of business, whatever is applicable to your situation, and you ask the following questions:

  • What percentage of your technology infrastructure is virtualized?
  • Are you happy with the performance of virtualization?
  • Do you consider virtualization a higher risk? If so, Why?
  • Do you consider virtualization a lower risk? If so, Why?
  • What is the true risk of this percentage of virtualization?
  • Does the phrase “All Eggs in One Basket” mean anything to you?
  • Is the risk of virtualization worth the threat of virtualization?

What do you think the answers will be? Are you going to be happy with the answers? Are your clients going to be able to answer the questions? If they cannot, is it your fault or theirs?

What percentage of your technology infrastructure is virtualized? This is a straightforward question, or is it? Does your firm or infrastructure provider use virtualized storage? Does your solution use hypervisor or application virtualization? Is your network virtualized? VLANs, QoS, etc. What? What the heck is that? Hypervisor and/or application virtualization, which is safer? Many clients have no real understanding that every information technology resource they contract for is virtualized in some way. It may be that what you think are LANs are VLANs. It may be that what you think is physical infrastructure mostly is, but not completely; what if your network or storage provider has their entire command-center tool set on virtualized instances, and they are suffering from the same issues you are? You just don’t know what is virtualized below what you see as infrastructure.

Are you happy with the performance of virtualization? This is also a straightforward question, maybe the only straightforward question in this entire discussion, or in virtualization in general. From a quantitative perspective, provided you have rather extensive analysis methods, you can give your clients hard facts, from P2V, V2I, I2I, I2V, I2P, V2P, etc. But from a qualitative perspective, the client is going to believe what they believe. Of course, from the client perspective, you get what you pay for, so if you can live with the performance virtualization provides versus the cost of not using virtualization, this is relatively straightforward. Did or does your process for virtualization candidacy force your project managers and design personnel to do proof-of-concept validation of virtualization? Or do you just trust the application vendors?

What is the true risk of that percentage of virtualization? Higher risk? Lower risk? We are not talking labs or non-critical environments, but core production, the mission-critical, five-nines realm of uptime. No, not disaster recovery on virtualization, but true production, customer-visible, where if it is offline, you lose real money. Do your clients really understand this? Or did you just, ah, sort of, gloss over this point? This is not an argument for or against virtualization, because depending on your analysis, design, and implementation, this true risk could or should be less than traditional hardware, no? Transparent migration, i.e. VMotion for VMware, and/or standby hosts, or just available capacity on existing hosts? Not to mention network redundancy methods, 3DNS, BIG-IP, etc., and storage redundancy infrastructure or methods? All of that argues the point that virtualization should carry lower risk than traditional hardware. However, if you cut corners, run non-shared storage, or did not make the commitment to redundant network and storage fabric implementations? Then the risk of virtualization is higher than traditional hardware for sure, and if you do not believe this, keep smoking what you are smoking. It is rather ironic that many clients do not really understand that they have many eggs in one basket on every virtual host server. The virtualization industry knows this; HA, DRS, and SRM from VMware, and similar technologies or methods under development by every major virtualization solution provider, just scream this point, no? Why was the biggest “Ah!” at VMworld 2007 transparent host redundancy? Microsoft learned this lesson a bit late, but to be sure Microsoft will never make that specific mistake again; of course, I refer to the lack of transparent virtual instance migration, or a VMotion-like feature set.
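
To put some rough numbers on the eggs-in-one-basket point, below is a back-of-envelope sketch comparing expected workload downtime for standalone servers versus a consolidated host, with and without HA and shared storage. Every failure rate and recovery time in it is invented for illustration; plug in your own.

    # Back-of-envelope "eggs in one basket" arithmetic. All rates and recovery
    # times are invented for illustration only.
    N_WORKLOADS = 20

    p_server_fail = 0.02       # assumed yearly failure chance of one physical server
    server_repair_hours = 4    # assumed time to restore one failed physical server

    p_host_fail = 0.02         # assume the virtualization host fails at the same rate
    restart_no_ha = 8          # hours to rebuild/restart 20 VMs by hand (assumption)
    restart_with_ha = 0.25     # hours if HA restarts them on surviving capacity (assumption)

    physical = N_WORKLOADS * p_server_fail * server_repair_hours
    consolidated_no_ha = p_host_fail * N_WORKLOADS * restart_no_ha
    consolidated_ha = p_host_fail * N_WORKLOADS * restart_with_ha

    print("expected workload-hours down per year")
    print(f"  20 standalone servers       : {physical:.2f}")
    print(f"  1 host, corners cut, no HA  : {consolidated_no_ha:.2f}")
    print(f"  1 host, HA + shared storage : {consolidated_ha:.2f}")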

Taking all these questions and associated issues as potential impacts to your environment, is the risk of virtualization worth the threat of virtualization to your financial success? Returning to our conference room, let us up the stakes: what if it is really not a client conference but a board room meeting? Do you want to be the one who has to defend the use of virtualization when something significant goes wrong? Or worse, defend that you short-changed or rushed the virtualization implementation? It does give you pause, does it not?

Now consider the concept of risk in virtualization, taking into account that hypervisor-based operating systems are rather immature compared to traditional operating systems. Are we all just dodging the bullet? Will this risk get us in the end? The real issues for virtualization are, and continue to be, those of an immature platform. I will not exhaustively itemize them, but just think back to the early days of Windows or Linux and you will recall the things that gave you pain, no? Rush-to-market code, inexperienced code development, hardware vendors struggling to understand a software operating system that is a moving target, etc., etc. I recall these and more in every operating system ever developed.

The bottom line is that risk is avoided before your virtualization plan is invoked. Why? Because regardless of how good the virtualization platform or your design around said platform is, it is people who designed it. If you have not given your people the time and resources, and had the patience to allow superior design work to be done, your virtualization infrastructure is a house of cards. Once it is implemented, no matter how well it is supported, if the design is bad, the risk you are taking is bad. So I ask again, do your clients know this?

February 12th, 2008

Datacenter Trends: Heterogeneous Virtualization Management

An article by Denise Dubie (NetworkWorld) reports:

“The market is going to see the need for a heterogeneous virtualization management platform that we haven’t seen up until this point,” [Forrester analyst] Staten says. “It will cause a significant shake-up in the management space when start-ups pop up, and bigger players that haven’t been doing a very good job will look to acquire them.”

Read the full article here

December 20th, 2007

Long Distance Disaster Recovery?

Know What Virtualization Is, But What Is Next? – Chapter 04

To be honest, I have been avoiding this subject. Long-distance disaster recovery is one of the weak points in virtualization for a number of technical, and a few financial, reasons. For example, some of the technical reasons often cited in reference to long-distance recovery solutions:

  • Every instance-level archival/restore solution available today fails to scale well
  • They are vendor specific
  • They require extensive bandwidth
  • They offer limited vertical scale (imaging issues)
  • They scale horizontally (DASD allocation issues)
  • They are hard to manage, monitor, and control
  • They are inflexible once implemented

For example, the financial reasons often cited in reference to long-distance recovery:

  • Infrastructure that is underutilized or idle
  • Network bandwidth required over distance

As with any solution, or should I say situation, the world is changing, and vendors are running after issues, cough, dollar signs. VMworld 2007 was no exception in this regard; the hot topics for the super-sessions, meaning attendance was in the multiple hundreds per session, centered around disaster recovery, site management, and, to a lesser degree, image scaling. Unfortunately, the concept of total scale is still not quite enterprise level for my taste; for example, every single disaster recovery solution presented would have issues beyond 250 host servers, more than 1,500 virtual instances, or anything at 10 or more terabytes of actually allocated DASD per site. Wait, wait, some are saying already, it is only pretty big corporations that are doing that!  Well, in point of fact, everyone doing virtualization is looking at more, not less, virtualization, so realistic scaling for long-distance disaster recovery solutions should be looking to support 1,000 or more virtual hosts and at least 10,000 virtual instances across an enterprise, and distances should not be in the tens or even hundreds of miles, but in the thousands of miles. Yes, thousands of miles. Think of real disasters! A disaster recovery site should, or could, be 1,000 miles away, even on a different tectonic plate if possible. In other words, how far can a hurricane travel over land? Not far, but the flooding and storm damage often reach hundreds of miles inland. Archival/restore options per virtual instance just do not scale, thus storage-array-based methods will dominate the virtualization industry, and this is not a predictive comment, but a fact. All the applicable vendors know this, never mind the fact that we, as clients of virtualization, have been yelling about it for the last 2 to 3 years.
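
Those scale numbers translate directly into the bandwidth complaint above. Here is a back-of-envelope sketch, with every rate invented for illustration: take the 10 terabytes of allocated DASD per site, assume a daily change rate, and see what sustained WAN bandwidth a day-long replication window implies.

    # Back-of-envelope replication bandwidth estimate. All rates are invented
    # for illustration only.
    ALLOCATED_TB = 10           # allocated DASD per site, from the scale discussed above
    DAILY_CHANGE_RATE = 0.05    # assume 5% of allocated data changes per day
    REPLICATION_WINDOW_H = 24   # spread replication across the whole day

    changed_gb_per_day = ALLOCATED_TB * 1024 * DAILY_CHANGE_RATE
    gbit_per_s = (changed_gb_per_day * 8) / (REPLICATION_WINDOW_H * 3600)

    print(f"changed data per day : {changed_gb_per_day:.0f} GB")
    print(f"sustained bandwidth  : {gbit_per_s * 1000:.0f} Mbit/s "
          f"(before protocol overhead, bursts, or growth)")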

But I digress. For long-distance recovery, as a concept, to explode in a positive sense, regardless of where it is situated, a few things need to happen beyond the scaling discussion, including the following:

  • Standardized use and implementation of image scaling
  • Standardized use of thin-disk methods for DASD allocation
  • Standardized use of storage-array level snapping, cloning, etc.
  • Convince management that no matter what is done, bandwidth will be required, maybe even a dedicated storage area network, cough, cough

I am not going to explain the bandwidth issue; it is obvious that very-long-distance disaster recovery models need bandwidth, and of course no one wants to hear that, but it is true. Storage-array networking is needed to implement a number of emerging virtualization technologies; welcome to emerging virtualization life. Moving on… standardized use of anything is good from a practical perspective, since it eliminates one of our big issues, vendor-specific implementations. We want storage-array snapshots to be compatible just like SCSI is compatible today, right?  Or better, if we can get it. We want NetApp in one site to migrate storage-array snapshots to EMC, for example. Don’t laugh, it will happen, but the storage-array vendors don’t like the idea. This cross-vendor model would also address migrations to newer platforms and different models of implementation. Image scaling, which is not here yet to any realistic degree, is the idea that DASD that is conceptually read-only is leveraged: 90-plus percent of the operating system footprint in a virtual instance is static, so I should be able to reuse it across instances, and only the DASD that actually changes per virtual instance is isolated per instance. Dang, does this sound like a container model? It should! Combine image scaling with thin-disking methods, where the operating system thinks it has 100GB for data but the actual allocation on the storage array is only what is needed plus a growth-factor offset, and only unique data drives DASD growth. For example, if a given instance is only using 20GB, it really only has a 20-plus-GB footprint on the storage array. This reduces cost and allows better utilization of DASD resources, which should make the accounting geeks happy. Did I really just say accounting geek?
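
The thin-disk arithmetic above is simple enough to sketch. Below, each guest believes it has its full provisioned disk, while the array only commits what has actually been written plus a growth offset; the instance names and sizes are invented for illustration.

    # Thin-disk footprint arithmetic. Instance names, sizes, and the growth
    # offset are invented for illustration.
    GROWTH_OFFSET_GB = 5       # assumed per-instance growth cushion

    instances = {              # name: (provisioned GB, actually written GB)
        "web-01": (100, 18),
        "web-02": (100, 22),
        "db-01":  (250, 140),
        "app-01": (100, 35),
    }

    provisioned = sum(p for p, _ in instances.values())
    committed = sum(used + GROWTH_OFFSET_GB for _, used in instances.values())

    print(f"guest-visible capacity    : {provisioned} GB")
    print(f"actual array footprint    : {committed} GB")
    print(f"DASD saved by thin-disking: {provisioned - committed} GB "
          f"({100 * (provisioned - committed) / provisioned:.0f}%)")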

For those keeping score, about long-distance disaster recovery, not the geek name-calling, the last issue is monitoring, management, and control. Well, that is the real kicker: without universal standardization of storage-array models across vendors, at least all significant vendors, such a tool or methodology is lacking. Well… actually… that is changing, but it is in the embryonic stage at best. VMware has a new toy, VMware Site Recovery Manager (SRM), and other virtualization vendors will follow soon. It aims to solve this key issue that plagues our topic of discussion, the lack of monitoring, control, and management. However, since VMware SRM does not yet employ thin-disking or image scaling, the opportunity for someone else to snake this market niche away from VMware is obvious, no?

October 10th, 2007

ToutVirtual Chosen for Network World’s List of “10 Virtualization Companies to Watch”

Carlsbad, California – August 30, 2007 – ToutVirtual, an emerging leader in management software for virtual computing infrastructures, today announced it has garnered its place in Network World magazine’s list of “10 Virtualization Companies to Watch”. On the heels of its freeware launches over the past two years, the company has now firmly staked its ground in the virtualization market with its recently launched VirtualIQ Pro product.

August 30th, 2007

Virtual Instance Performance Context

Virtualization, Fine, Well Sort Of? – Chapter 03

Virtual machine performance is one of those topics that is both art and science. My grandmother the English professor is yelling at me already… not is but are both art and science. But I am making a point: virtual instance performance has a statistical, mathematical aspect, but numbers alone do not always provide the expected results. You have to know the context and judge its significance; that is the art aspect, and it is absolutely critical when you are trying to predict performance before you act. I cannot tell you the number of times I have looked at virtual instances and just knew why one was not performing well, and the only option that made sense was to move it to another host, only to find that once I moved the given virtual instance to another host, things got worse, or that, following my gut, I moved a different instance and things got better unexpectedly, in a way that mathematics just could not illustrate based on the performance metrics under my nose. Sometimes the math does not make sense; sometimes the math alone is not enough.

There are several different ways to define the context of performance. I am going to try, and I do mean try, to establish a framework for analysis, not just a bunch of rules, since any rule defined here, someone, somewhere, will disprove in their situation, their specific situation. Also, for the sake of discussion, the terms Virtual Instance and Container are interchangeable unless otherwise noted. So the performance relationships within a virtualization context are:

  • Intra Virtual Instance Performance
  • Peer-to-Peer or Inter Virtual Instance Performance
  • Host to Instance Performance
  • Host to Host Performance
  • Host to Cluster Performance

Intra Virtual Instance Performance – This is what you could call the false performance of a virtual instance. For example, Windows OS fans will recognize PerfMon (Windows Performance Monitor), and it does a good or great job, depending on your opinion of monitoring in Windows; it tends to be more accurate when monitoring external to its target, on traditional hardware, but opinions vary. However, any tool which relies on its own context can be completely misled when monitoring from, or in, a virtual instance. Virtual instances do not run in real time, did everyone get that? They do not run in real time, but they think they do! Thus, I cannot count the times operational teams or developers have watched PerfMon show outrageously poor performance while the virtual host indicates the individual virtual instance is not suffering at all; it is poor code, or poor application design, that is the issue. In short, do not rely on intra virtual instance performance in empirical terms, ever. Intra performance should be a qualitative analysis, not a quantitative one. Is the end-user experience good enough? That is qualitative analysis.
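
A small sketch of the point above: reconcile what the guest reports with what the host reports before believing either. The metric names and numbers below are invented for illustration, and the real counters, and how you collect them, differ per hypervisor.

    # Reconciling guest-reported and host-reported views of the same VM.
    # Metric names and numbers are invented; real counters vary by hypervisor.
    samples = [
        # vm,      guest CPU %, host-reported CPU %, host-reported ready/wait %
        ("app-01", 95.0,        38.0,                1.2),
        ("app-02", 92.0,        88.0,                14.5),
    ]

    for vm, guest_cpu, host_cpu, ready_pct in samples:
        if guest_cpu > 85 and ready_pct < 5 and host_cpu < 60:
            verdict = "guest looks busy, host says it is not starved -> suspect the code or app design"
        elif ready_pct >= 5:
            verdict = "VM is waiting on the host for CPU -> real contention, consider rebalancing"
        else:
            verdict = "no obvious conflict between the two views"
        print(f"{vm}: {verdict}")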

Peer-to-Peer or Inter Virtual Instance Performance – This is the foundation of what quantitative performance analysis is possible for virtual instances. This is performance reported by the virtual host; it is real-time, balanced by the context of the host. The virtualization industry has struggled with this model from a predictive-modeling perspective. Our kind and wonderful management, those nurturing souls, are demanding a predictive model for introducing a virtual instance to an existing environment or host, but this takes extensive brain-power, such that no virtualization platform publisher has yet cracked this nut; more like they keep scratching its surface. Automated models integrated into Dynamic Resource Sharing (DRS), or even features which indirectly use performance trending, like High Availability (HA), that attempt to load-balance or distribute instances across several hosts for best performance and stability, all use historical data. Historical data is worthless if you change the context: the very act of adding a virtual instance to an existing group of instances changes all the variables of the equation, and because there is no linear or logarithmic relationship between virtual instances, it is all a house of cards. If you think you can accurately predict the impact of adding a new virtual instance to an existing pool of instances, patent it, write a book, make millions; I wish you the best. You will have solved one of the true problems in virtualization today. Every physical-to-virtual tool publisher out there will hire you in a microsecond, because they see this problem as the search for the Holy Grail; their technology is just begging for accurate predictive modeling.

Host to Instance Performance – This context is quantitative on the host side, but qualitative on the virtual instance side, since intra performance as noted above applies. Since virtual hosts attempt to provide the very highest degree of resources possible to all virtual instances, this really is a question of overhead. Container models should always have an edge over hypervisor models in this context. The weaker the host hardware, the more significant the impact host processing overhead has on the virtual instances. This is where things like disk (DASD) performance, network latency, memory latency, and backplane or bus sub-system performance are critical. For example, if you have 4 dual-core CPUs but you purchased a cheap main-board with a slow front-side bus, or worse yet, very slow cheap memory, have you put very fast squirrels on rather slow wheels?

Host to Host Performance – This is where the industry is starting to make some waves that really do positively impact the performance of virtual instances. Automation of this type of analysis is gaining preference, taking some of the guesswork out of the situation and making host-to-host performance analysis more deterministic. This is possible because such automation is fairly simple given sufficient trending data, being a variant of the classic traveling-salesman or shortest-route/shortest-time modeling methods. If you think of resources per host as a limited resource, in time or distance, relative to virtual instances, then you can use trending data to optimize which instances are floating on which hosts. In fact, if you think about it in simple terms, what do you do when you have a misbehaving virtual instance? You often move it to another host, where you believe it will either perform better or have less impact on the other instances that may exist on the host you are migrating towards.
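
As a toy version of that trend-driven shuffling, below is a greedy first-fit-decreasing sketch: place each instance, by its trended average demand, onto the host with the most remaining headroom. It is a deliberate simplification, not VMware DRS or any vendor's actual algorithm, and the capacities and demands are invented.

    # Greedy, trend-driven placement sketch (first-fit-decreasing by headroom).
    # Not any vendor's actual algorithm; capacities and demands are invented.
    host_capacity = {"host-a": 100.0, "host-b": 100.0, "host-c": 100.0}   # CPU units
    trended_demand = {        # instance: average demand from historical trending
        "vm-01": 42.0, "vm-02": 35.0, "vm-03": 30.0,
        "vm-04": 22.0, "vm-05": 18.0, "vm-06": 12.0,
    }

    placement = {}
    headroom = dict(host_capacity)

    # Place the biggest consumers first, each onto the host with the most headroom.
    for vm, demand in sorted(trended_demand.items(), key=lambda kv: -kv[1]):
        target = max(headroom, key=headroom.get)
        if headroom[target] < demand:
            raise RuntimeError(f"no host has headroom for {vm} ({demand} units)")
        placement[vm] = target
        headroom[target] -= demand

    for vm, host in placement.items():
        print(f"{vm} -> {host}")
    print("remaining headroom:", headroom)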

Host to Cluster Performance – This context is host-to-host performance, but the actual analysis is done across groups of hosts as a logical entity. The analysis is very similar to host-to-host performance evaluation.

Of course, proper virtual instance performance analysis is dependent on context. Consider the PerfMon example above: it looks like good data, and if we were not talking about virtual instances, it would in all likelihood be accepted as valid data. However, it is not acceptable data from inside a virtual instance.

A number of scientists in the 20th century commented on this same issue of context as it applies to observation and measurement, in various ways. Let’s see…  Albert Einstein and the theory of general relativity, where the observer defines the context of what is going on: if you are watching an object fall, and you are not falling, you see something falling. However, if you are also falling at the same rate and in the same direction as the object, and you do not realize you are falling, you may simply see the object sitting next to you. Of course, the most descriptive idea applicable to virtual instances and monitoring their performance is the Heisenberg Uncertainty Principle and its frequent confusion with the Observer Effect, wherein the actual act of observing, or in our case measuring performance, impacts the performance being measured. Heisenberg understood the Observer Effect, but his uncertainty principle was really describing measurement uncertainty: some uncertainty always exists in every measurement, and it grows when inaccurate measuring methods are used or when something unknown is impacting the measurement process. For those who read this blog often, you may be reminded of something I have said in the past… Know What You Don’t Know… the very phrase implies that you have to acknowledge context for any analysis that is done. In part II, I will discuss the more mechanical side of ensuring virtual instance performance, now that we have the nasty context topic out of the way.

August 29th, 2007
