Desktop Virtualization. It’s the hot topic these days. But is it all just hype, or will it revolutionize the workplace? There are many forms of Desktop Virtualization, ranging from Client Hosted Virtual Desktops (CHVD) to Server Hosted Virtual Desktops (SHVD) and even those two categories can be broken down into multiple subcategories, including Type I and II hypervisors, blade PCs, Hosted VDI and Shared VDI. Unless you’ve been hiding under a rock in the middle of the Sahara, you’ve heard something about Desktop Virtualization.
And while it seems that every vendor under the sun is scrambling to get a product related to Desktop Virtualization out to market, is this really a technology that's going to stick, or is it just the next fad destined to fade away when the next thing comes along? Honestly, how many people reading this have deployed some form of this technology in a large-scale rollout? And the more important question, which was recently posed by Brian Madden: 'If Virtual Desktop is so great, then why aren't YOU using it?'
I certainly think that this is a viable technology, but it's not going to be as big as the hype is making it out to be. In my case, the push for this technology came on a whim from upper management, in what seemed like an attempt to stay up on current technology and prove we could be on the cutting edge. The problem was, as we got into the project and worked through the use cases, this technology wasn't a viable option for mass deployment. Sure, we could have deployed it locally at our back office, or at a few locations, but there was no way this was going to be a mass rollout across all the branches and front offices due to current limitations.
And that's where I draw my opinion from, because the conclusions we drew from our pilot are the same conclusions that a lot of my colleagues at other companies were reaching. I think Desktop Virtualization is a niche technology that has a viable place as part of the overall desktop solution, but it certainly won't be the death of the traditional desktop machine anytime soon. And to end, here's a strong quote by Brian, who shares the same view:
The collected masses aren’t stupid. If VDI were so cheap, convenient, manageable, flexible, and wonderful then everyone would be using it. Don’t kid yourself: VDI is a niche. 10% max* at best. Mark my words.
*VDI will be 10% max. That might be 10% of all users, or 100% of users for 10% of their apps.
When talking performance in any virtualized server environment, CPU Ready is a common key indicator of how well your VM is performing. However, it's not always a cut-and-dried explanation as to why your CPU Ready times are high.
To start, for those that aren’t aware, CPU Ready is:
The amount of time a virtual machine waits in the queue in a ready-to-run state before it can be scheduled on a CPU.
This means that a VM is ready to process something, however, it has to wait because the CPU resources it requires are not available on the physical host.
Before we examine causes of high CPU Ready, let's look at what acceptable values for CPU Ready time are. Unfortunately, there is no hard-set value that says 'Yep, your CPU Ready has crossed the "it's bad" threshold.' The general rule of thumb is that your CPU Ready time should not be higher than 200ms if checked in the vCenter performance charts, or 5% if checked using the esxtop command. Again, this isn't a hard-set value. Your VM's role may tolerate less CPU Ready time for more critical functions, or may be more lenient to longer CPU Ready times. It all depends on your environment, and it's up to you as an admin to determine what works in your individual environment.
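One thing that trips people up is that vCenter reports CPU Ready as a raw millisecond "summation" per sample interval, while esxtop reports a percentage, so the two numbers aren't directly comparable. A minimal sketch of the conversion, assuming the default 20-second sample interval that vCenter's real-time charts use (the function name and defaults here are my own, purely illustrative):

```python
def cpu_ready_percent(ready_ms, interval_s=20):
    """Convert a vCenter CPU Ready summation (ms per sample) to the
    percentage figure esxtop would show. Assumes the real-time chart's
    default 20-second sample interval."""
    return (ready_ms / (interval_s * 1000)) * 100

# 200 ms of ready time in a 20 s sample works out to 1%:
print(cpu_ready_percent(200))   # 1.0
# 1000 ms in the same sample is 5%, the rough esxtop warning threshold:
print(cpu_ready_percent(1000))  # 5.0
```

Keep in mind that if you're looking at a historical chart with a longer rollup interval, you need to plug that interval in instead, or the percentage will look far worse than it really is.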
After a delay, I bring you Part 2 in the discussion of Physical vs Virtual Provisioning. In the opening post of this series, I gave a few recent examples of VM requests that had graced my screen and shocked my brain. In today's post, I want to examine some of the reasons why the requirements differ from a physical to a virtual machine.
The main difference between a physical machine and a virtual machine is the lack of hardware. By lack of hardware, I don't mean there is no hardware at all; we all know there has to be hardware somewhere. What I mean is that the machine and OS aren't aware they are virtual and don't run on their own physical server. I don't want to get too caught up in that discussion. The point of my comment is that the lack of physical hardware means a lack of physical hardware drivers. We all know how much of a pain, and resource hog, drivers can be. It's hard to quantify just how many of your CPU cycles and memory I/O operations are the work of drivers translating actions between the physical and application levels.
Part of my day-to-day job is to deploy new virtual machines for the client. These VMs are deployed with specifications given to us by the client. Lately, I've noticed an increasing trend of over-powered virtual machines being requested, and the mentality behind these requests seems to be that the application states its physical requirements, so that is what is requested of us. This trend kicked off a debate between myself and a few of my co-workers on the topic of Virtual Provisioning versus Physical Requirements.
First, let me start off by stating that my client is very new to virtualization. We are only utilizing about 95 virtual servers, 80 virtual desktops, and 13 ESX hosts, mostly in a non-production environment. We are mid-deployment of our largest, full production data center. The client is definitely eager to jump into virtualization, but still hasn't fully grasped the concepts behind virtualization.
My latest VM provisioning request was for a Windows Server 2008 x64 Enterprise Edition machine. The VM will be used as a SQL Server Reporting Services (SSRS) machine in a test/development environment. The VM will be used to assess the benefits and abilities of SSRS, and to determine if it's a viable solution to deploy into production. Given the use of this VM, I figured the required specs would run along the lines of 1 vCPU and 1-2GB of memory, right?
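The way I approach sizing requests like this is to work backwards from observed peaks rather than the vendor's physical-hardware spec sheet. A rough sketch of that heuristic, with made-up numbers and a function of my own invention (the per-core clock speed and 25% headroom factor are assumptions, not anyone's official sizing formula):

```python
import math

def rightsize(peak_cpu_mhz, peak_mem_mb, core_mhz=2400, headroom=1.25):
    """Recommend the smallest (vCPUs, memory GB) allocation that covers
    the observed peak usage plus a headroom factor. Illustrative only;
    core_mhz should match your host's actual per-core clock."""
    vcpus = max(1, math.ceil(peak_cpu_mhz * headroom / core_mhz))
    mem_gb = max(1, math.ceil(peak_mem_mb * headroom / 1024))
    return vcpus, mem_gb

# A dev reporting box peaking at 1.8 GHz and 1.2 GB fits in 1 vCPU / 2 GB:
print(rightsize(1800, 1200))  # (1, 2)
```

The nice part about starting small in a virtual environment is that being wrong is cheap: hot-add or a quick reconfigure fixes an undersized VM, while an oversized one quietly wastes host resources forever.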