With the release of 4.1 Update 1, we have probably all seen the last release of ESX. That said, and with VMware being very clear about it, a lot of us have yet to begin the migration, myself included. While the idea behind ESXi is a great one, most of us have grown fond of the service console and the familiarity of ESX. I can understand why so many people have held off on making the switch. Change is hard, but it must be done.
Implementing ESXi doesn’t mean the end of CLI management, just a change in how we go about leveraging it. This is accomplished via the RCLI, or Remote Command-Line Interface. There is also the vMA, the vSphere Management Assistant, which lets you centrally manage your ESXi hosts via a CLI. Personally, though, with esxcli support included in the latest build of PowerCLI, I see limited use for it. PowerShell and PowerCLI have come a long way and make management tasks quite simple, and with a quick search on the net you can find a script for almost any task. ESXi also installs faster, can be scripted, boots much quicker than ESX, and presents a much smaller surface area for a malicious attack.
It’s obvious VMware thinks ESXi is ready for primetime in an enterprise environment, and I have to agree. I have already migrated my home lab to ESXi 4.1 and am working to familiarize myself with the nuances and differences of the new hypervisor. The question is, how much longer will, and can, you wait to make the switch yourself?
Recently, we had an issue with phantom snapshots. Basically, we had a VM with multiple VMDK files (vmname_01-000001.vmdk), and by multiple, I mean 26 snapshot VMDK files per VM hard disk. It was quite a nightmare and was taking up a lot of space on our datastore. Unfortunately, they were not showing up in Snapshot Manager, nor on our morning snapshot report, so they went unnoticed. Luckily, we caught these and were able to get the issue resolved.
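If you want to hunt for these yourself, here is a minimal pattern-based sketch. It assumes the datastore is mounted at a path you pass in (on ESX that is typically /vmfs/volumes); the file names in the example are illustrative, not the VMs from this incident:

```shell
# List snapshot delta descriptor VMDKs under a given path.
# Delta disks follow the "<disk>-NNNNNN.vmdk" naming seen above
# (e.g. vmname_01-000001.vmdk); base disks have no numeric suffix.
# Anything this prints that Snapshot Manager doesn't show is suspect.
find_snapshot_deltas() {
    find "$1" -name '*-[0-9][0-9][0-9][0-9][0-9][0-9].vmdk' | sort
}

# Example (run on an ESX host): find_snapshot_deltas /vmfs/volumes
```

Comparing that list against what Snapshot Manager reports would have flagged our 26 deltas long before they started eating the datastore.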
The first problem was that we were not able to ‘Delete the snapshot’ from Snapshot Manager, because it wasn’t there. The simple fix for that was to clone the machine during off hours. The cloning process consolidates the snapshots and leaves only one VMDK file per VM hard disk. Once the line of business confirmed the new machine was functional, I proceeded to delete the old VM, thinking my problems were over. Boy, was I wrong.
The delete ran perfectly fine, yet when I went back to confirm the files had actually been removed, a few VMDK files were still left behind. Every time I tried to delete them from the datastore, I received an error. A quick trip to the blogs and the VMware Community Forums showed I was not alone. Others had experienced this, and the fix was to restart the hostd service on the ESX host that owned the VM, or to reboot that host. Problem was, I hadn’t noted exactly which host this VM was last on.
Luckily, we have a daily health report that lists all the major tasks performed in the past 24 hours, so my delete was listed, along with which host I deleted it from. I proceeded to evacuate the host of all VMs with the help of DRS and placed the host in maintenance mode. I decided a reboot would be the simplest solution, and any time you get the chance to reboot a host, it’s never a bad thing in my book.
Post reboot, the files deleted with no issues, and all was well in the VMware world here in my data center. I hope having all this information in one spot will help future admins with this issue. Having files eat up disk space is never a good thing, and being able to resolve it quickly is a big help. If you run into this problem in the future, the steps to fix it are:
- Clone your VM to consolidate snapshots
- Note which ESX/ESXi Host your problem VM is running on
- Delete the problem VM from inventory
- Evacuate other VMs from identified ESX/ESXi Host
- Restart hostd services or Reboot Host
- Delete leftover VMDK files from datastore
- Have a Coke and a Smile
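For the restart and cleanup steps, here is a minimal sketch, assuming service-console access on ESX (or Tech Support Mode on ESXi); the agent-restart commands and the example path are illustrative, not taken from the incident above:

```shell
# Sketch of the restart/cleanup steps; paths and names are examples.
# On the affected host, restarting the management agent can stand in
# for a full reboot:
#   classic ESX:              service mgmt-vmware restart
#   ESXi (Tech Support Mode): /sbin/services.sh restart
# Once hostd releases its file locks, remove the leftovers:
remove_leftover_vmdk() {
    rm -i "$1"   # -i prompts before deleting; double-check the name
}

# Example:
# remove_leftover_vmdk /vmfs/volumes/datastore1/vmname/vmname_01-000001.vmdk
```

If the delete still errors out after the agent restart, a full host reboot (as in my case) is the fallback.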
Many of you noticed VUM (VMware Update Manager) alerting you to the fact that VMware released Update 1 for vSphere 4.1. The list of new items is pretty short and routine. It includes the following updates to ESX/ESXi:
- Support for up to 160 Logical Processors
- Additional Drivers Support
- Enablement of Intel Trusted Execution Technology (TXT) for ESXi Only
- Additional Guest OS Support
vCenter also received some new items and they include:
- Additional Guest OS Customization Support
- Additional vCenter Server Database Support
The interesting thing here isn’t that there are any amazing new features. What’s crazy is that this could potentially be the last update an ESX host ever receives. I’d suspect the next release (4.2) will be for ESXi only, as VMware has previously announced that 4.1 would be the last release to include ESX.
VMware has issued an ultimatum. Move to ESXi soon or be left behind. As anyone in tune with the VMware world knows, vSphere 4.1 was released yesterday. In the release notes, VMware stated:
VMware vSphere 4.1 and its subsequent update and patch releases are the last releases to include both ESX and ESXi hypervisor architectures. Future major releases of VMware vSphere will include only the VMware ESXi architecture.
This has been a long time coming and should not be a shock to anyone, although this is the first time VMware has given any sort of deadline. I guess this means the majority of us will be rebuilding our company labs and truly testing migrations from ESX to ESXi.
All the info on vSphere 4.1 can be found HERE.