Well, like most serious virtualization professionals, I am taking the plunge into building a home lab. I've gathered pieces from various sources over the past few months and am now in a position to start, and I decided there's no better place than here to chronicle my journey.
Let’s start with the inventory:
- 2x HP ML350 G5 w/ 14GB RAM – ESXi 4.1 Hosts
- 2x HP DL380 G4 w/ 6GB RAM – iSCSI Shared Storage
- Cisco Catalyst 3550 48-port Switch – Loaded with Router Code
- Cisco ASA 5505
Some would say that's overboard, but I also plan on using it for infrastructure at my house and for hosting some personal websites.
Here is a list of VMs I plan to run:
- Windows 2008 R2 – vCenter 4.1
- Windows 2008 R2 – DC, DHCP, DNS
- Windows 2008 R2 – File/Media Server
- Windows 2008 R2 – SQL Server
- Windows 2008 R2 – Exchange 2010 Server
- Windows 2008 R2 – IIS/Web Server
I’m sure this list will grow as I look to master new technologies, but for now I think this is a good start. As soon as I get the lab up and running, I’ll post again and take a few pictures as well. Things are still in the design and planning phase, so if you have any advice or recommendations, please feel free to share them.
I went through and corrected some of the server models, as I had mistyped them. I also swapped out my 3500 XL for a Catalyst 3550 so I could load router code on it and handle my Layer 3 traffic on one device without melting my Linksys.
I went on a trip this past weekend with one of my good buddies, who happens to be a very knowledgeable Network Engineer. We were discussing some of the technologies his company was implementing in their environment, and got onto the topic of their new VDI rollout and how they decided to use iSCSI instead of a FC SAN for the shared storage behind their ESX backend.
Back when I got started on ESX, it seemed like all the architects preached FC SAN storage if it were at all possible. Speed, reliability and scalability were just a few of the many arguments for it. Sure it was costly, but it was the best option out there.
But these days, technology is growing at a rapid pace. You can spend your whole day browsing tech sites and blogs reading about new technologies and products being released by multiple vendors. All these new technologies create more choices, and create different approaches to designs that used to be one-sided debates.
Back to my buddy’s company. Their VDI team decided to go with iSCSI over FC SAN for their storage, and their reasoning had a lot to do with what we deal with every day. First and foremost was cost. Let’s face it, FC SAN is one of the most expensive shared storage solutions. Fibre Channel switches, HBAs, SAN devices; the costs add up, and very quickly. NAS and iSCSI have always been cheaper solutions, but in the past they were inferior to FC SAN. Even that is changing.
These days, network companies are coming out with faster and faster switches. With 10 Gbps switches now commonly available, FC SAN is no longer the fastest game in town. Fibre used to be the only way to get a reliable, speedy connection between your host and your storage; now that copper is moving into those speeds, that is no longer the case. Security, speed, and scalability are all achievable through dedicated iSCSI networks, creating new design and deployment options for companies.
As a colleague pointed out to me today, one advantage of iSCSI is that, since it is ordinary network traffic over copper, it is easier to troubleshoot and examine the traffic during any issues with sniffers and similar tools. With FC it is a lot more difficult to track down issues on the fabric between the host, the switches, and the SAN devices, since you cannot just tap into the fiber to catch the packets.
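That visibility starts with ordinary tooling. As a rough sketch (the target address in the usage note is hypothetical), even a few lines of Python can confirm a host can reach an iSCSI target on its well-known TCP port, 3260; the same session shows up in any standard sniffer as plain TCP traffic on that port.

```python
import socket

ISCSI_PORT = 3260  # IANA-registered TCP port for iSCSI targets

def iscsi_port_open(host, port=ISCSI_PORT, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        # create_connection handles DNS resolution and the TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Something like `iscsi_port_open("192.168.10.50")` (hypothetical storage address) quickly tells you whether the network path to the target is even up, before you start digging into initiator or target configuration.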
All in all, I think iSCSI deployments will become a lot more popular, and you will see a lot of companies lean that way versus FC SAN storage. This isn’t to say FC is now the wrong way to go, because it’s not; it’s a tried and proven solution for shared storage. I just think that in these tighter economic times, companies will explore cheaper yet reliable solutions, and iSCSI will be implemented a lot.
What are your thoughts on things? Are you using FC or iSCSI? Which do you prefer and why?
Citrix has released their Type-1 Hypervisor, and boy is it a mess. To start things off, a Type-1 hypervisor runs directly on the machine’s hardware, whereas a Type-2 hypervisor runs on top of an existing OS, such as VMware Workstation or Player.
Back to the product. This thing has more bugs than the rain forest. The release notes list 59 known issues. Why in the world would Citrix release a product out of beta with 59 known issues? That I’m not sure of, but I’d suspect it’s a classic case of Marketing dictating the product release, not the product dictating it. The release notes listing the Known Issues can be found HERE.
Second, this version requires Intel vPro-equipped laptops, whittling the number of compatible systems down to 23. It also only supports VMs running Windows XP and Vista in 32-bit, and Windows 7 in both 32- and 64-bit. The list of systems can be found HERE.
Another setback is the fact that major components that seem vital to XenClient’s success are still experimental features. The list includes Dynamic VM Image Mode, or the ability to use a gold image shared between multiple users. 3D Graphics Support is also experimental; HDX is a huge bonus for XenDesktop, so why would it get left out here? So is Secure Application Sharing, or the ability to stay focused in one VM and run apps out of other VMs without having to flip between them. This was an exciting feature for me, and I thought it would be a huge selling point. You can view the list of experimental features and the details about them HERE.
XenClient is available in two flavors. The first is the free version, called XenClient Express. The full version will be packaged with XenDesktop at no extra cost.
Needless to say, I think this is a big step in the wrong direction for Type-1 hypervisors. It just seems like Citrix was racing to get their product out there rather than focusing on making it the best product it could be. I will be avoiding deploying this technology from Citrix until a lot of these issues get addressed. I am still excited about XenClient and where it can go in the future; I just don’t think it’s quite ready for prime time yet. And hopefully, these issues won’t give the technology a bad rep.