As a virtualization engineer of sorts, I should probably summarize what 
I know in the hope that someone else will find it useful.

If you're seriously looking into virtualization, only look at the bare 
metal hypervisors. That means the virtualization product you're using is 
not running on top of some other operating system, but rather has its 
own. This improves efficiency and reliability, since the product 
consists of only the bare essentials. Products such as VirtualBox and 
VMware Server, which run on top of another operating system, have merit 
and are good stepping stones for virtualizing your infrastructure, but 
over time they will prove harder to manage. Microsoft's Hyper-V is a bit 
of a mix between a bare metal hypervisor and something running on top of 
another OS, as many things that Hyper-V does not need are disabled.

VirtualBox and VMware Workstation are useful for desktop users or any 
other situation where only a few virtual machines are required. Pushing 
those two products to do server VM hosting will quickly produce a 
suboptimal situation.

For hosting servers, you should be looking at VMware's ESX and ESXi, 
and Citrix XenServer. I am not going to talk about Xen and KVM, as my 
experience with those two is quite limited.

ESX is VMware's flagship hypervisor, and it has a service console based 
on RHEL 5. The service console is not meant to be used as a general 
purpose RHEL install, so you are restricted in what you can do with it. 
ESXi is the same hypervisor without the service console. ESXi also 
lacks jumbo frame support and certain other features, which allows it 
to have a smaller install and memory footprint. The service console can 
take close to 300 MB of RAM, so ESXi allows for slightly greater 
resource utilization. ESXi also comes embedded on some servers, so that 
using VirtualCenter, the ESXi host can join a cluster and start working 
right away. For large deployments, the management capabilities and web 
access provided by the service console on ESX outweigh the licensing 
costs.

I mentioned VirtualCenter earlier; it is an application that resides on 
a Windows 2003 or later machine, virtual or physical, and is backed by 
a SQL Server or Oracle database. What it provides is a central 
management point for all of your ESX and ESXi machines, and it allows 
for things like resource scheduling, power management, and virtual 
machine and storage migrations, among a few other things. It is not 
provided with ESX or ESXi, so it costs extra.

Where ESX(i) really shines is in its ability to enable high 
consolidation ratios with things like thin provisioning of storage 
(enabled by NFS or your storage array), memory page sharing (the same 
thing your OS does with applications using the same libraries, but at a 
VM level), and resource overcommitting.
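
As a toy illustration of the idea behind memory page sharing (this is 
not VMware's actual implementation, just content-based deduplication in 
miniature, and the page counts are made up):

    import hashlib

    # Model each VM's memory as a list of "pages" (byte strings here).
    # Pages with identical contents are stored once, as with page sharing.
    common = [b"guest OS page %d" % i for i in range(700)]  # same in every VM
    vms = [common + [b"vm %d private page %d" % (v, i) for i in range(300)]
           for v in range(10)]

    total = sum(len(vm) for vm in vms)
    unique = len({hashlib.sha1(p).digest() for vm in vms for p in vm})
    print("naive: %d pages, shared: %d pages (%.0f%% saved)"
          % (total, unique, 100.0 * (1 - float(unique) / total)))

With 700 identical guest OS pages per VM across 10 VMs, that prints 
"naive: 10000 pages, shared: 3700 pages (63% saved)" -- the more alike 
your guests are, the more you save.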

Resource overcommitting is really useful, as it allows you to give your 
VMs more resources in total than actually exist in the physical host. 
While initially this might seem like a silly thing, note that none of 
the other hypervisors I've mentioned support it. If we consider a 
virtual desktop implementation, we could get as many as 7 virtual 
machines per processor core (industry standard, more or less). A 
Windows XP Professional desktop requires 256 MB of RAM just to install, 
which means for a 16 core system (112 VMs) you need 28 GB of RAM to 
meet just the bare minimum. Often, you would give each virtual machine 
512 MB or even 1 GB of RAM, which means 56 GB or 112 GB of total RAM 
respectively. It becomes pretty cost prohibitive to add that much RAM 
to a server, so with overcommitting we can give every virtual machine 
1 GB of RAM without buying all of those extra DIMMs. If we think back 
to memory page sharing, and the fact that most business users' desktops 
don't actually use that much memory, resource overcommitting is a great 
feature that will let you reach higher consolidation ratios (how many 
VMs per host) than hypervisors that do not support it. While the 
example I used is about virtual desktops, it extends to virtual servers 
as well.
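
To make that arithmetic concrete, here is the back-of-the-envelope 
sizing in plain Python (the 7 VMs per core and the per-VM RAM figures 
are the assumptions from above):

    CORES = 16
    VMS_PER_CORE = 7                  # rough industry figure for desktops
    total_vms = CORES * VMS_PER_CORE  # 112 VMs on the box

    for mb_per_vm in (256, 512, 1024):
        gb_needed = total_vms * mb_per_vm / 1024.0
        print("%4d MB per VM -> %5.0f GB of physical RAM"
              % (mb_per_vm, gb_needed))

That prints 28, 56, and 112 GB -- the amount of RAM you would have to 
buy without overcommitting.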

The features ESX(i) and XenServer share are things like live VM 
migration, which means a VM running on one physical host can be moved 
to a different physical host with very limited interruption to the 
functioning of the virtual machine. For the current versions, the 
downtime for ESX is usually much less than a second, while for 
XenServer it is slightly higher. This live migration functionality 
(vMotion for VMware, XenMotion for XenServer) is also used for other 
things like resource scheduling (including power management) and fault 
tolerance. There are a lot of other cool things, but I'll stop here as 
I've already said too much with little focus.

If there are any other questions about anything virtualization-related, 
I would be happy to answer them.

greg wm wrote:
> the time has come for me to take the leap into virtualizing, partly to 
> run a mix of older/newer pgsql/php versions, partly to better isolate 
> eager clients from other established sites.  we've been presuming 
> we'll just go with rhel5.4 kvm.  all the namedropping in the vmware 
> thread makes me wonder what feature/gotcha tradeoffs exist.  i'll 
> appreciate good comparisons, pointers, stories, warnings, encouragement..
> ty,
> -g
