[colug-432] virtualization, clouds, and infrastructure

Richard Troth rmt at casita.net
Thu Dec 16 14:27:40 EST 2010


Oh goodie ... my FAVorite topic.
It's something well worth discussing because the industry is mixed up
about managing V12N.  Vendors want to sell you tools which do *some*
things but not enough, and then also want to play the lock-in game.  At
the other extreme is OpenVirt (not a vendor, but a standard), which
covers all the bases in a vendor-neutral way but tries to boil the
ocean.

First thing:  be careful to distinguish between resource allocation
(creating and managing virtual machines, including cloning) and
software provisioning (yum, zypper, apt-get).  A lot of V12N
manglement solutions (OVirt) try to do too much and wind up not doing
any of it well.  If you have a working config database, it's a good
idea to hold information about physical machines and virtual machines
in the same place.  If you're going with an open solution (Xen or KVM),
you can wire that straight into your CMDB.  If you go with a vended
solution (VMware, Citrix), you will have to pressure them for config
transparency.  Good luck there.
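
For the open case, here's a rough sketch of what wiring it in might
look like, libvirt flavor (the guest names and the /srv/cmdb path are
made up; point it at whatever your config database actually watches):

    # dump each guest's libvirt definition next to the physical-host
    # records, so the CMDB sees both kinds of machine in one place
    for guest in web01 db01 build01; do        # example guest names
        virsh dumpxml "$guest" > /srv/cmdb/guests/"$guest".xml
    done

Same idea on straight Xen with 'xm list -l'.  The point is that the
definitions are just text and can live wherever your other config
lives.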

Second thing:  avoid partitioning virtual disks.  If the hypervisor
exposes the virtual disks as files (to the host environment) and you
DON'T partition them (in the guests), then you can mount them '-o loop'
and fix things ... even play 'chroot' if you need to.  All kinds
of good things happen when you remove the partitioning layer.  If you
need something that works like partitioning, go LVM.  It's better
anyway.  (Any virtual disk used as an LVM PV won't be as readily usable
this way, I am aware, but you should avoid partitioning PV containers
too.)  I
run LVM on the host but not on the guests.  GRUB fights me on this.
[sigh]
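
To make that concrete, here's roughly what it looks like with a raw,
unpartitioned guest image (the paths and guest name are examples):

    mkdir -p /mnt/web01
    # plain '-o loop' works because the filesystem starts at byte 0;
    # no partition table, no offset math
    mount -o loop /var/lib/libvirt/images/web01.img /mnt/web01
    chroot /mnt/web01 /bin/sh     # fix whatever needs fixing, then exit
    umount /mnt/web01

With a partitioned image you'd be doing losetup offset arithmetic
instead, which is exactly the annoyance this avoids.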

Angelo is spot-on:  cold migration is easy (on the same V12N tech).
Thankfully, Xen and KVM (and VMware) all use plain text to define a
virtual machine.  I presume VMware still does.  They used to.  (But I
haven't gotten my fingers into ESX4.)  You can copy it.  You can also
eyeball it so that you know what pieces the hypervisor is pointing to.
You do not need a costly tool for cold migration.  You just need to
properly inventory the pieces.
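
In libvirt/KVM terms it's roughly this (guest name and paths are
examples, and the disk path in the XML has to exist on the target too):

    # on the source host
    virsh shutdown web01
    virsh dumpxml web01 > web01.xml
    scp web01.xml otherhost:/tmp/
    scp /var/lib/libvirt/images/web01.img otherhost:/var/lib/libvirt/images/

    # on the target host
    virsh define /tmp/web01.xml
    virsh start web01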

I have heard others bemoan the on-again / off-again SNMP support in
VMware.  The only way they'll get it right is customer pressure.  Demand
it.  Heck, demand a decent CLI for that matter.  (Xen and KVM have
solid CLIs.  VMware never has.)
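
By "solid CLI" I mean things like this (web01 is just an example name):

    xm list                   # Xen: every guest and its state
    virsh list --all          # KVM/libvirt: running and defined guests
    virsh dominfo web01       # vcpus, memory, state for one guest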

-- R;   <><





On Wed, Dec 15, 2010 at 15:38, Scott Merrill <skippy at skippy.net> wrote:
> The discussion on Hadoop is interesting, and I do hope a presentation
> or two results. If I can help coordinate, please let me know.
>
> On a similar vein, how are you folks handling the management of
> virtualized servers in your environments? It's trivial with KVM and
> similar tools to run a couple of virtual instances on a single
> physical box. It's not so trivial to run those virtual instances in a
> highly available fashion across a cluster of physical machines.
>
> VMware makes a pretty penny with VMotion, Storage VMotion, and similar
> technology, and by providing a nice management interface to this
> issue. Alas, those management tools are Windows-only.
>
> Red Hat has a product (by way of acquisition) called Red Hat
> Enterprise Virtualization Manager (RHEV-M). This, too, is currently a
> Windows-only solution.
>
> I've looked at a lot of IaaS ("Infrastructure as a Service"), elastic
> cloud, and similar solutions in the last couple of days. None of these
> are what I want, particularly. I'm not looking to sell or chargeback
> for our infrastructure, and I'm not looking for variable ad-hoc
> resource allocation.
>
> What I want is a reliable means of keeping a number of virtual servers
> up and accessible across a number of physical boxes, such that any
> physical box can be taken offline for maintenance (for example,
> rolling reboots across physical hosts).
>
> luci and ricci are a start, but they're nowhere near as end user
> friendly as the VMware tools. This is not a big concern for me, but it
> is an issue for those with whom I work, who might be expected to
> provide on-demand support if I get hit by a bus (or am just home
> sick).
>
> Anyone else tackling these issues? Would you care to present to COLUG
> on how you're managing things?
>
> Thanks,
> Scott
> _______________________________________________
> colug-432 mailing list
> colug-432 at colug.net
> http://lists.colug.net/mailman/listinfo/colug-432
>


