<br><br><div class="gmail_quote">On Thu, Dec 16, 2010 at 2:27 PM, Richard Troth <span dir="ltr"><<a href="mailto:rmt@casita.net">rmt@casita.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Oh goodie ... my FAVorite topic.<br>
It's something well worth discussing because the industry is mixed up<br>
about managing V12N. Vendors want to sell you tools which do *some*<br>
things, but not enough, and then also want to play the lock-in game. On<br>
the other extreme is OpenVirt (not a vendor, but a standard), which<br>
covers all the bases in a vendor-neutral way but tries to boil the<br>
ocean.<br>
<br>
First thing: be careful to distinguish between resource allocation<br>
(creating and managing virtual machines, including cloning) and<br>
software provisioning (yum, zypper, apt-get). A lot of V12N<br>
manglement solutions (OVirt) try to do too much and wind up not doing<br>
any of it well. If you have a working config database, it's a good<br>
idea to hold information about physical machines and virtual machines<br>
in the same place. If you're going with an open solution (Xen or KVM)<br>
you can wire that in with your CMDB. If you go with a vended solution<br>
(VMware, Citrix) you will have to pressure them to get config<br>
transparency. Good luck there.<br></blockquote><div><br></div><div>On the devops toolchain mailing list, CMDBs came up a while back, and if I remember right it is pretty hard to find a good standalone solution that has an API and is open source. What are you using?</div>
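<div><br></div><div>Whatever database you end up with, feeding it the guest side of the inventory is the easy part. Here's a rough sketch, using the libvirt Python bindings, of pulling the virtual machines off a host so they can sit next to the physical boxes in the same config database; the record layout (and the idea of pushing it anywhere in particular) is made up for illustration:</div>
<pre>
# Sketch: collect guest inventory from libvirt so virtual machines can be
# recorded alongside physical hosts in whatever CMDB you settle on.
# Assumes the libvirt Python bindings; the record format is hypothetical --
# adapt it to your own config database.
import socket
import libvirt

conn = libvirt.open("qemu:///system")    # or "xen:///" on a Xen host
host = socket.gethostname()
records = []

# Running guests are listed by numeric ID ...
for dom_id in conn.listDomainsID():
    dom = conn.lookupByID(dom_id)
    state, max_mem_kib, _, vcpus, _ = dom.info()
    records.append({"host": host, "guest": dom.name(),
                    "uuid": dom.UUIDString(), "vcpus": vcpus,
                    "mem_mib": max_mem_kib // 1024, "running": True})

# ... and defined-but-stopped guests by name.
for name in conn.listDefinedDomains():
    dom = conn.lookupByName(name)
    records.append({"host": host, "guest": name,
                    "uuid": dom.UUIDString(), "running": False})

for rec in records:
    print(rec)    # replace with a push into your CMDB of choice
</pre>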
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<br>
Second thing: avoid partitioning virtual disks. If the hypervisor<br>
exposes the virtual disks as files (to the host environment) and you<br>
DON'T partition them (in the guests) then you can mount them '-o<br>
loop' and fix things ... even play 'chroot' if you need. All kinds<br>
of good things happen when you remove the partitioning layer. If you<br>
need something that works like partitioning, go LVM. It's better<br>
anyway. (Any virtual disk used as an LVM PV will not be as readily<br>
loop-mountable, I am aware, but you should avoid partitioning PV containers too.) I<br>
run LVM on the host but not on the guests. GRUB fights me on this.<br>
[sigh]<br></blockquote><div><br></div><div>I think libguestfs and guestfish solve most of this. You can even modify a file live if you are super careful.</div><div><br></div><div><a href="http://libguestfs.org/recipes.html#editgrub">http://libguestfs.org/recipes.html#editgrub</a></div>
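<div><br></div><div>For what it's worth, the same trick is scriptable through the libguestfs Python bindings (the machinery behind guestfish), and it shows off the point above about skipping the partition table: with no partitions, the filesystem sits directly on the device, so it mounts as /dev/sda instead of /dev/sda1. A rough sketch; the image path is invented and everything is opened read-only to be safe:</div>
<pre>
# Sketch: inspect a guest disk image with the libguestfs Python bindings.
# The image path is hypothetical; read-only so it is safe even if the
# guest happens to be running.
import guestfs

g = guestfs.GuestFS()
g.add_drive_ro("/var/lib/libvirt/images/guest01.img")
g.launch()

g.mount_ro("/dev/sda", "/")     # unpartitioned disk: the whole device is the FS
print(g.cat("/etc/fstab"))      # read a file straight out of the guest
print(g.ls("/boot/grub"))       # eyeball the bootloader config

g.umount_all()
g.close()
</pre>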
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<br>
Angelo is spot-on: cold migration is easy (on the same V12N tech).<br>
Thankfully, Xen and KVM (and VMware) all use plain text to define a<br>
virtual machine. I presume VMware still does. They used to. (But I<br>
haven't gotten my fingers into ESX4.) You can copy it. You can also<br>
eyeball it so that you know what pieces the hypervisor is pointing to.<br>
You do not need a costly tool for cold migration. You just need to<br>
properly inventory the pieces.<br>
<br>
I have heard others bemoan the on-again / off-again SNMP support in<br>
VMware. Only way they'll get it right is customer pressure. Demand<br>
it. Heck, demand a decent CLI for that matter. (Xen and KVM have<br>
solid CLI. VMware never has.)<br></blockquote><div><br></div><div>Agreed</div>
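<div><br></div><div>And on the cold-migration point above: with libvirt the "plain text" is the domain XML, so the whole move really is dump it, copy the disk files it references, and define it on the target. A rough sketch with the libvirt Python bindings; the host URIs and guest name are invented, and the guest should be shut down first:</div>
<pre>
# Sketch: cold migration by hand with libvirt -- dump the plain-text domain
# definition on the source, copy the disk images it points at, then define
# it on the target.  URIs and the guest name are made up for illustration.
import libvirt

GUEST = "web01"

src = libvirt.open("qemu+ssh://oldhost/system")
dst = libvirt.open("qemu+ssh://newhost/system")

dom = src.lookupByName(GUEST)
xml = dom.XMLDesc(0)       # the plain-text definition; eyeball it to see
print(xml)                 # which disk image files it points to

# ... copy the referenced disk images to the new host (scp/rsync) by hand ...

dst.defineXML(xml)         # register the guest on the target
# dom.undefine()           # then optionally drop it from the source
</pre>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">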
<br>
-- R; <><<br>
<br>
<br>
<br>
<br>
<br>
On Wed, Dec 15, 2010 at 15:38, Scott Merrill <<a href="mailto:skippy@skippy.net">skippy@skippy.net</a>> wrote:<br>
> The discussion on Hadoop is interesting, and I do hope a presentation<br>
> or two results. If I can help coordinate, please let me know.<br>
><br>
> On a similar vein, how are you folks handling the management of<br>
> virtualized servers in your environments? It's trivial with KVM and<br>
> similar tools to run a couple of virtual instances on a single<br>
> physical box. It's not so trivial to run those virtual instances in a<br>
> highly available fashion across a cluster of physical machines.<br>
><br>
> VMware makes a pretty penny with VMotion, Storage VMotion, and similar<br>
> technology, and by providing a nice management interface to this<br>
> issue. Alas, those management tools are Windows-only.<br>
><br>
> Red Hat has a product (by way of acquisition) called Red Hat<br>
> Enterprise Virtualization Manager (RHEV-M). This, too, is currently a<br>
> Windows-only solution.<br>
><br>
> I've looked at a lot of IaaS ("Infrastructure as a Service"), elastic<br>
> cloud, and similar solutions in the last couple of days. None of these<br>
> are what I want, particularly. I'm not looking to sell or chargeback<br>
> for our infrastructure, and I'm not looking for variable ad-hoc<br>
> resource allocation.<br>
><br>
> What I want is a reliable means of keeping a number of virtual servers<br>
> up and accessible across a number of physical boxes, such that any<br>
> physical box can be taken offline for maintenance (for example,<br>
> rolling reboots across physical hosts).<br>
><br>
> luci and ricci are a start, but they're nowhere near as end user<br>
> friendly as the VMware tools. This is not a big concern for me, but it<br>
> is an issue for those with whom I work, who might be expected to<br>
> provide on-demand support if I get hit by a bus (or am just home<br>
> sick).<br>
><br>
> Anyone else tackling these issues? Would you care to present to COLUG<br>
> on how you're managing things?<br>
><br>
> Thanks,<br>
> Scott<br>
<div><div></div><div class="h5">
<br>
_______________________________________________<br>
colug-432 mailing list<br>
<a href="mailto:colug-432@colug.net">colug-432@colug.net</a><br>
<a href="http://lists.colug.net/mailman/listinfo/colug-432" target="_blank">http://lists.colug.net/mailman/listinfo/colug-432</a><br>
</div></div></blockquote></div><br>