<div dir="ltr">I have run OpenStack on OpenStack. For training we launch a Rackspace VM, install OpenStack on that VM, and launch instances inside that host. <div><br></div><div>So we have</div><div>host1>hostA>hardware</div><div><br></div><div>You would expose the inner host to exploits if you gave it a routable network. If you gave host1 a host-only network and hostA some public route, I think you would have what you are looking for. </div><div><br></div><div>Maybe better labels would be</div><div>legacyhost>modernhost>hardware</div><div><br></div><div>This thread wandered into containers, but speaking only of VMs, what is the problem with this approach? </div><div><br></div><div>Also, it was a management disaster, but I believe for a stretch I ran classes where the users' laptops ran VMs, and within those VMs we ran VMs. In that case I think we had public IPs on the nested instances. <br><div><br></div><div><div>--</div><div>Tom</div><div><br></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 21, 2016 at 3:40 PM, Jeff Frontz <span dir="ltr"><<a href="mailto:jeff.frontz@gmail.com" target="_blank">jeff.frontz@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Tue, Jun 21, 2016 at 2:41 PM, Roberto C. Sánchez <span dir="ltr"><<a href="mailto:roberto@connexer.com" target="_blank">roberto@connexer.com</a>></span> wrote:<br></span><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><span class="">On Tue, Jun 21, 2016 at 02:24:58PM -0400, Jeff Frontz wrote:<br>
><br></span><span class="">> do the<br>
> processor primitives that support virtualization allow too much access to<br>
> the physical hardware, and thus they're not exposed to the<br></span>
> hosted/top-level instance?<br>
><br></span><span class=""><br>
I did do some research and found this Xen wiki page:<br>
<br>
<a href="http://wiki.xen.org/wiki/Nested_Virtualization_in_Xen" rel="noreferrer" target="_blank">http://wiki.xen.org/wiki/Nested_Virtualization_in_Xen</a><br>
<br></span><span class="">
The highlighted warning near the bottom of the page makes it clear that even<br>
having nested virtualization enabled would be a danger to the admin of<br>
the top-level host.</span></blockquote><div><br></div><div>OK, thanks -- that's what I suspected. I'm guessing that processors would have needed to be designed with nested virtualization in mind (which, as I'm finding, is way too nichey).</div><div><br></div><div>On your other suggestion -- I've been toying with that, but haven't found any big-name (or not-so-big-name but US-based) providers that offer a true private VLAN between a client-controllable subset of hosted instances. Linode offers a VLAN (with an unroutable IP range), but the network is common to all of their clients' hosted instances at a location (which gets me back to relying on the legacy distro/kernel for its own security -- where I am now). My searching also yielded something called "vRack" offered by OVH (who doesn't seem to have a footprint in the US) and references to "Private VLAN" (again, by providers that seem to be euro-centric -- gandi, elastichosts, Rackulus -- or small -- servernorth). Are there any well-known (or personally well-regarded) providers that offer a truly "private VLAN"?</div><div><br></div><div><br></div><div>Thanks,</div><div>Jeff</div><div><br></div></div></div></div>
<br>_______________________________________________<br>
colug-432 mailing list<br>
<a href="mailto:colug-432@colug.net">colug-432@colug.net</a><br>
<a href="http://lists.colug.net/mailman/listinfo/colug-432" rel="noreferrer" target="_blank">http://lists.colug.net/mailman/listinfo/colug-432</a><br>
<br></blockquote></div><br></div>
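<div>P.S. As a concrete sketch of the host-only arrangement described above (assuming a libvirt/KVM hypervisor rather than Xen; the network name and address range here are illustrative, not from the thread), an isolated libvirt network is simply one defined with no forward element, so guests attached to it can reach the host and each other but have no route out:</div>

```xml
<!-- hostonly.xml: libvirt network definition with no <forward> element,
     so attached guests are isolated from any routable network
     (the "host1" side of host1>hostA>hardware).
     The name and addresses are illustrative assumptions. -->
<network>
  <name>hostonly</name>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.10" end="192.168.100.100"/>
    </dhcp>
  </ip>
</network>
```

<div>It would be loaded with <code>virsh net-define hostonly.xml</code> and <code>virsh net-start hostonly</code>; the outer VM (hostA) would additionally attach to a routed or NATed network for its public path.</div>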