Xen? (was Re: [wellylug] Mailing List & Webserver)

michael at diaspora.gen.nz
Mon Feb 27 16:57:09 NZDT 2006


>I believe you may be mistaken here, but correct me if I'm wrong.

I've only recently started experimenting with Xen myself, so I may
well be mistaken.

The comment about "access to DMA" is just this: if (and only if) a
guest VM is given direct access to a PCI device, and that device can
perform DMA, then the guest can instruct the device to write to
arbitrary machine memory, including memory belonging to the hypervisor
or to other domains, since DMA bypasses the page protections the
hypervisor enforces on the CPU.  Hence, full system compromise.

The guest VM's kernel would have to be compromised, admittedly.
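
For concreteness, the exposure only arises once a device is actually
handed to a guest, which under Xen 3 looks something like this (a
sketch; the pciback syntax is from memory and the PCI address is
invented):

    # dom0 kernel command line: hide the device from dom0's own
    # drivers so that pciback can export it to a guest
    pciback.hide=(0000:00:0c.0)

    # in the domU's config file: grant the guest the raw device
    pci = [ '0000:00:0c.0' ]

From that point on, nothing in a pre-IOMMU PC prevents the guest from
pointing the device's DMA engine at any machine address it likes.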

See: 

    http://mail-index.netbsd.org/port-xen/2006/01/06/0004.html

for more details.  The "solution" is to virtualise access to all
devices, as you describe below.  However, that carries a performance
overhead.

I believe that "real" mainframe architectures have finer-grained
controls at the firmware and hardware level to deal with this; the
limitation is a detail of the PC architecture (which, without an
IOMMU, cannot police what a device may DMA to), rather than of Xen
itself.

>You also seem to be mistaken in saying that Xen cannot provide a
>solution for managing network and IO bandwidth. Xen does not control
>details of how devices are used directly, and depends on the Dom0 domain
>to provide these resources. It is a fairly simple matter to configure
>your 'router' VM, whether that be Dom0 or another domain, to restrict
>network bandwidth to the virtual interfaces which the other domains are
>connected to. 

So all network traffic has to be copied between VMs?  Other than
that, I can see how iptables et al. could be configured to control
network traffic.
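
For example (a sketch, assuming dom0 is doing the bridging and the
first guest's backend interface is named vif1.0; names vary with the
setup):

    # in dom0: shape traffic leaving the backend interface, i.e.
    # heading towards the guest, with a simple token bucket filter
    tc qdisc add dev vif1.0 root tbf rate 10mbit burst 10kb latency 70ms

    # traffic the guest transmits arrives on vif1.0, so that
    # direction needs an ingress policer here, or shaping on the
    # physical interface instead

    # bridged traffic can still be matched by iptables via physdev
    iptables -A FORWARD -m physdev --physdev-in vif1.0 -j ACCEPT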

>Storage is more of an issue but by serving your VMs via
>NFS you can gain as good control as for a physical network using a file
>server. 

Ummmm, yuck.
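
To be fair, wiring it up is straightforward; a sketch using the stock
Xen 3 config options (the server address and path are invented):

    # domU config: root filesystem over NFS rather than a local
    # block device
    root       = "/dev/nfs"
    nfs_server = "192.0.2.1"              # the file-serving domain
    nfs_root   = "/export/domains/web1"

My objection is that every block of guest disk I/O then takes a round
trip through the network stack, which is precisely the kind of
overhead I'd rather not add.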

>Running Solaris on Dom0 might be rather interesting as you might
>be able to harness the resource manager to gain even finer control.

Conceded.

>The VMs should not be set up to depend on a physical disk if at all
>feasible as this ties them to a physical machine, preventing migration.
>Only a single domain used as a file server (again, usually Dom0, which
>is tied to the host anyway) needs to be physically tied to the machine.
>With the Dom0 domain running as a file server and providing the network
>interface, the VMs become almost completely host-agnostic, allowing you
>to upgrade hardware and shuffle VMs between physical servers without
>more than a few tens of milliseconds downtime for each migration. For
>critical applications, a combination of Xen and redundant hardware
>allows for almost instantaneous recovery from failure.

I'm interested in I/O saturation from one VM affecting another VM,
whether on a shared disk or on a shared bus (e.g., SCSI).  Of course
VMs should not depend on a particular physical disk, but their disk
storage does need to be hosted somewhere, and there it can contend
with the storage of other VMs.

At a previous customer's site, we had great problems with an OLTP
system that shared a common disk bus (via SAN technology) with a data
warehouse: OLTP performance went through the floor whenever the data
warehouse rebuilds were running.

Migration is a great technology, to be sure, but that's orthogonal to
the point I was trying to make.
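
For what it's worth, the migration itself is a one-liner from dom0; a
sketch, assuming Xen 3's xm tool (the domain and host names are
invented):

    # move the running guest to another Xen host; xend on the
    # target must have its relocation server enabled
    xm migrate --live web1 otherhost

None of which helps when two guests are fighting over the same
spindles.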
    -- michael.



