RE: [Bridge] [PATCH] macvlan: add tap device backend

From: david
Date: Fri Aug 07 2009 - 16:18:49 EST


On Fri, 7 Aug 2009, Fischer, Anna wrote:


On Fri, 7 Aug 2009 12:10:07 -0700
"Paul Congdon (UC Davis)" <ptcongdon@xxxxxxxxxxx> wrote:

Responding to Daniel's questions...

I have some general questions about the intended use and benefits
of VEPA, from an IT perspective:

In which virtual machine setups and technologies do you foresee
this interface being used?

The benefit of VEPA is the coordination and unification with the
external network switch. So, in environments where you need your
feature-rich, wire-speed, external network device
(firewall/switch/IPS/content filter) to provide consistent policy
enforcement, and you want your VMs' traffic to be subject to that
enforcement, you will want their traffic directed externally. Perhaps
you have some VMs that are on a DMZ, or that are clustering an
application, or that are implementing a multi-tier application where
you would normally place a firewall between the tiers.

I do have to raise the point that Linux is perfectly capable of
keeping up without the need for an external switch. Whether you want
policy enforced externally or internally is an architecture decision
that should not be driven by misinformation about performance.

VEPA is not only about enabling faster packet processing (firewall/switch/IPS/content filter, etc.) by moving it onto the external switch.

Due to the rather low performance of software-based I/O virtualization approaches, a lot of effort has recently gone into hardware-based implementations of virtual network interfaces, such as those SR-IOV NICs provide. Without VEPA, such a NIC would have to implement sophisticated virtual switching capabilities. VEPA, however, is very simple and is therefore well suited to a hardware-based implementation. So in the future it will give you direct-I/O-like performance plus all the capabilities your adjacent switch provides.
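As a rough sketch of how this looks from userspace with iproute2 (the interface names eth0 and macvlan0 here are assumptions, not anything from the patch), VEPA is just one of the selectable forwarding modes of a macvlan device layered on a physical NIC:

```shell
# Create a macvlan device on top of the physical NIC eth0 and select
# VEPA mode: all frames from macvlan0, including those destined for
# other macvlans on the same eth0, are sent out to the adjacent
# switch rather than being switched locally in software.
ip link add link eth0 name macvlan0 type macvlan mode vepa

# Bring it up; a VM or container would then be given macvlan0
# (or, with the tap backend under discussion, a macvtap device).
ip link set macvlan0 up

# Inspect the device; the output includes the configured macvlan mode.
ip -d link show macvlan0
```

Swapping `mode vepa` for `mode bridge` would instead switch frames between local macvlans in software, which is exactly the local-versus-external policy trade-off being debated in this thread.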


The performance overhead isn't from switching the packets; it's from running the firewall/IDS/etc. software on the same system.

With VEPA, communications from one VM to another VM running on the same host are forced out the interface to the datacenter switching fabric. The overall performance of the network link will be slightly lower, but it allows other devices to be inserted into the path.
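For that hairpinned VM-to-VM path to work, the adjacent switch must be willing to forward a frame back out the same port it arrived on (so-called reflective relay). If the adjacent switch is itself a Linux bridge, this can be enabled per port with a reasonably recent iproute2 (the names eth1 and br0 here are assumptions):

```shell
# eth1 is assumed to be the bridge port facing the VEPA host.
# Enabling hairpin mode lets the bridge forward frames received on
# eth1 back out eth1, so two VMs behind that port can reach each other.
bridge link set dev eth1 hairpin on

# Show detailed per-port flags, including the hairpin setting.
bridge -d link show dev eth1
```

The same flag is also exposed via sysfs as /sys/class/net/br0/brif/eth1/hairpin_mode on kernels that support it.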

This is something I would want available if I were to start using VMs. I don't want to have to duplicate my IDS/firewalling functions within each host system as well as having them as part of the switching fabric.

David Lang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/