Re: Network filesystems and netmem
From: Mina Almasry
Date: Fri Aug 08 2025 - 13:58:13 EST
On Fri, Aug 8, 2025 at 6:16 AM David Howells <dhowells@xxxxxxxxxx> wrote:
>
> Hi Mina,
>
> Apologies for not keeping up with the stuff I proposed, but I had to go and do
> a load of bugfixing. Anyway, that gave me time to think about the netmem
> allocator and how *that* may be something network filesystems can make use of.
> I particularly like the way it can do DMA/IOMMU mapping in bulk (at least, if
> I understand it aright).
>
What are you referring to as the netmem allocator? Is it the page_pool
in net/core/page_pool.c? That one can indeed allocate pages in bulk via
alloc_pages_bulk_node, but it then just loops over them and does the
DMA mapping individually. It does let you fragment a piece of
already-DMA-mapped memory via page_pool_fragment_netmem, though; that's
probably what you're referring to.
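
To make that concrete, the driver-facing frag path looks roughly like
this (hand-waved sketch only; the header, the 2048-byte size and the
minimal error handling are just for illustration):

#include <net/page_pool/helpers.h>

/* Carve a small buffer out of a page the pool has already DMA-mapped. */
static void frag_alloc_example(struct page_pool *pool)
{
	unsigned int offset;
	struct page *page;

	/* Returns a fragment of a (possibly shared) DMA-mapped page. */
	page = page_pool_dev_alloc_frag(pool, &offset, 2048);
	if (!page)
		return;

	/* ... fill page + offset with up to 2048 bytes of payload ... */

	/* Drop our fragment reference; the underlying page goes back
	 * into the pool once every fragment has been released. */
	page_pool_put_full_page(pool, page, false);
}
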
I have had an ambition to reuse the netmem_ref infra we recently
developed to upgrade the page_pool so that it allocates a hugepage,
maps it once, and hands out shards of that chunk, but I never got
around to implementing it.
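
Purely as a sketch of that idea (none of this exists today; hp_pool and
the other names here are made up for illustration):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/mm.h>

struct hp_pool {
	struct page	*chunk;	/* one high-order compound page */
	dma_addr_t	dma;	/* single DMA mapping covering the chunk */
	unsigned int	size;	/* total bytes in the chunk */
	unsigned int	next;	/* next free offset */
};

static int hp_pool_init(struct hp_pool *pool, struct device *dev,
			unsigned int order)
{
	pool->chunk = alloc_pages(GFP_KERNEL | __GFP_COMP, order);
	if (!pool->chunk)
		return -ENOMEM;
	pool->size = PAGE_SIZE << order;
	pool->next = 0;

	/* One mapping for the whole chunk instead of one per page. */
	pool->dma = dma_map_page(dev, pool->chunk, 0, pool->size,
				 DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, pool->dma)) {
		__free_pages(pool->chunk, order);
		return -ENOMEM;
	}
	return 0;
}

/* Hand out a shard; its DMA address is just base + offset. */
static dma_addr_t hp_pool_alloc_shard(struct hp_pool *pool, unsigned int len)
{
	dma_addr_t addr;

	if (pool->next + len > pool->size)
		return DMA_MAPPING_ERROR;	/* chunk exhausted */

	addr = pool->dma + pool->next;
	pool->next += ALIGN(len, SMP_CACHE_BYTES);
	return addr;
}
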
> So what I'm thinking of is changing the network filesystems - at least the
> ones I can - from using kmalloc() to allocate memory for protocol fragments to
> using the netmem allocator. However, I think this might need to be
> parameterisable by:
>
> (1) The socket. We might want to group allocations relating to the same
> socket or destined to route through the same NIC together.
>
> (2) The destination address. Again, we might need to group by NIC. For TCP
> sockets, this likely doesn't matter as a connected TCP socket already
> knows this, but for a UDP socket, you can set that in sendmsg() (and
> indeed AF_RXRPC does just that).
>
The page_pool model groups memory by NIC (struct net_device), not by
socket or destination address. It may be feasible to extend it to be
per-socket, but I don't immediately understand what that would entail.
The page_pool uses the netdev for DMA mapping; I'm not sure what it
would use the socket or destination address for (unless it's to grab
the netdev :P).
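
For reference, creating a pool today looks roughly like this - the
device used for DMA mapping is derived from the netdev at creation
time, which is why the grouping is per-NIC rather than per-socket
(field values below are illustrative, not recommendations):

#include <net/page_pool/types.h>

static struct page_pool *create_pool_for_netdev(struct net_device *netdev)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP,	/* pool does the DMA mapping */
		.order		= 0,
		.pool_size	= 1024,
		.nid		= NUMA_NO_NODE,
		.dev		= netdev->dev.parent,	/* device handed to dma_map_*() */
		.dma_dir	= DMA_BIDIRECTIONAL,
		.netdev		= netdev,
	};

	return page_pool_create(&pp);
}
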
> (3) The lifetime. On a crude level, I would provide a hint flag that
> indicates whether it may be retained for some time (e.g. rxrpc DATA
> packets or TCP data) or whether the data is something we aren't going to
> retain (e.g. rxrpc ACK packets) as we might want to group these
> differently.
>
Today the page_pool doesn't really care how long you hold onto the mem
allocated from it. It kind of has to work that way, because the mem
goes to different sockets, and some of those sockets are used by
applications that read the memory and free it immediately, while others
may not be read for a while (or may be leaked from userspace entirely -
eek). AFAIU the page_pool lets you hold onto any mem you allocate from
it for as long as you need.
> So what I'm thinking of is creating a net core API that looks something like:
>
>     #define NETMEM_HINT_UNRETAINED 0x1
>     void *netmem_alloc(struct socket *sock, size_t len, unsigned int hints);
>     void *netmem_free(void *mem);
>
> though I'm tempted to make it:
>
>     int netmem_alloc(struct socket *sock, size_t len, unsigned int hints,
>                      struct bio_vec *bv);
>     void netmem_free(struct bio_vec *bv);
>
> to accommodate Christoph's plans for the future of bio_vec.
>
Honestly, the subject of whether to extend the page_pool or implement a
new allocator kinda comes up every once in a while.
The key issue is that the page_pool has quite strict benchmarks for
how fast it does recycling; see
tools/testing/selftests/net/bench/page_pool/. Changes that don't
introduce overhead to the fast path could be accommodated, I think. I
don't know how the maintainers are going to feel about extending its
uses even further. It took a bit of convincing to get the zerocopy
memory provider stuff in as-is :D
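
FWIW, if you wanted to prototype the bio_vec flavour on top of what
exists today, I'd imagine something roughly like this (completely
untested sketch; netmem_alloc()/netmem_free() are the names from your
mail, not existing functions, and I've taken a page_pool rather than a
socket as the handle since the pool is the per-NIC object that exists
today):

#include <linux/bvec.h>
#include <net/page_pool/helpers.h>

#define NETMEM_HINT_UNRETAINED	0x1

static int netmem_alloc(struct page_pool *pool, size_t len,
			unsigned int hints, struct bio_vec *bv)
{
	unsigned int offset;
	struct page *page;

	/* Carve @len bytes out of a page the pool has already DMA-mapped;
	 * the hint is ignored in this sketch, and @len is assumed to fit
	 * within a single page. */
	page = page_pool_dev_alloc_frag(pool, &offset, len);
	if (!page)
		return -ENOMEM;

	bvec_set_page(bv, page, len, offset);
	return 0;
}

static void netmem_free(struct page_pool *pool, struct bio_vec *bv)
{
	/* Return our fragment; the page recycles when all users are done. */
	page_pool_put_full_page(pool, bv->bv_page, false);
}
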
--
Thanks,
Mina