Re: [PATCH net-next v4 08/11] net/mlx5e: Add support for UNREADABLE netmem page pools
From: Mina Almasry
Date: Thu Jun 12 2025 - 16:47:56 EST
On Thu, Jun 12, 2025 at 1:46 AM Dragos Tatulea <dtatulea@xxxxxxxxxx> wrote:
>
> On Wed, Jun 11, 2025 at 10:16:18PM -0700, Mina Almasry wrote:
> > On Tue, Jun 10, 2025 at 8:20 AM Mark Bloch <mbloch@xxxxxxxxxx> wrote:
> > >
> > > From: Saeed Mahameed <saeedm@xxxxxxxxxx>
> > >
> > > On netdev_rx_queue_restart, a special type of page pool may be expected.
> > >
> > > In this patch, declare support for UNREADABLE netmem iov pages in the
> > > pool params, but only when header/data split (SHAMPO) RQ mode is
> > > enabled; also set the queue index in the page pool params struct.
> > >
> > > SHAMPO mode requirement: without header split, RX needs to peek at the
> > > data, so we can't do UNREADABLE_NETMEM.
> > >
> > > The patch also enables the use of a separate page pool for headers when
> > > a memory provider is installed for the queue; otherwise the same common
> > > page pool continues to be used.
> > >
> > > Signed-off-by: Saeed Mahameed <saeedm@xxxxxxxxxx>
> > > Reviewed-by: Dragos Tatulea <dtatulea@xxxxxxxxxx>
> > > Signed-off-by: Cosmin Ratiu <cratiu@xxxxxxxxxx>
> > > Signed-off-by: Tariq Toukan <tariqt@xxxxxxxxxx>
> > > Signed-off-by: Mark Bloch <mbloch@xxxxxxxxxx>
> > > ---
> > > drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 ++++++++-
> > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > > index 5e649705e35f..a51e204bd364 100644
> > > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> > > @@ -749,7 +749,9 @@ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
> > >
> > > static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
> > > {
> > > - return false;
> > > + struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
> > > +
> > > + return !!rxq->mp_params.mp_ops;
> >
> > This is kinda assuming that all future memory providers will return
> > unreadable memory, which is not a restriction I have in mind... in
> > theory there is nothing wrong with memory providers that feed readable
> > pages. Technically the right thing to do here is to define a new
> > helper page_pool_is_readable() and have the mp report to the pp
> > whether its memory is readable or not (rough sketch of what I mean
> > further down).
> >
> The API is already there: page_pool_is_unreadable(). But it uses the
> same logic...
>
Ugh, I was evidently not paying attention when that was added. I guess
everyone thinks memory provider == unreadable memory. I think it's more
of a coincidence that the first two memory providers (devmem and
io_uring) happen to give unreadable memory. Whatever I guess; it's good
enough for now :D
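
For reference, IIRC the existing helper boils down to the same mp_ops
check the driver does above (paraphrasing from memory here, the exact
source may differ slightly):

	static inline bool page_pool_is_unreadable(struct page_pool *pool)
	{
		/* Today: memory provider installed == unreadable memory. */
		return !!pool->mp_ops;
	}
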
> However, having a pp level API is a bit limiting: as Cosmin pointed out,
> mlx5 can't use it because it needs to know in advance if this page_pool
> is for unreadable memory to correctly size the data page_pool (with or
> without headers).
>
Yeah, if we decided that mps could report whether they're readable or
not, mlx5 would do something like:

	return rxq->mp_params.mp_ops &&
	       !rxq->mp_params.mp_ops->is_readable();

(with the NULL check so queues without a provider keep using the common
pool). For now I guess assuming all mps are unreadable is fine.
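
To make the earlier suggestion concrete, here is a rough sketch of what
I have in mind. Nothing below exists today; the ->is_readable op and
the netdev_rxq_mem_is_readable() helper name are both made up:

	/* Hypothetical new optional op reported by the provider: */
	struct memory_provider_ops {
		...
		bool (*is_readable)(void);
	};

	/* Queue-level helper a driver could call at setup time, before
	 * the page pool is created (which is what mlx5 needs):
	 */
	static inline bool
	netdev_rxq_mem_is_readable(struct netdev_rx_queue *rxq)
	{
		/* No provider installed: plain pages, always readable. */
		if (!rxq->mp_params.mp_ops)
			return true;
		/* Providers that don't implement the op keep today's
		 * assumption: unreadable.
		 */
		if (!rxq->mp_params.mp_ops->is_readable)
			return false;
		return rxq->mp_params.mp_ops->is_readable();
	}

mlx5_rq_needs_separate_hd_pool() would then just be
!netdev_rxq_mem_is_readable(rxq), and the same answer could be cached
in the pp at create time to back page_pool_is_unreadable().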
--
Thanks,
Mina