Re: [RFC PATCH] provide per numa cma with an initial default size

From: Robin Murphy
Date: Mon Dec 06 2021 - 10:02:10 EST


[ +Barry ]

On 2021-11-30 07:45, Jay Chen wrote:
> In the actual production environment, when we enable CMA and
> per-NUMA CMA, we find that if we do not pass a per-NUMA size on
> the command line, our performance drops by 20%.
> Through analysis, we found that the per-NUMA default size is 0,
> which causes the driver to fall back to allocating from the
> global CMA area, which hurts performance. Therefore, we think
> we need to provide a default size.

Looking back at some of the review discussions, I think it may have been intentional that per-node areas are not allocated by default, since it's the kind of thing that really wants to be tuned to the particular system and workload, and as such it seemed reasonable to expect users to provide a value on the command line if they wanted the feature. That's certainly what the Kconfig text implies.

Thanks,
Robin.

> Signed-off-by: Jay Chen <jkchen@xxxxxxxxxxxxxxxxx>
> ---
> kernel/dma/contiguous.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index 3d63d91cba5c..3bef8bf371d9 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -99,7 +99,7 @@ early_param("cma", early_cma);
>  #ifdef CONFIG_DMA_PERNUMA_CMA
>  static struct cma *dma_contiguous_pernuma_area[MAX_NUMNODES];
> -static phys_addr_t pernuma_size_bytes __initdata;
> +static phys_addr_t pernuma_size_bytes __initdata = size_bytes;
>  static int __init early_cma_pernuma(char *p)
>  {