Re: [PATCH v4 0/5] cpumask: Fix invalid uniprocessor assumptions

From: Sander Vanheule
Date: Sun Jul 03 2022 - 03:50:58 EST


On Sat, 2022-07-02 at 13:38 -0700, Andrew Morton wrote:
> On Sat,  2 Jul 2022 18:08:23 +0200 Sander Vanheule <sander@xxxxxxxxxxxxx> wrote:
>
> > On uniprocessor builds, it is currently assumed that any cpumask will
> > contain the single CPU: cpu0. This assumption is used to provide
> > optimised implementations.
> >
> > The current assumption also appears to be wrong, by ignoring the fact
> > that users can provide empty cpumasks. This can result in bugs as
> > explained in [1].
>
> It's a little unkind to send people off to some link to explain the
> very core issue which this patchset addresses!  So I enhanced this
> paragraph:
>
> : The current assumption also appears to be wrong, by ignoring the fact that
> : users can provide empty cpumasks.  This can result in bugs as explained in
> : [1] - for_each_cpu() will run one iteration of the loop even when passed
> : an empty cpumask.

Makes sense to add this, sorry for the inconvenience.
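
For anyone skimming the thread, the problem boils down to the uniprocessor
shortcut ignoring the mask argument entirely. A minimal userspace sketch
(the cpumask struct and the printf are stand-ins for illustration, not the
kernel code; the macro is a simplified form of the pre-series UP definition):

	#include <stdio.h>

	/* Stand-in for the kernel's struct cpumask; always empty here. */
	struct cpumask { unsigned long bits; };

	/* Simplified pre-series UP shortcut: yields cpu 0 unconditionally;
	 * the mask argument is only evaluated to avoid "unused" warnings. */
	#define for_each_cpu(cpu, mask) \
		for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)(mask))

	int main(void)
	{
		struct cpumask empty = { 0 };
		int cpu;

		/* Runs one iteration even though no CPUs are set in the mask. */
		for_each_cpu(cpu, &empty)
			printf("iterating over cpu%d from an empty mask\n", cpu);

		return 0;
	}

On an SMP build the loop body would never execute for an empty mask, which
is the behaviour this series aligns the !SMP case with.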

Just to make sure, since I'm not familiar with the process for patches going through the mm tree,
can I still send a v5 to move the last patch forward in the series, and to include Yury's tags?

Best,
Sander

> > This series introduces some basic tests, and updates the optimisations
> > for uniprocessor builds.
> >
> > The x86 patch was written after the kernel test robot [2] ran into a
> > failed build. I have tried to list the files potentially affected by the
> > changes to cpumask.h, in an attempt to find any other cases that fail on
> > !SMP. I've gone through some of the files manually, and ran a few cross
> > builds, but nothing else popped up. I (build) checked about half of the
> > potentially affected files, but I do not have the resources to do them
> > all. I hope we can fix other issues if/when they pop up later.
> >
> > [1] https://lore.kernel.org/all/20220530082552.46113-1-sander@xxxxxxxxxxxxx/
> > [2] https://lore.kernel.org/all/202206060858.wA0FOzRy-lkp@xxxxxxxxx/
>