Re: [PATCH] Percpu tag allocator

From: Andrew Morton
Date: Thu Jun 13 2013 - 15:05:05 EST

On Thu, 13 Jun 2013 11:53:18 -0700 Tejun Heo <tj@xxxxxxxxxx> wrote:

> Hello, Andrew, Kent.
> (cc'ing NFS folks for id[r|a] discussion)
> On Wed, Jun 12, 2013 at 08:03:11PM -0700, Andrew Morton wrote:
> > They all sound like pretty crappy reasons ;) If the idr/ida interface
> > is nasty then it can be wrapped to provide the same interface as the
> > percpu tag allocator.
> >
> > I could understand performance being an issue, but diligence demands
> > that we test that, or at least provide a convincing argument.
> The thing is that id[r|a] guarantee that the lowest available slot is
> allocated

That isn't the case for ida_get_new_above() - the caller gets to
control the starting index.

> and this is important because it's used to name things which
> are visible to userland - things like block device minor numbers,
> device indices and so on. That alone pretty much ensures that
> alloc/free paths can't be very scalable, which is usually fine for
> most id[r|a] use cases as long as lookup is fast. I'm doubtful that
> it's a good idea to push per-cpu tag allocation into id[r|a]. The
> use cases are quite different.

You aren't thinking right.

The worst outcome here is that idr.c remains unimproved and we merge a
new allocator which does basically the same thing.

The best outcome is that idr.c gets improved and we don't have to merge
duplicative code.

So please, let's put aside the shiny new thing for now and work out how
we can use the existing tag allocator for these applications. If we
make a genuine effort to do this and decide that it's fundamentally
hopeless then this is the time to start looking at new implementations.

(I can think of at least two ways of making ida_get_new_above() an
order of magnitude faster for this application and I'm sure you guys
can as well.)