Re: [this_cpu_xx V7 0/8] Per cpu atomics in core allocators and cleanup

From: Mathieu Desnoyers
Date: Tue Dec 15 2009 - 12:43:17 EST


* Christoph Lameter (cl@xxxxxxxxxxxxxxxxxxxx) wrote:
> Leftovers from the earlier patchset. Mostly applications of per cpu counters
> to core components.
>
> After this patchset there will be only one user of local_t left: Mathieu's
> trace ringbuffer. Does it really need these ops?
>

Besides my own ring buffer implementation in LTTng, at least Steven's
kernel/trace/ring_buffer.c (in mainline) uses this too. We would need a
way to map directly to the same resulting behavior with per-cpu
variables.

In LTTng, I use local_cmpxchg, local_read, local_add and, in some
setups, local_add_return to manage the write counter and commit
counters. These per-cpu counters are kept in per-cpu buffer management
data allocated for each data collection "channel".
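
To give an idea of how these ops are used, the reserve fast path is
essentially a cmpxchg loop on the per-cpu write offset. This is only a
simplified sketch, not the actual LTTng code; "buf" and "size" are
illustrative names:

	long old, new;

	do {
		old = local_read(&buf->offset);
		new = old + size;
	} while (local_cmpxchg(&buf->offset, old, new) != old);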

The current way I allocate this structure for all cpus is:

chan->buf = alloc_percpu(struct ltt_chanbuf);
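
and each per-cpu instance is then reached through the usual per-cpu
accessors, e.g. (the "cpu" variable here is just illustrative):

	struct ltt_chanbuf *buf = per_cpu_ptr(chan->buf, cpu);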

But note that each struct ltt_chanbuf contains a pointer to an array
holding the per-sub-buffer commit counters for the given buffer:

struct commit_counters {
	local_t cc;
	local_t cc_sb;		/* Incremented _once_ at sb switch */
	local_t events;		/* Event count */
};

struct ltt_chanbuf {
	struct ltt_chanbuf_alloc a;	/* Parent. First field. */
	/* First 32 bytes cache-hot cacheline */
	local_t offset;			/* Current offset in the buffer */
	struct commit_counters *commit_count;
					/* Commit count per sub-buffer */
	atomic_long_t consumed;		/*
					 * Current offset in the buffer,
					 * standard atomic access (shared)
					 */
	....

So I think accessing the "local_t offset" through percpu pointers should
be fine if I allocate struct ltt_chanbuf through the per-cpu API.
However, I wonder how to deal with the commit_count counters, because
there is an extra level of indirection.
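
For example, a commit count update ends up going through that pointer;
roughly (again a simplified sketch, with "buf", "sb_idx" and "len" as
illustrative names):

	/*
	 * The commit_counters array lives behind a pointer in the per-cpu
	 * structure, so a this_cpu operation on a per-cpu symbol cannot
	 * reach it directly.
	 */
	local_add(len, &buf->commit_count[sb_idx].cc);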

Thanks,

Mathieu

--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68