Re: [PATCH v7 2/2] mm: make lru_add_drain_all() selective

From: Chris Metcalf
Date: Tue Aug 13 2013 - 19:45:01 EST


On 8/13/2013 7:29 PM, Tejun Heo wrote:
> It won't nest and doing it simultaneously won't buy anything, right?
> Wouldn't it be better to protect it with a mutex and define all
> necessary resources statically (yeah, cpumask is pain in the ass and I
> think we should un-deprecate cpumask_t for static use cases)? Then,
> there'd be no allocation to worry about on the path.

Here's what lru_add_drain_all() looks like with a guarding mutex.
It's pretty much the same code complexity as the version that has to
allocate the cpumask, and the locking doesn't raise any real issues:
if we fail to get the lock, another drain is already in progress, so we
can just return immediately.

int lru_add_drain_all(void)
{
	static struct cpumask mask;
	static DEFINE_MUTEX(lock);
	int cpu, rc;

	if (!mutex_trylock(&lock))
		return 0;	/* already ongoing elsewhere */

	cpumask_clear(&mask);
	get_online_cpus();

	/*
	 * Figure out which cpus need flushing.  It's OK if we race
	 * with changes to the per-cpu lru pvecs, since it's no worse
	 * than if we flushed all cpus, since a cpu could still end
	 * up putting pages back on its pvec before we returned.
	 * And this avoids interrupting other cpus unnecessarily.
	 */
	for_each_online_cpu(cpu) {
		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
		    need_activate_page_drain(cpu))
			cpumask_set_cpu(cpu, &mask);
	}

	rc = schedule_on_cpu_mask(lru_add_drain_per_cpu, &mask);

	put_online_cpus();
	mutex_unlock(&lock);
	return rc;
}
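
For context, patch 1/2 adds schedule_on_cpu_mask(), which the code above
leans on.  Just so the semantics I'm assuming are explicit, here's a rough
sketch of that helper written purely in terms of the existing workqueue API
(schedule_work_on() plus flush_work(), same shape as schedule_on_each_cpu());
the function name and body below are illustrative only, not the actual patch:

/*
 * Illustrative sketch, not the real schedule_on_cpu_mask() from patch 1/2.
 * Queue func as work on every cpu in the mask, then wait for each of
 * those works to complete before returning.  Uses only <linux/workqueue.h>
 * and <linux/percpu.h> primitives; the caller is assumed to hold
 * get_online_cpus(), as lru_add_drain_all() does above.
 */
static int schedule_on_cpu_mask_sketch(work_func_t func,
				       const struct cpumask *mask)
{
	struct work_struct __percpu *works;
	int cpu;

	works = alloc_percpu(struct work_struct);
	if (!works)
		return -ENOMEM;

	/* Kick off the work on every cpu we flagged in the mask. */
	for_each_cpu(cpu, mask) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		schedule_work_on(cpu, work);
	}

	/* Wait for all of them, so the drain is complete on return. */
	for_each_cpu(cpu, mask)
		flush_work(per_cpu_ptr(works, cpu));

	free_percpu(works);
	return 0;
}

With semantics like that, the mutex above only serializes the queue-and-wait
step; a caller that loses the trylock race returns 0 right away rather than
piling up behind the drain already in flight.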

--
Chris Metcalf, Tilera Corp.
http://www.tilera.com
