Re: [PATCH v7 2/2] mm: make lru_add_drain_all() selective

From: Chris Metcalf
Date: Wed Aug 14 2013 - 13:18:47 EST


On 8/14/2013 12:57 PM, Tejun Heo wrote:
> Hello, Chris.
>
> On Wed, Aug 14, 2013 at 12:03:39PM -0400, Chris Metcalf wrote:
>> Tejun, I don't know if you have a better idea for how to mark a
>> work_struct as being "not used" so we can set and test it here.
>> Is setting entry.next to NULL good? Should we offer it as an API
>> in the workqueue header?
> Maybe simply defining a static cpumask would be cleaner?

I think you're right, actually. Andrew, Tejun, how does this look?


static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);

void lru_add_drain_all(void)
{
	static DEFINE_MUTEX(lock);
	static struct cpumask has_work;
	int cpu;

	mutex_lock(&lock);
	get_online_cpus();
	cpumask_clear(&has_work);

	/* Queue drain work only on CPUs that actually have pending pages. */
	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
		    need_activate_page_drain(cpu)) {
			INIT_WORK(work, lru_add_drain_per_cpu);
			schedule_work_on(cpu, work);
			cpumask_set_cpu(cpu, &has_work);
		}
	}

	/* Wait only for the CPUs we queued work on. */
	for_each_cpu(cpu, &has_work)
		flush_work(&per_cpu(lru_add_drain_work, cpu));

	put_online_cpus();
	mutex_unlock(&lock);
}
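
For reference, the lru_add_drain_per_cpu() callback passed to INIT_WORK() above is the existing trivial helper in mm/swap.c; it essentially just drains the calling CPU's local pagevecs, roughly:

static void lru_add_drain_per_cpu(struct work_struct *dummy)
{
	/* Move this CPU's locally batched pages onto the LRU lists. */
	lru_add_drain();
}

The static cpumask is safe to reuse across calls because the mutex serializes callers, and the static per-cpu work structs mean nothing needs to be allocated on each invocation.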

--
Chris Metcalf, Tilera Corp.
http://www.tilera.com
