Re: [RFC][PATCH v3 7/10] workqueue: add WQ_IDLEPRI

From: Tejun Heo
Date: Thu May 26 2011 - 05:38:18 EST


Hello, KAMEZAWA.

On Thu, May 26, 2011 at 02:30:24PM +0900, KAMEZAWA Hiroyuki wrote:
> When this idea came to me, I wondered which was better: maintaining
> memcg's own thread pool or adding support to workqueue for generic
> use. In general, I feel enhancing the generic one is better... so I
> wrote this one.

Sure, if it's something which can be useful for other users, it makes
sense to make it generic.

> Index: memcg_async/include/linux/workqueue.h
> ===================================================================
> --- memcg_async.orig/include/linux/workqueue.h
> +++ memcg_async/include/linux/workqueue.h
> @@ -56,7 +56,8 @@ enum {
>
> /* special cpu IDs */
> WORK_CPU_UNBOUND = NR_CPUS,
> - WORK_CPU_NONE = NR_CPUS + 1,
> + WORK_CPU_IDLEPRI = NR_CPUS + 1,
> + WORK_CPU_NONE = NR_CPUS + 2,
> WORK_CPU_LAST = WORK_CPU_NONE,

Hmmm... so, you're defining another fake CPU a la the unbound CPU. I'm
not sure whether it's really necessary to create its own worker pool
tho. The reason SCHED_IDLE is wanted here is that the work items may
consume a large amount of CPU cycles. Workqueue already has UNBOUND -
for an unbound workqueue, the workqueue code simply acts as a generic
worker pool provider, and everything other than work item dispatching
and worker management is deferred to the scheduler and the workqueue
user.
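
In other words, for an unbound workqueue the setup is just the
allocation (a minimal sketch; "memcg_async" and memcg_wq are made-up
names):

	/* backing kthreads aren't bound to any CPU and are scheduled
	 * like ordinary tasks; workqueue only dispatches work items
	 * and manages the workers */
	struct workqueue_struct *memcg_wq;

	memcg_wq = alloc_workqueue("memcg_async", WQ_UNBOUND, 0);
	if (!memcg_wq)
		return -ENOMEM;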

Is there any reason memcg can't just use an UNBOUND workqueue, set the
scheduling priority when a work item starts and restore it when it's
done? If it's gonna be using UNBOUND at all, I don't think changing
the scheduling policy would be a noticeable overhead, and I find having
separate worker pools depending on scheduling priority somewhat silly.
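
Something like the following in the work function itself would do (a
rough sketch, not actual memcg code; memcg_async_work is a made-up
name):

	static void memcg_async_work(struct work_struct *work)
	{
		/* sched_priority must be 0 for SCHED_IDLE/SCHED_NORMAL */
		struct sched_param param = { .sched_priority = 0 };

		/* drop this worker to idle priority for the duration */
		sched_setscheduler(current, SCHED_IDLE, &param);

		/* ... the CPU-heavy reclaim work goes here ... */

		/* restore the default policy before the worker goes
		 * back to the pool */
		sched_setscheduler(current, SCHED_NORMAL, &param);
	}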

We could add a mechanism to manage work item scheduler priority in
workqueue if necessary, I think. But that would be a per-workqueue
attribute which is applied during execution, not something per-gcwq.
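
For illustration, such an attribute might look like this from the
user's side (purely hypothetical - no WQ_SCHED_IDLE flag exists; the
worker would switch policy around each work item in
process_one_work()):

	/* hypothetical: workqueue applies SCHED_IDLE while executing
	 * each work item and restores the policy afterwards */
	wq = alloc_workqueue("memcg_async", WQ_UNBOUND | WQ_SCHED_IDLE, 0);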

Thanks.

--
tejun
--