Re: [PATCHSET] workqueue: concurrency managed workqueue, take#5

From: Jeff Garzik
Date: Mon Jun 14 2010 - 21:20:41 EST

On 06/14/2010 05:37 PM, Tejun Heo wrote:
Hello, all.

This is the fifth take of cmwq (concurrency managed workqueue)
patchset. It's on top of v2.6.35-rc3 + sched/core patches. Git tree
is available at

git:// review-cmwq

Changes from the last take[L] are...

* fscache patches are omitted for now.

* The patchset is rebased on cpu_stop + sched/core, which now includes
all the necessary scheduler patches. cpu_stop already reimplements
stop_machine so that it doesn't use RT workqueue, so this patchset
simply drops RT wq support.

* __set_cpus_allowed() was determined to be unnecessary with recent
scheduler changes. On cpu re-onlining, cmwq now kills all idle
workers and tells busy ones to rebind after finishing the current
work by scheduling a dedicated rebind work. This maintains proper
cpu binding without adding overhead to the hot path.

* Oleg's clear work->data patch moved to the head of the queue and now
lives in the for-next branch, which will be pushed to mainline in the
next merge window.

* Applied changes from Oleg's review.

* Comments updated as suggested.

* work_flags_to_color() replaced with get_work_color().

* Fixed a nr_cwqs_to_flush bug which could cause premature flush
completion.

* Replace rewind + list_for_each_entry_safe_continue() w/

* Don't write directly to *work_data_bits(); use __set_bit() instead.

* Fixed cpu hotplug exclusion bug.

* Other misc tweaks.
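The nr_cwqs_to_flush fix above concerns a counting hazard: if a flush is allowed to complete the moment its pending-queue counter hits zero, and the first queues examined happen to be idle, the flush can "complete" before the remaining queues have even been scanned. A common cure is to bias the counter by one while scanning and drop the bias only afterwards. The following is a minimal userspace sketch of that general pattern, not the kernel code; all names (nr_to_flush, flush_all, etc.) are illustrative.

```c
#include <assert.h>

/* Illustrative sketch: bias-by-one counting to avoid premature
 * flush completion.  Not the actual workqueue implementation. */

static int nr_to_flush;      /* queues still holding pending work      */
static int flush_completed;  /* set once the flush may really finish   */

static void put_flush_ref(void)
{
	if (--nr_to_flush == 0)
		flush_completed = 1;
}

static int flush_all(const int busy[], int n)
{
	int i;

	nr_to_flush = 1;        /* bias: hold one ref while scanning */
	flush_completed = 0;

	for (i = 0; i < n; i++) {
		if (busy[i])
			nr_to_flush++;  /* dropped when queue i drains */
	}

	/* Without the bias, an all-idle scan would have completed the
	 * flush already, with nr_to_flush stuck at 0.  Here completion
	 * is impossible until the bias reference is dropped below. */
	assert(!flush_completed);

	/* simulate each busy queue draining later */
	for (i = 0; i < n; i++) {
		if (busy[i])
			put_flush_ref();
	}

	put_flush_ref();        /* drop the bias: completion may fire */
	return flush_completed;
}
```

The bias reference plays the same role as the initial count in other kernel refcount-style completion schemes: it guarantees the counter cannot reach zero while the set of participants is still being enumerated.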

Now that all scheduler bits are in place, I'll keep the tree stable
and publish it to linux-next soonish, so this hopefully is the last of
these exhausting massive postings of this patchset.

Jeff, Arjan, I think it'll be best to route the libata and async
patches through wq tree. Would that be okay?

ACK for the libata bits routing through the wq tree... you know I support this work; libata (and the kernel, generally speaking) has needed something like this for a long time.
