Re: [PATCH] fs-writeback: drop wb->list_lock during blk_finish_plug()

From: Linus Torvalds
Date: Thu Sep 17 2015 - 21:50:35 EST


On Thu, Sep 17, 2015 at 5:37 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> >
>> > I'm not seeing why that should be an issue. Sure, there's some CPU
>> > overhead to context switching, but I don't see that it should be that
>> > big of a deal.
>
> It may well change the dispatch order of enough IOs for it to be
> significant on an IO bound device.

Hmm. Maybe. We obviously try to order the IOs a bit by inode, and I
could see the use of a workqueue maybe changing that ordering enough
to matter. But it still sounds a bit unlikely.

And in fact, I think I have a better explanation.

> In outright performance on my test machine, the difference in
> files/s is noise. However, the consistency looks to be substantially
> improved and the context switch rate is now running at under
> 3,000/sec.

Hmm. I don't recall seeing you mention how many context switches per
second you had before. What is it down from?

However, I think I may have found something more interesting here.

The fact is that a *normal* schedule will trigger that whole
blk_schedule_flush_plug(), but a cond_resched() or a cond_resched_lock()
doesn't actually do a normal schedule at all. Those trigger a
*preemption* schedule.

And a preemption schedule does not trigger that unplugging at all.
Why? A kernel "preemption" very much tries to avoid touching thread
state, because the whole point is that we may normally be preempting
threads in random places. So we don't run things like
sched_submit_work(), because the thread may be in the middle of
*creating* that work, and we don't want to introduce races. Preemption
scheduling can also happen with "task->state" set to sleeping, and it
won't actually sleep.
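
To make that concrete, here is roughly what the two paths look like in
kernel/sched/core.c (heavily simplified, preemption bookkeeping elided;
a sketch of the idea, not the literal source):

	/* Normal schedule path: flush the task's plugged IO before sleeping. */
	static inline void sched_submit_work(struct task_struct *tsk)
	{
		if (!tsk->state || tsk_is_pi_blocked(tsk))
			return;
		/*
		 * If we are going to sleep and we have plugged IO queued,
		 * make sure to submit it to avoid deadlocks.
		 */
		if (blk_needs_flush_plug(tsk))
			blk_schedule_flush_plug(tsk);
	}

	asmlinkage __visible void __sched schedule(void)
	{
		struct task_struct *tsk = current;

		sched_submit_work(tsk);	/* <-- this is where the unplug happens */
		__schedule();
	}

	/* Preemption path: no sched_submit_work(), so no unplug. */
	static void __sched notrace preempt_schedule_common(void)
	{
		do {
			/* PREEMPT_ACTIVE / preempt_count handling elided */
			__schedule();
		} while (need_resched());
	}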

Now, for the explicit schedules like cond_resched() and
cond_resched_lock(), those races obviously don't exist, but they happen
to share the same preemption scheduling logic.
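
For reference, this is what that shared entry point looks like today
(modulo minor version drift) - cond_resched() goes straight to the
preemption path:

	int __sched _cond_resched(void)
	{
		if (should_resched(0)) {
			preempt_schedule_common();	/* no sched_submit_work() on this path */
			return 1;
		}
		return 0;
	}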

So it turns out that as far as I can see, the whole cond_resched() path
will not start any IO at all; the IO will just be left sitting on the
thread's plug until we schedule back to that thread.

So I don't think this has anything to do with kblockd_workqueue. I
don't think it even gets to that point.
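
The plug plumbing itself looks roughly like this (compressed from
block/blk-core.c and include/linux/blkdev.h; a sketch, not the literal
code). kblockd would only enter the picture via the from_schedule path
at the end, which cond_resched() currently never reaches:

	/* A plug parks requests on the task, not on the device queue. */
	void blk_start_plug(struct blk_plug *plug)
	{
		INIT_LIST_HEAD(&plug->list);
		INIT_LIST_HEAD(&plug->mq_list);
		INIT_LIST_HEAD(&plug->cb_list);
		if (!current->plug)
			current->plug = plug;	/* submit_bio() now queues onto this */
	}

	/* Called via sched_submit_work() when the task really goes to sleep. */
	static inline void blk_schedule_flush_plug(struct task_struct *tsk)
	{
		struct blk_plug *plug = tsk->plug;

		if (plug)
			blk_flush_plug_list(plug, true);	/* from_schedule=true: kblockd runs the queue */
	}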

I may be missing something, but just to humor me, can you test the
attached patch *without* Chris's patch to do explicit plugging? This
should make cond_resched() and cond_resched_lock() run the unplugging.

It may be entirely broken; I haven't thought this entirely through.

Linus
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 97d276ff1edb..388ea9e7ab8a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4548,6 +4548,7 @@ SYSCALL_DEFINE0(sched_yield)
 int __sched _cond_resched(void)
 {
 	if (should_resched(0)) {
+		sched_submit_work(current);
 		preempt_schedule_common();
 		return 1;
 	}
@@ -4572,9 +4573,10 @@ int __cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
-		if (resched)
+		if (resched) {
+			sched_submit_work(current);
 			preempt_schedule_common();
-		else
+		} else
 			cpu_relax();
 		ret = 1;
 		spin_lock(lock);
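
Purely as an illustration of the kind of loop this affects (hypothetical
helper name, not the actual writeback code): with the change above, the
cond_resched() inside a plugged section can hand the accumulated IO off
to the block layer instead of leaving it parked on current->plug:

	static void writeback_some_inodes(struct list_head *head)
	{
		struct blk_plug plug;
		struct inode *inode;

		blk_start_plug(&plug);
		list_for_each_entry(inode, head, i_io_list) {
			queue_io_for_inode(inode);	/* hypothetical: builds and submits the bios */
			cond_resched();			/* before: IO stays plugged; after: flushed if we reschedule */
		}
		blk_finish_plug(&plug);
	}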