Re: [PATCH] Use min of two prio settings in calculating distress for reclaim

From: Nick Piggin
Date: Tue Oct 17 2006 - 13:15:55 EST


Martin Bligh wrote:
Distress is a per-zone thing. It is precisely that way because there *are*
different types of reclaim, and you don't want a crippled reclaimer (which
might indeed be having trouble reclaiming stuff) to be able to declare that
the system is in distress.

If they are the *only* reclaimer, then OK, distress will go up.
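
For reference, "distress" comes from per-zone state; in the 2.6-era
vmscan.c it is computed roughly as:

	/* zone->prev_priority records how hard recent reclaim passes on
	 * this zone had to scan: DEF_PRIORITY (12) = easy, 0 = desperate */
	distress = 100 >> zone->prev_priority;

so each zone carries its own memory-pressure history.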


So you'd rather the "crippled" reclaimer went and fired the OOM killer
and shot someone instead?

No, so I fixed that.
http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=408d85441cd5a9bd6bc851d677a10c605ed8db5f

I don't see why we should penalise them,
especially as the dirty page throttling is global, and will just kick
pretty much anyone trying to do an allocation. There's nothing magic

How does dirty page throttling kick anyone trying to do an allocation?
It kicks at page dirtying time.
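
That is, the throttle lives in the buffered-write path, not in the
allocator. Simplified from the 2.6-era generic_file_buffered_write():

	/* after copying user data into the page cache and dirtying the page: */
	balance_dirty_pages_ratelimited(mapping);
	/* the dirtying task sleeps here, and kicks off writeback, once the
	 * global dirty thresholds are exceeded; a task that merely
	 * allocates pages never passes through this path */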

about the "crippled" reclaimer as you put it. They're doing absolutely
nothing wrong, or that they should be punished for. They need a page.

When did I say anything about magic or being punished? They need a page
and they will get it when enough memory gets freed. Pages being reclaimed
by process A may be allocated by process B just fine.

I don't agree that the thing to aim for is ensuring everyone is able
to reclaim something.

And why do you ignore the other side of the coin, where now reclaimers
that are easily able to make progress are being made to swap stuff out?


Because I'd rather err on the side of moving a few mapped pages from the
active to the inactive list than cause massive latencies for a page
allocation that's dropping into direct reclaim and/or going OOM.

We shouldn't go OOM. And there are latencies everywhere and this won't
fix them. A GFP_NOIO allocator can't swap out pages at all, for example.
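
The gate is in shrink_page_list(): whether a dirty page may be written
out depends on the reclaimer's gfp_mask. Roughly, from the 2.6-era
source (simplified):

	may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
		(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
	...
	if (PageDirty(page)) {
		if (!may_enter_fs)
			goto keep_locked;	/* cannot do the I/O; skip the page */
		...
	}

So a GFP_NOIO reclaimer simply skips dirty swap-backed pages, no matter
how low the priority goes.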

If the GFP_NOFS reclaimer is having a lot of trouble reclaiming, and so
you decide to turn on reclaim_mapped, then it is not suddenly going to
be able to free those pages.
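
For reference, reclaim_mapped here is decided by the 2.6-era
refill_inactive_zone() heuristic, which is roughly:

	/* distress as computed above; mapped_ratio is the percentage
	 * of machine memory that is mapped into address spaces */
	swap_tendency = mapped_ratio / 2 + distress + sc->swappiness;
	if (swap_tendency >= 100)
		reclaim_mapped = 1;	/* scan mapped pages as well as cache */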


Well it's certainly not going to work if we don't even try. There were
ZERO pages in the inactive list at this point. The system is totally
frigging hosed and we're not even trying to reclaim pages because
we're in deluded-happy-la-la land and we think everything is fine.

So that could be the temp_priority race. If no progress is being made
anywhere, the current logic (minus races) says that prev_priority should
reach 0, regardless of whether the reclaim is GFP_NOFS or whatever.
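
The race, roughly, in the 2.6-era try_to_free_pages() path:

	/* on entry, for each zone: */
	zone->temp_priority = DEF_PRIORITY;	/* 12 */

	/* each pass of the priority loop, in shrink_zones(): */
	zone->temp_priority = priority;
	if (zone->prev_priority > priority)
		zone->prev_priority = priority;

	/* on exit, for each zone: */
	zone->prev_priority = zone->temp_priority;

A second reclaimer that enters late and exits quickly at priority 12
rewrites prev_priority to 12, wiping out the progress recorded by one
still scanning at priority 0.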

This is what happens as we kick down prio levels in one thread:

priority = 12  active_distress =  0  swap_tendency =   0  gfp_mask = d0
priority = 12  active_distress =  0  swap_tendency =   0  gfp_mask = d0
priority = 11  active_distress = 25  swap_tendency = 106  gfp_mask = d0
priority = 10  active_distress = 25  swap_tendency = 106  gfp_mask = d0
priority =  9  active_distress =  0  swap_tendency =  81  gfp_mask = d0
priority =  8  active_distress =  0  swap_tendency =  81  gfp_mask = d0
priority =  7  active_distress = 25  swap_tendency = 106  gfp_mask = d0
priority =  6  active_distress = 25  swap_tendency = 106  gfp_mask = d0
priority =  5  active_distress = 25  swap_tendency = 106  gfp_mask = d0
priority =  4  active_distress = 25  swap_tendency = 106  gfp_mask = d0
priority =  3  active_distress = 25  swap_tendency = 106  gfp_mask = d0
priority =  2  active_distress = 50  swap_tendency = 131  gfp_mask = d0
priority =  1  active_distress =  0  swap_tendency =  81  gfp_mask = d0
priority =  0  active_distress =  0  swap_tendency =  81  gfp_mask = d0

Notice that distress is not kicking up as priority kicks down (see
priorities 1 and 0 at the end), because some other idiot reset
prev_priority back to 12.
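
Decoding the trace with the 2.6-era formulas (distress = 100 >>
prev_priority; the constant 81 is swap_tendency - distress on every
line, i.e. mapped_ratio/2 + swappiness):

	distress 25  =>  prev_priority == 2	(100 >> 2 == 25)
	distress 50  =>  prev_priority == 1	(100 >> 1 == 50)
	distress  0  =>  prev_priority >= 7	(100 >> 7 ==  0)

So when the scanner's own priority reaches 1 and 0 at the end, distress
collapsing to 0 means prev_priority had just been yanked back up toward
12 by a racing reclaimer.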

Fine, so fix that race rather than papering over it by using the min
of prev_priority and current priority.
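
For reference, the change under discussion is the clamped calculation
from the subject line, roughly:

	distress = 100 >> min(zone->prev_priority, priority);

which ensures a racing reset of prev_priority can never make distress
lower than the scanner's own current priority implies.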

--
SUSE Labs, Novell Inc.
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/