does swsusp suck after resume for you? [was Re: [ck] Re: Faster resuming of suspend technology.]

From: Pavel Machek
Date: Mon Mar 13 2006 - 06:35:15 EST


On Po 13-03-06 12:13:26, Andreas Mohr wrote:
> Hi,
>
> On Mon, Mar 13, 2006 at 11:43:15AM +0100, Pavel Machek wrote:
> > On Po 13-03-06 21:35:59, Con Kolivas wrote:
> > > wouldn't be too hard to add a special post_resume_swap_prefetch() which
> > > aggressively prefetches for a while. Excuse my ignorance, though, as I know
> > > little about swsusp. Are there pages still on swap space after a resume
> > > cycle?
> >
> > Yes, there are, most of the time. Let me explain:
> >
> > swsusp needs half of memory free. So it shrinks caches (by emulating
> > memory pressure) so that half of memory is free (and optionally shrinks
> > them some more). Pages are pushed into swap by this process.
> >
> > Now, that works perfectly okay for me (with 1.5GB machine). I can
> > imagine that on 128MB machine, shrinking caches to 64MB could hurt a
> > bit. I guess we'll need to find someone interested with small memory
> > machine (if there are no such people, we can happily ignore the issue
> > :-).
>
> Why not simply use the mem= boot parameter?
> Or is that impossible for some reason in this specific case?
>
> I have a P3/450 256M machine where I could do some tests if really needed.

Yes, I can do mem=128M... but then, I'd prefer not to code workarounds
for machines no one uses any more.
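
For reference, the small-memory case can be simulated on a big box by
limiting how much RAM the kernel sees at boot; a hypothetical GRUB
legacy entry (the mem= value and paths are just examples):

```
# /boot/grub/menu.lst -- append mem= to the kernel line to make a
# large-memory machine behave like a 128MB one for testing:
kernel /vmlinuz root=/dev/hda1 mem=128M
```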

So, I'm looking for a volunteer:

1) Does swsusp work for you? (no => bugzilla, but not interesting
here)

2) Does interactivity suck after resume? (no => you are not the right
person)

3) Does it still suck after setting image_size to a high value? (no =>
good, we have a simple fix)

[If nobody gets this far, I'll just assume the problem is solved, or
hits too few people to be interesting.]

4) Congratulations, you are the right person to help. Could you test
whether Con's patches help?
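
For step 3: image_size is tunable at runtime through sysfs (see
Documentation/power/interface.txt); a minimal sketch, run as root --
the value below is only an example, not a recommendation:

```
# Show the current target image size in bytes
cat /sys/power/image_size
# Raise it so swsusp frees less memory before writing the image;
# the image gets bigger, but fewer pages end up pushed into swap.
echo 500000000 > /sys/power/image_size
```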
Pavel
--
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/