inode highmem imbalance fix [Re: Bug with shared memory.]

From: Denis Lunev (den@asplinux.ru)
Date: Thu May 30 2002 - 06:25:04 EST


Hello!

The patch itself cures my problems, but only after a small fix for an
uninitialized variable that was resulting in an OOPS.

Regards,
        Denis V. Lunev


--- linux/fs/inode.c.old Wed May 29 20:16:17 2002
+++ linux/fs/inode.c Wed May 29 20:17:08 2002
@@ -669,6 +669,7 @@
         struct inode * inode;
 
         count = pass = 0;
+        entry = &inode_unused;
 
         spin_lock(&inode_lock);
         while (goal && pass++ < 2) {
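
For what it's worth, here is a minimal userspace sketch of why the cursor
has to start at the list head. It is hypothetical (the real loop body in
fs/inode.c is not shown above, and toy_inode/list_add here are stand-ins),
but it shows the failure mode: if the scan resumes from 'entry' before
'entry' has been set, the first entry->prev dereference hits a garbage
pointer and oopses.

#include <stdio.h>
#include <stddef.h>

/* Toy version of the kernel's circular list, just to illustrate the bug. */
struct list_head {
        struct list_head *next, *prev;
};

struct toy_inode {
        int ino;
        struct list_head i_list;
};

#define list_entry(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

static struct list_head inode_unused = { &inode_unused, &inode_unused };

static void list_add(struct list_head *new, struct list_head *head)
{
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
}

int main(void)
{
        struct toy_inode a = { .ino = 1 }, b = { .ino = 2 };
        struct list_head *entry;        /* the cursor the patch initializes */
        int pass;

        list_add(&a.i_list, &inode_unused);
        list_add(&b.i_list, &inode_unused);

        entry = &inode_unused;          /* the one-line fix: start at the head */

        /* Two-pass scan that resumes from 'entry'; without the line above,
         * the very first entry->prev below dereferences an uninitialized
         * pointer. */
        for (pass = 0; pass < 2; pass++) {
                for (entry = entry->prev; entry != &inode_unused; entry = entry->prev) {
                        struct toy_inode *inode = list_entry(entry, struct toy_inode, i_list);
                        printf("pass %d: inode %d\n", pass, inode->ino);
                }
        }
        return 0;
}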

Andrea Arcangeli writes:
> On Mon, May 20, 2002 at 06:30:40AM +0200, Andrea Arcangeli wrote:
> > As the next thing I'll go ahead on the inode/highmem imbalance reported
> > by Alexey over the weekend. Then the only pending thing before the next -aa is
>
> Here it is; you should apply it together with vm-35, which you also need
> for the bh/highmem balance (or on top of 2.4.19pre8aa3). I tested it only
> lightly on uml and it hasn't broken so far, so be careful because it's not
> very well tested yet. Along the lines of what Alexey suggested originally,
> if the goal isn't reached, in a second pass we shrink the cache too, but
> only if the cache is the only reason for the "pinning" behaviour of the
> inode. If, for example, there are dirty blocks of metadata or of data
> belonging to the inode, we call wakeup_bdflush instead and never shrink
> the cache in that case. If the inode itself is dirty as well, we let the
> two passes fail so the work is scheduled for keventd. This logic should
> ensure we never fall into shrinking the cache for no good reason and
> that we free the cache only for the inodes that we actually go ahead and
> free. (Basically only dirty pages set with SetPageDirty aren't trapped
> by the logic before calling the invalidate, like ramfs, but that's
> expected of course; those pages cannot be damaged by the non-destructive
> invalidate anyway.)
>
> Comments?
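
To make the decision tree above concrete, here is a rough userspace model
of the described policy. All names below (toy_inode, wakeup_bdflush,
invalidate_inode_pages, try_to_free) are stand-ins for illustration, not
the actual 2.4 interfaces: dirty buffers push the work to bdflush, a dirty
inode is left for keventd, and only an inode pinned by nothing but clean
page cache gets its cache invalidated (non-destructively) on the second
pass and is then freed.

#include <stdio.h>

struct toy_inode {
        int ino;
        int has_dirty_buffers;  /* dirty metadata/data blocks */
        int inode_is_dirty;     /* the inode itself needs writeback */
        int cached_pages;       /* clean page cache pinning the inode */
};

static void wakeup_bdflush(void)
{
        puts("  -> dirty buffers: wake bdflush, never shrink the cache");
}

static void invalidate_inode_pages(struct toy_inode *inode)
{
        printf("  -> pass 2: drop %d clean cached pages (non-destructive)\n",
               inode->cached_pages);
        inode->cached_pages = 0;
}

/* Returns 1 if the inode could be freed on this pass, 0 otherwise. */
static int try_to_free(struct toy_inode *inode, int pass)
{
        printf("inode %d, pass %d:\n", inode->ino, pass);

        if (inode->has_dirty_buffers) {
                wakeup_bdflush();
                return 0;
        }
        if (inode->inode_is_dirty) {
                puts("  -> dirty inode: let both passes fail, keventd handles it");
                return 0;
        }
        if (inode->cached_pages) {
                if (pass < 2) {
                        puts("  -> pinned only by clean cache: skip on pass 1");
                        return 0;
                }
                invalidate_inode_pages(inode);
        }
        puts("  -> freeing inode");
        return 1;
}

int main(void)
{
        struct toy_inode inodes[] = {
                { 1, 1, 0, 4 },   /* dirty buffers     */
                { 2, 0, 1, 0 },   /* dirty inode       */
                { 3, 0, 0, 8 },   /* only clean cache  */
        };
        int pass, i, freed = 0, goal = 3;

        for (pass = 1; pass <= 2 && freed < goal; pass++)
                for (i = 0; i < 3; i++)
                        freed += try_to_free(&inodes[i], pass);

        printf("freed %d of %d\n", freed, goal);
        return 0;
}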



