Re: can't mlockall() more than 128MB, is this a kernel limitation?

From: Chris Wedgwood (cw@f00f.org)
Date: Tue Aug 08 2000 - 16:37:27 EST


On Tue, Aug 08, 2000 at 01:10:36PM -0700, David Hinds wrote:

    Actually, since the present test is only done on the number of pages
    to be locked at the time of the system call, it is nearly worthless.
    You can already mlock more pages than the "limit": just do mlockall()
    with MCL_FUTURE, and allocate your big block of memory afterwards.

Long term, we need to track this properly and send a signal to the
process as it grows... I actually like the idea of using ulimits for
locked pages so we can have soft and hard limits; it also means we
can allow individual processes to lock one or two pages -- something
that can be very useful.
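
Roughly, the userspace side might look like the sketch below --
assuming the per-process cap is exposed as the RLIMIT_MEMLOCK
resource with separate soft and hard values. raise_memlock_soft_limit()
is just a name I made up for illustration, and error handling is
minimal:

        #include <stdio.h>
        #include <sys/resource.h>

        /* Sketch only: bump the soft locked-memory limit up to the hard
         * limit, assuming locked pages are accounted under RLIMIT_MEMLOCK. */
        int raise_memlock_soft_limit(void)
        {
                struct rlimit rl;

                if (getrlimit(RLIMIT_MEMLOCK, &rl) < 0) {
                        perror("getrlimit");
                        return -1;
                }

                rl.rlim_cur = rl.rlim_max;      /* soft limit -> hard limit */

                if (setrlimit(RLIMIT_MEMLOCK, &rl) < 0) {
                        perror("setrlimit");
                        return -1;
                }

                printf("RLIMIT_MEMLOCK: soft=%lu hard=%lu\n",
                       (unsigned long)rl.rlim_cur,
                       (unsigned long)rl.rlim_max);
                return 0;
        }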

That said, we also need to make sure we have global settings, lest I
do something like:

        for (;;) {
                mlock(buf, ps);         /* buf: a page-sized buffer, ps: page size */
                if (fork())
                        sleep(60);      /* parent idles, child loops and locks again */
                touch_me(buf, ps);      /* touch the pages so each process pins its own copy */
        }

and lock down gobs of memory.
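
Filled in so it actually compiles -- buf, ps and the touch step given
plausible definitions, error checking omitted -- purely to illustrate
why a per-process limit alone doesn't cap the global total:

        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t ps = sysconf(_SC_PAGESIZE);
                char *buf = malloc(ps);

                for (;;) {
                        mlock(buf, ps);         /* lock this process's page; the child
                                                   re-locks on its next pass, since locks
                                                   are not inherited across fork() */
                        if (fork())
                                sleep(60);      /* parent idles while children multiply */
                        memset(buf, 1, ps);     /* write so parent and child stop sharing
                                                   the page -- each ends up pinning its own */
                }
        }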

  --cw
