Cannot use SHM_HUGETLB as a regular user

From: Ravikiran G Thirumalai
Date: Wed Feb 04 2009 - 17:04:46 EST


It looks like a regular user cannot shmget more than 64k of memory backed by
hugetlb pages -- at least not by following Documentation/vm/hugetlbpage.txt.

Quote Documentation/vm/hugetlbpage.txt:

"Users who wish to use hugetlb page via shared memory segment should be a
member of a supplementary group and system admin needs to configure that
gid into /proc/sys/vm/hugetlb_shm_group."

However, setting up hugetlb_shm_group with the right gid does not work!
It looks like hugetlb uses mlock-based rlimits (RLIMIT_MEMLOCK), which cause
shmget with SHM_HUGETLB to fail with -ENOMEM. Setting the right mlock rlimits
through /etc/security/limits.conf does work, though (regardless of
hugetlb_shm_group).

I understand most Oracle users raise this rlimit to get large pages.
But why does this need to be based on mlock at all? We already have shmmax
and shmall to restrict this resource.

As I see it, we have the following options to fix this inconsistency:

1. Do not depend on RLIMIT_MEMLOCK for hugetlb shm mappings. If a user
has CAP_IPC_LOCK, or belongs to the group in /proc/sys/vm/hugetlb_shm_group,
he should be able to allocate shm segments subject only to shmmax and
shmall, OR
2. Update the hugetlbpage documentation to mention the RLIMIT_MEMLOCK-based
limitation, and remove the now-useless /proc/sys/vm/hugetlb_shm_group sysctl.

Which one is better? I am leaning towards option 1 and have a patch ready
for it, but I might be missing some historical reason for tying SHM_HUGETLB
to RLIMIT_MEMLOCK.

Thanks,
Kiran