Re: Shared memory

Brad Pepers (pepersb@cuug.ab.ca)
Wed, 31 Jul 1996 18:28:35 -0600 (MDT)


> Hey,
>
> I'm working on a project where a process should maintain a huge memory
> array (about 30-40 MB).
>
> Other processes are communicating with this process via connect/bind/accept.
> Each time a connection is made, the server process forks. But all my
> server processes have to have access to the same memory array.
>
> How do I do this?
>
> shmget can only take 16 MB segments. Why? Can I change the #define in
> include/asm/shmparam.h?

I think you can increase the shared memory size to a max of 128 MB. The
linux/include/asm/shmparam.h header defines the system-wide shared
memory limit (SHMALL) as (1 << 15) pages, so with 4 KB pages that's
128 MB. Given that, I think you should be able to raise SHMMAX to, say,
64 MB, which is large enough for what you want to do and leaves 64 MB
for anything else that uses shm.

> I cannot use fork, because it makes a new copy of the array... I have
> read about the clone call; can I use it?

You could take a look at one of the threads packages. I think there is
one using clone now (but it's likely still in beta). I wonder what the
downsides would be of using a shared mmap of a 40 MB file?

> Please help...
>
> Thomas Bjoerk
> Denmark

======================================================================
Brad Pepers                           Proud supporter of Linux in
Ramparts Management Group Ltd.        Canada!
ramparts@agt.net
http://www.agt.net/public/ramparts    Linux rules!