Re: [PATCH 1/4] compcache: xvmalloc memory allocator

From: Pekka Enberg
Date: Mon Aug 24 2009 - 15:44:00 EST


Hi Nitin,

On Mon, Aug 24, 2009 at 10:36 PM, Nitin Gupta <ngupta@xxxxxxxxxx> wrote:
> On 08/24/2009 11:03 PM, Pekka Enberg wrote:
>
> <snip>
>
>> On Mon, Aug 24, 2009 at 7:37 AM, Nitin Gupta <ngupta@xxxxxxxxxx> wrote:
>>>
>>> +/**
>>> + * xv_malloc - Allocate block of given size from pool.
>>> + * @pool: pool to allocate from
>>> + * @size: size of block to allocate
>>> + * @pagenum: page no. that holds the object
>>> + * @offset: location of object within pagenum
>>> + *
>>> + * On success, <pagenum, offset> identifies the allocated block
>>> + * and 0 is returned. On failure, <pagenum, offset> is set to
>>> + * 0 and -ENOMEM is returned.
>>> + *
>>> + * Allocation requests with size > XV_MAX_ALLOC_SIZE will fail.
>>> + */
>>> +int xv_malloc(struct xv_pool *pool, u32 size, u32 *pagenum, u32 *offset,
>>> +                                                       gfp_t flags)
>
> <snip>
>
>>
>> What's the purpose of passing PFNs around? There's quite a lot of PFN
>> to struct page conversion going on because of it. Wouldn't it make
>> more sense to return (and pass) a pointer to struct page instead?
>
> PFNs are 32-bit on all archs, while a 'struct page *' is 32-bit or 64-bit
> depending on the arch. ramzswap allocates a table entry <pagenum, offset>
> corresponding to every swap slot, so the table size would unnecessarily
> grow on 64-bit archs. The same argument applies to the xvmalloc free list
> sizes.
>
> Also, xvmalloc and ramzswap themselves do the PFN -> 'struct page *'
> conversion only when freeing the page or when a dereferenceable pointer
> is needed.
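
(Just to make sure I follow the size argument: you mean per-slot table
entries roughly like the ones below? This is my own sketch with made-up
names, not the actual ramzswap layout, but it shows why a u32 page number
keeps the entry at 8 bytes on 64-bit while a struct page pointer pads it
out to 16.)

struct table_entry_pfn {
	u32 pagenum;	/* PFN of the page holding the object */
	u16 offset;	/* offset of the object within that page */
	u16 flags;
};			/* 8 bytes on both 32-bit and 64-bit */

struct table_entry_ptr {
	struct page *page;	/* 8 bytes on 64-bit */
	u16 offset;
	u16 flags;
};			/* 16 bytes on 64-bit after padding */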

I still don't see why the APIs have to work on PFNs. You can obviously do
the conversion once for store and load. Look at what the code does: it
converts a struct page to a PFN just to do the reverse for kmap(). I
think that could be cleaned up by passing struct page around.
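
Roughly what I have in mind, as an untested sketch (the function names and
the struct-page variant of xv_malloc() are made up for illustration, not
actual code):

static int store_obj_pfn(struct xv_pool *pool, const void *src, size_t len)
{
	u32 pagenum, offset;
	void *dst;
	int err;

	err = xv_malloc(pool, len, &pagenum, &offset, GFP_NOIO);
	if (err)
		return err;

	/* The PFN the allocator just handed back is converted straight
	 * back to a struct page only so we can kmap() it. */
	dst = kmap_atomic(pfn_to_page(pagenum), KM_USER0);
	memcpy(dst + offset, src, len);
	kunmap_atomic(dst, KM_USER0);
	return 0;
}

static int store_obj_page(struct xv_pool *pool, const void *src, size_t len)
{
	struct page *page;
	u32 offset;
	void *dst;
	int err;

	/* Hypothetical variant where the allocator returns the struct
	 * page it already has, so no pfn_to_page() round-trip. */
	err = xv_malloc(pool, len, &page, &offset, GFP_NOIO);
	if (err)
		return err;

	dst = kmap_atomic(page, KM_USER0);
	memcpy(dst + offset, src, len);
	kunmap_atomic(dst, KM_USER0);

	/* The swap-slot table can still record page_to_pfn(page) if the
	 * 32-bit entry size matters there. */
	return 0;
}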

Pekka