Re: [Xen-devel] [PATCH] xen-blkback: use bigger array for batch gnt operations

From: Roger Pau Monné
Date: Thu Aug 01 2013 - 10:19:00 EST


On 01/08/13 14:30, David Vrabel wrote:
> On 01/08/13 13:08, Roger Pau Monne wrote:
>> Right now the maximum number of grant operations that can be batched
>> in a single request is BLKIF_MAX_SEGMENTS_PER_REQUEST (11). This was
>> OK before indirect descriptors because the maximum number of segments
>> in a request was 11, but with the introduction of indirect
>> descriptors the maximum number of segments in a request has been
>> increased past 11.
>>
>> The memory used by the structures that are passed in the hypercall was
>> allocated from the stack, but if we have to increase the size of the
>> array we can no longer use stack memory, so we have to pre-allocate
>> it.
>>
>> This patch increases the maximum size of batch grant operations and
>> replaces the use of stack memory with pre-allocated memory, that is
>> reserved when the blkback instance is initialized.
> [...]
>> --- a/drivers/block/xen-blkback/xenbus.c
>> +++ b/drivers/block/xen-blkback/xenbus.c
> [...]
>> @@ -148,6 +155,16 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>> if (!req->indirect_pages[j])
>> goto fail;
>> }
>> + req->map = kcalloc(GNT_OPERATIONS_SIZE, sizeof(req->map[0]), GFP_KERNEL);
>> + if (!req->map)
>> + goto fail;
>> + req->unmap = kcalloc(GNT_OPERATIONS_SIZE, sizeof(req->unmap[0]), GFP_KERNEL);
>> + if (!req->unmap)
>> + goto fail;
>> + req->pages_to_gnt = kcalloc(GNT_OPERATIONS_SIZE, sizeof(req->pages_to_gnt[0]),
>> + GFP_KERNEL);
>> + if (!req->pages_to_gnt)
>> + goto fail;
>
> Do these need to be per-request? Or can they all share a common set of
> arrays?

They need to be per-request: we cannot share a common set of arrays
unless we serialize the unmap of grants using a spinlock (like we
already do when writing the response on the ring).
