Re: [PATCH v3 0/2] iov_iter: allow iov_iter_get_pages_alloc to allocate more pages per call

From: Miklos Szeredi
Date: Mon Feb 06 2017 - 04:08:22 EST


On Mon, Feb 6, 2017 at 4:05 AM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:

> Some observations regarding the arguments:
> * stack footprint is atrocious. Consider e.g. fuse_mknod() - you
> get 16 bytes of fuse_mknod_in + 120 bytes of struct fuse_args + 128 bytes
> of fuse_entry_out. All on stack, and that's on top of whatever the
> callchain already has eaten, which might include e.g. nfsd stuff or
> ecryptfs, etc. Or fuse_get_parent(), for that matter, with 128 bytes of
> fuse_entry_out + 120 bytes of fuse_args, both on stack. This one is
> guaranteed to have a nasty call chain - fuse_get_parent() <- reconnect_one()
> <- reconnect_path() <- exportfs_decode_fh() (itself with a 256-byte array of
> char on stack) <- nfsd_set_fh_dentry() <- fh_verify() <- a bunch of call
> chains in nfsd.

Indeed.
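
For reference, the on-stack pattern being criticized looks roughly like the
sketch below. It is a simplified illustration, not the actual fs/fuse/dir.c
code (the real thing is spread across helpers); the byte counts are the ones
quoted above.

/* Simplified sketch of the on-stack footprint, not the real fuse_mknod() */
static int fuse_mknod_sketch(struct inode *dir, struct dentry *entry,
			     umode_t mode, dev_t rdev)
{
	struct fuse_mknod_in inarg;	/* 16 bytes on the stack */
	struct fuse_entry_out outarg;	/* 128 bytes on the stack */
	FUSE_ARGS(args);		/* another 120 bytes of struct fuse_args */
	struct fuse_conn *fc = get_fuse_conn(dir);

	memset(&inarg, 0, sizeof(inarg));
	inarg.mode = mode;
	inarg.rdev = new_encode_dev(rdev);

	args.in.h.opcode = FUSE_MKNOD;
	args.in.h.nodeid = get_node_id(dir);
	args.in.numargs = 1;
	args.in.args[0].size = sizeof(inarg);
	args.in.args[0].value = &inarg;
	args.out.numargs = 1;
	args.out.args[0].size = sizeof(outarg);
	args.out.args[0].value = &outarg;

	/* error handling and dentry instantiation from outarg omitted */
	return fuse_simple_request(fc, &args);
}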

> * "out" args (i.e. reply) are probably best dealt with by having
> coallocated with request itself - some already are and the sizes tend
> to be fixed and not too large (->get_link() is an exception, and it's
> probably better handled as mentioned above).
> * "in" args (request) are in some cases easily dealt with by
> coallocating with request, but there's a large class of situations where
> we are passing dentry->d_name.name and then there's fuse_symlink().
> The last one is ugly - potentially up to a page worth of data, coming
> straight from method caller; usually it's a part of getname() result,
> but e.g. ecryptfs might have it kmalloc'ed, nfsd - picked from sunrpc
> request payload, etc.
>
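
(As an aside, the co-allocation shape would be something like the sketch
below. The struct and helper are hypothetical, and real requests are set up
by fuse_request_alloc() and friends; the req->misc union mentioned further
down already works this way for some request types.)

/*
 * Hypothetical layout for the "reply co-allocated with the request" idea;
 * not the current fuse_req layout, just the shape of the proposal.
 */
struct fuse_lookup_req {
	struct fuse_req req;
	struct fuse_entry_out outarg;	/* reply lands here, not on the caller's stack */
};

static void fuse_lookup_req_wire_out(struct fuse_lookup_req *lr)
{
	lr->req.out.numargs = 1;
	lr->req.out.args[0].size = sizeof(lr->outarg);
	lr->req.out.args[0].value = &lr->outarg;
}
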
> AFAICS, your argument applies to the requests that keep
> some page(s) locked until request completion (unlock_page() done either
> by the ->end() callback or by the originator of the request). If so, I would
> rather mark those as "call request_end() early"; they seem to have
> the non-page parts of their args hosted in req->misc, so for them it's not
> a problem.

Yes, I think the page lock is the only thing that can be used to deadlock
inside fuse_dev_read/write(). So requests that don't have locked pages
should be fine with just waiting until copy_to/from_user() finishes and
only then proceeding with the abort.

Those that do have locked pages must remain abortable during
copy_to/from_user(), because the copy itself may try to acquire the
page lock.

So yes, if we want to switch to copy_to/from_user(), then we can just
fix the page refcounting for read and write requests and handle the
two cases differently.
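
For what it's worth, the refcounting part would presumably look something
like the sketch below (untested; FR_ABORTED, req->waitq.lock and
fuse_copy_pages()/fuse_copy_page() are the existing names, the helper wrapped
around them is made up). It's essentially the fuse_copy_page() change
proposed further down.

/*
 * Sketch only: take a reference on the page under req->waitq.lock so that
 * an abort can drop the request's pages while a copy in flight still has
 * something valid to work on.  The helper name is made up; the idea is to
 * call it from fuse_copy_pages() around fuse_copy_page().
 */
static struct page *fuse_grab_copy_page(struct fuse_req *req, struct page *page)
{
	struct page *ret = NULL;

	spin_lock(&req->waitq.lock);
	if (!test_bit(FR_ABORTED, &req->flags)) {
		get_page(page);
		ret = page;
	}
	spin_unlock(&req->waitq.lock);

	return ret;	/* NULL: request was aborted, stop copying */
}

The caller would put_page() the returned page once the copy is done, and
bail out of fuse_copy_pages() on a NULL return.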

> So how about this:
>
> * explicit FR_END_IMMEDIATELY on read/write-related requests
> * no FR_LOCKED flipping in lock_request()/unlock_request()
> * modifying the call of end_requests() in fuse_abort_conn() so that it
> would skip request_end() for everything that isn't marked FR_END_IMMEDIATELY
> * make fuse_copy_pages() grab page references around the actual
> fuse_copy_page() - grab req->waitq.lock, check FR_ABORTED, grab a page
> reference if it isn't set, drop req->waitq.lock and bugger off if FR_ABORTED
> was set. Adjust fuse_try_move_page() accordingly.
>
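
(For the FR_END_IMMEDIATELY part of the above, presumably something along
these lines - a sketch only: the flag doesn't exist, and how the skipped
requests eventually get ended once the copy finishes is left out.)

/*
 * Sketch of a modified end_requests(): fuse_abort_conn() ends right away
 * only the requests that may be sitting on locked pages; everything else
 * is left to be ended once its copy_to/from_user() finishes.
 */
static void end_requests(struct fuse_conn *fc, struct list_head *head)
{
	struct fuse_req *req, *next;

	list_for_each_entry_safe(req, next, head, list) {
		if (!test_bit(FR_END_IMMEDIATELY, &req->flags))
			continue;	/* ended later, when the copy is done */
		req->out.h.error = -ECONNABORTED;
		clear_bit(FR_SENT, &req->flags);
		list_del_init(&req->list);
		request_end(fc, req);
	}
}
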
> Do you see any problems with that approach as a minimal fix? If all requests
> in need of FR_END_IMMEDIATELY turn out to have the non-page parts of their
> args already embedded in req->misc, it looks like this ought to suffice. I
> could probably post something along those lines tomorrow; if you see any
> serious problems with that - please yell...

See previous mail; I don't think there's an issue with the current
code, other than it being convoluted as hell.

Thanks,
Miklos