Re: [RFC 08/13] cifs: Add a function to read into an iter from a socket

From: David Howells
Date: Thu Jan 26 2023 - 11:42:32 EST


David Laight <David.Laight@xxxxxxxxxx> wrote:

> > It shouldn't matter as the only problematic iterator is ITER_PIPE
> > (advancing that has side effects) - and splice_read is handled specially
> > by patch 4. The problem with splice_read with the way cifs works is that
> > it likes to subdivide its read/write requests across multiple reqs and
> > then subsubdivide them if certain types of failure occur. But you can't
> > do that with ITER_PIPE.
>
> I was thinking that even if ok at the moment it might be troublesome later.
> Somewhere I started writing a patch to put the iov_cache[] for user
> requests into the same structure as the iterator.
> Copying those might cause oddities.

Well, there is dup_iter(), but that copies the vector table, which isn't what
we want in a number of cases. You probably need to come up with a wrapper for
that.
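
For comparison, copying by plain assignment shares the vector table rather
than duplicating it, so a wrapper is really just a matter of picking which of
the two behaviours you want.  Purely illustrative, with a hypothetical helper
name (struct iov_iter lives in <linux/uio.h>):

	/*
	 * dup_iter() kmemdup()s the iovec/bvec array as well as copying the
	 * iterator state, and the returned pointer has to be freed later.
	 * If all that's wanted is a private cursor over the same vectors,
	 * assignment is sufficient.
	 */
	static inline void iov_iter_clone(struct iov_iter *new,
					  const struct iov_iter *old)
	{
		*new = *old;	/* Shares the underlying vector array */
	}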

But we copy iters by assignment in a lot of places. With regard to msghdr, it
might be worth giving it an iterator pointer rather than its own embedded
iterator.
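
The assignment copy currently looks roughly like:

	struct msghdr smb_msg = { .msg_iter = *iter };

and the socket calls then advance the embedded copy rather than the caller's
iterator.  Making msg_iter a pointer would invert that - though that's a
change to a core struct and only a thought at this point.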

I've just had a go at modifying the code.
cifs_read_iter_from_socket() wants to copy the iterator and truncate the copy,
which makes things slightly trickier. For both of the call sites,
receive_encrypted_read() and cifs_readv_receive(), it can do the truncation
before calling cifs_read_iter_from_socket(), I think - but it may have to undo
the truncation afterwards.
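
Roughly this shape in each caller, I think (sketch only; the field names are
illustrative rather than exactly what's in the patch):

	size_t before = iov_iter_count(&rdata->iter);

	/* Assumes data_len <= before */
	iov_iter_truncate(&rdata->iter, data_len);
	rc = cifs_read_iter_from_socket(server, &rdata->iter, data_len);
	/* Put back what the truncation hid, allowing for anything consumed */
	iov_iter_reexpand(&rdata->iter,
			  iov_iter_count(&rdata->iter) + before - data_len);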

> > I build an ITER_BVEC from ITER_PIPE, ITER_UBUF and ITER_IOVEC in the top
> > levels with pins inserted as appropriate and hand the ITER_BVEC down. For
> > user-backed iterators it has to be done this way because the I/O may get
> > shuffled off to a different thread.
>
> For sub-page sided transfers it is probably worth doing a bounce buffer
> copy of user requests - just to save all the complex page pinning code.

You can't avoid the pinning for async DIO reads. But that sort of bounce
buffering is something I'm intending to do in netfslib.
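
For the synchronous sub-page case, the bounce buffer idea amounts to
something like the below (illustrative only); for an async DIO read, the data
has to be copied back to the user buffer at completion time, possibly from a
worker thread, so the user pages have to be pinned anyway:

	/* Small synchronous write: copy out of the user-backed iterator and
	 * send from a kernel buffer rather than pinning the user pages.
	 */
	void *bounce = kmalloc(len, GFP_KERNEL);
	struct kvec kv = { .iov_base = bounce, .iov_len = len };
	struct iov_iter bi;

	if (!bounce)
		return -ENOMEM;
	if (copy_from_iter(bounce, len, from) != len) {
		kfree(bounce);
		return -EFAULT;
	}
	iov_iter_kvec(&bi, ITER_SOURCE, &kv, 1, len);
	/* ...then hand 'bi' down in place of the user-backed iterator */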

David