Re: [RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()

From: Michael S. Tsirkin
Date: Tue Mar 12 2019 - 07:54:08 EST


On Tue, Mar 12, 2019 at 03:17:00PM +0800, Jason Wang wrote:
>
> > On 2019/3/12 11:52, Michael S. Tsirkin wrote:
> > On Tue, Mar 12, 2019 at 10:59:09AM +0800, Jason Wang wrote:
> > > On 2019/3/12 2:14, David Miller wrote:
> > > > From: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
> > > > Date: Mon, 11 Mar 2019 09:59:28 -0400
> > > >
> > > > > On Mon, Mar 11, 2019 at 03:13:17PM +0800, Jason Wang wrote:
> > > > > > On 2019/3/8 10:12, Christoph Hellwig wrote:
> > > > > > > On Wed, Mar 06, 2019 at 02:18:07AM -0500, Jason Wang wrote:
> > > > > > > > This series tries to access virtqueue metadata through a kernel
> > > > > > > > virtual address instead of the copy_user() friends, since those
> > > > > > > > have too much overhead from checks, speculation barriers, or even
> > > > > > > > hardware feature toggling. This is done by setting up a kernel
> > > > > > > > address through vmap() and registering an MMU notifier for
> > > > > > > > invalidation.
> > > > > > > >
> > > > > > > > Testing shows about a 24% improvement in TX PPS. TCP_STREAM doesn't
> > > > > > > > see an obvious improvement.
> > > > > > > How is this going to work for CPUs with virtually tagged caches?
> > > > > > Is there anything specific you are worried about?
> > > > > If caches have virtual tags then kernel and userspace view of memory
> > > > > might not be automatically in sync if they access memory
> > > > > through different virtual addresses. You need to do things like
> > > > > flush_cache_page, probably multiple times.
> > > > "flush_dcache_page()"
> > >
> > > I see. Then I think the current set_bit_to_user() is suspect; we are
> > > probably missing a flush_dcache_page() there:
> > >
> > >
> > > static int set_bit_to_user(int nr, void __user *addr)
> > > {
> > >         unsigned long log = (unsigned long)addr;
> > >         struct page *page;
> > >         void *base;
> > >         int bit = nr + (log % PAGE_SIZE) * 8;
> > >         int r;
> > >
> > >         r = get_user_pages_fast(log, 1, 1, &page);
> > >         if (r < 0)
> > >                 return r;
> > >         BUG_ON(r != 1);
> > >         base = kmap_atomic(page);
> > >         set_bit(bit, base);
> > >         kunmap_atomic(base);
> > >         set_page_dirty_lock(page);
> > >         put_page(page);
> > >         return 0;
> > > }
> > >
> > > Thanks
> > I think you are right. The correct fix, though, is to re-implement
> > it using asm with pagefault handling, not gup.
>
>
> I agree, but that requires introducing new asm helpers for all archs,
> which is not trivial.

We can have a generic implementation using kmap.

> At least for -stable, we need the flush?
>
>
> > Three atomic ops per bit is way too expensive.
>
>
> Yes.
>
> Thanks

See James's reply. I stand corrected: we do kunmap, so there is no need to flush.

--
MST