Re: [PATCHv 2] tcp: properly initialize tcp memory limits part 2 (fix nfs regression)

From: Sergei Trofimovich
Date: Mon Mar 05 2012 - 13:23:30 EST


> >>>>>>>> The change looks like a typo (division flipped to multiplication):
> >>>>>>>>> limit = nr_free_buffer_pages() / 8;
> >>>>>>>>> limit = nr_free_buffer_pages() << (PAGE_SHIFT - 10);
> >>>>>>> Hi, thanks for the report. It's not a typo. It was previously:
> >>>>>>> sysctl_tcp_mem[1] << (PAGE_SHIFT - 7). Looks like we need to do the
> >>>>>>> limit check before shifting the value. Please try the following patch, thanks.
> >>>>>> Still does not help. I tested it by checking the sha1sum of a large file over NFS
> >>>>>> (small files seem to work sometimes):
> >>>>>>
> >>>>>> $ strace sha1sum /gentoo/distfiles/gcc-4.6.2.tar.bz2
> >>>>>> ...
> >>>>>> open("/gentoo/distfiles/gcc-4.6.2.tar.bz2", O_RDONLY
> >>>>>> <HUNG>
> Hi Sergei:
>
> Looks like the client does not even start to read the file.
> >>>>>> After a certain timeout dmesg gets odd spam:
> >>>>>> [ 314.848094] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.848134] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.848145] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.957047] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.957066] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.957075] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.957085] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.957100] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.958023] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.958035] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.958044] nfs: server vmhost not responding, still trying
> >>>>>> [ 314.958054] nfs: server vmhost not responding, still trying
> >>>>>>
> >>>>>> These look like bogus messages. Might be related to mishandled timings
> >>>>>> somewhere else or a bug in the NFS code.
>
> Did you use a virtual machine as your NFS server? Have you tried to
> bisect the server side code?
> >>>>> And after 120 seconds the hung-task messages show it might be an OOM issue,
> >>>>> likely caused by the patch, as this is a 2GB RAM + 4GB swap amd64 box
> >>>>> not running anything heavy:
> >>>> That is a bit weird.
> >>>>
> >>>> First, because with Jason's patch we should end up with the very same
> >>>> calculation, in the exact same order, as in older kernels.
> >>>> Second, because by shifting << 10, you should be ending up with very
> >>>> small numbers, effectively having tcp_rmem[1] == tcp_rmem[2], and the
> >>>> same for wmem.
> >>>>
> >>>> Can you share which numbers you end up with at
> >>>> /proc/sys/net/ipv4/tcp_{r,w}mem ?
> >>>>
> >>> Sure:
> >>>
> >>> $ cat /proc/sys/net/ipv4/tcp_{r,w}mem
> >>> 4096 87380 1999072
> >>> 4096 16384 1999072
> >>>
> >> Sergei,
> >>
> >> Sorry for not being clearer. I was expecting you'd post those values
> >> both in the scenario in which you see the bug, and in the scenario you
> >> don't.
> > Ah, I see. Sorry. Patches are on top of v3.3-rc5-166-g1f033c1. Buggy one:
> >> - limit = nr_free_buffer_pages() << (PAGE_SHIFT - 10);
> >> - limit = max(limit, 128UL);
> >> + limit = nr_free_buffer_pages() / 8;
> >> + limit = max(limit, 128UL) << (PAGE_SHIFT - 7);
> >> max_share = min(4UL*1024*1024, limit);
> >> + printk(KERN_INFO "TCP: max_share=%lu\n", max_share);
> > $ cat /proc/sys/net/ipv4/tcp_{r,w}mem
> > 4096 87380 1999072
> > 4096 16384 1999072
>
> Nothing strange to me.
> > Working one:
> >> - limit = nr_free_buffer_pages() << (PAGE_SHIFT - 10);
> >> + limit = nr_free_buffer_pages() >> (PAGE_SHIFT - 10);
> >> limit = max(limit, 128UL);
> >> max_share = min(4UL*1024*1024, limit);
> >> + printk(KERN_INFO "TCP: max_share=%lu\n", max_share);
> > $ cat /proc/sys/net/ipv4/tcp_{r,w}mem
> > 4096 87380 124942
> > 4096 16384 124942
>
> This one looks small to me: tcp_{r,w}mem is counted in bytes while limit is
> counted in pages, so we need to shift by PAGE_SHIFT.
>
> As I can't reproduce this locally, in order to narrow down the problem,
> could you please help check whether the issue was
> introduced/eliminated by commit 4acb4190 or 3dc43e3?
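
For reference, here is a rough userspace sketch of the two calculations
quoted above. The constants are assumptions on my side (PAGE_SHIFT of 12
for amd64 with 4KiB pages, and a nr_free_buffer_pages() value picked so
the output matches the tcp_rmem[2] numbers I pasted); it is not a copy of
tcp_init(), only an illustration of why the two variants end up a factor
of 16 apart:

#include <stdio.h>

#define PAGE_SHIFT   12        /* assumed: amd64, 4KiB pages */
#define FREE_PAGES   499768UL  /* assumed nr_free_buffer_pages() result */

static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }
static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }

int main(void)
{
        unsigned long limit, max_share;

        /* "Buggy one" (Jason's patch): clamp first, then shift up */
        limit = FREE_PAGES / 8;
        limit = max_ul(limit, 128UL) << (PAGE_SHIFT - 7);
        max_share = min_ul(4UL * 1024 * 1024, limit);
        printf("buggy one:   tcp_rmem[2] = %lu\n", max_ul(87380UL, max_share));

        /* "Working one" (my flip to a right shift): much smaller limit */
        limit = FREE_PAGES >> (PAGE_SHIFT - 10);
        limit = max_ul(limit, 128UL);
        max_share = min_ul(4UL * 1024 * 1024, limit);
        printf("working one: tcp_rmem[2] = %lu\n", max_ul(87380UL, max_share));

        return 0;
}

This prints 1999072 and 124942, matching the two /proc dumps above: with
the patch the limit ends up as (pages / 8) << 5, with my flip it is
pages >> 2, hence the factor of 16.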

I didn't think of a server-side problem. I was running a 3.3-rc0 kernel there
from the kvm tree (v3.2-10396-g05ef4c6):
commit 05ef4c60568ed1740f65bf66a76da30b19060119
Author: Michael S. Tsirkin <mst@xxxxxxxxxx>
Date: Wed Jan 18 20:07:09 2012 +0200

kvm: fix error handling for out of range irq

from git://git.kernel.org/pub/scm/virt/kvm/kvm.git

Updating the server to current vanilla 3.3-rc6 solved the problem.
Are you interested in digging into that issue further to find the commit
that broke the server?

--

Sergei

Attachment: signature.asc
Description: PGP signature