smbfs: page write patch, dropping mounts, tunneling in 2.2.1

vherva@turing.netspan.fi
Wed, 17 Feb 1999 23:15:46 +0200


> From: Trond Myklebust <trond.myklebust@fys.uio.no>
> Date: 31 Jan 1999 11:30:08 +0100
> Subject: SMBfs: bug in 2.2.1 ?
>
> I've noticed what seems to be a bug in the SMBfs page write code. It seems
> that the code in generic_file_write has changed fairly recently, and
> it is now a bug for the 'updatepage' code to clear the page lock and
> serve the wait queue. Unfortunately nobody seems to have removed the
> call to 'smb_unlock_page(page)' in 'smb_writepage_sync'.
>
> Would people who've been seeing problems with the smbfs client in
> 2.2.1 please try the appended one-liner and see if it works?
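(The one-liner itself wasn't quoted above; my guess -- purely a reconstruction
from the description, the real patch may differ in file and context -- is that
it simply deletes the unlock call:

--- fs/smbfs/file.c.orig
+++ fs/smbfs/file.c
@@ smb_writepage_sync(...)
-	smb_unlock_page(page);

i.e. the updatepage path leaves the page lock alone, as the description above
says it now must.)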

I'm not aware of what kind of symptoms the bug should have caused, but it seems
to have fixed at least one on my 2.2.1-ac5. Before, the mounts would drop when
the NT peer closed the socket; after that, any access to the mount -- ls, cd,
umount, or even df -- would hang and refuse to obey SIGKILL. With the patch
applied, the mounts still drop, but the hanging processes can be SIGINT'ed.
Moreover, the processes seem to hang less often. I seem to remember a similar
improvement when updating from 2.0.34 to 2.0.36 -- perhaps there was a similar
change in the code back then?

The dropping mounts, however, were not cured. With 2.0.36 there is no problem
with NT (or any other SMB server) closing the socket -- smbfs seems to reconnect
transparently when needed. With 2.2.1, the mounts drop after a few minutes of
idle time, when NT closes the socket. (With Samba the problem does not show up,
at least not as often, since Samba does not close the socket all that often.)
I suppose this behaviour has something to do with the recent smbfs delayed
mount discussion. As I understood it, in 2.0 smbmount would fork a child that
did the remounting in the background whenever smbfs asked it to. Now that the
smbmount scheme has changed, this no longer happens -- am I correct?
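For what it's worth, this is how I picture the old scheme -- just a rough
sketch of the idea, not the real smbmount source; the names and the signal
are mine:

/*
 * Hypothetical sketch of the 2.0-era scheme as I understand it: the mount
 * helper forks, the child stays around in the background, and whenever
 * smbfs notices the server has closed the socket it pokes the child, which
 * opens a fresh TCP connection, redoes the SMB session setup and hands the
 * new fd back to the kernel, so the mount never looks dead to userspace.
 */
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t need_reconnect;

static void on_retry(int sig)
{
	(void)sig;
	need_reconnect = 1;	/* smbfs asked us to rebuild the connection */
}

int main(void)
{
	if (fork() > 0)
		return 0;	/* parent returns; child keeps the mount alive */

	signal(SIGUSR1, on_retry);
	for (;;) {
		pause();
		if (!need_reconnect)
			continue;
		need_reconnect = 0;
		/*
		 * connect() to the server again, redo the SMB negotiate and
		 * session setup, then pass the new socket to smbfs (via its
		 * reconnect ioctl -- details left out here).
		 */
	}
}

If 2.2's smbmount no longer leaves such a helper behind, that would explain
why nothing reconnects the socket for me.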

There's one more strange thing with smbfs. I use ssh to create a secure channel
through which I -- among other things -- mount shares with smbfs. If the
throughput without ssh is, say, 200KB/s, through ssh it is only 20-40KB/s. With
smbclient, however, the throughput only drops to something like 180KB/s. With
Sharity through ssh the performance is ~150KB/s. The slowdown is not specific to
ssh tunneling; tunneling through socket (the utility that creates a socket to a
given destination) had a similar effect. I seem to remember that NT showed a
similar loss of SMB performance when tunneled through ssh (which was a heck of a
battle to get working in the first place).

Obviously, there is something in the smbfs code that makes tunneling slow. I
took a quick look, but did not find anything. Could anyone point me to the
exact spot to suspect, so that I could compare it with the smbclient equivalent?

-- v --

--
Ville Herva   Ville.Herva@netspan.fi   +358-50-5164500
Netspan Oy    netspan@netspan.fi       PL 65  FIN-02151 Espoo    
              http://www.netspan.fi
For my PGP key, finger vherva@netspan.fi.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/