gnbd, the G* network block device.

Mike Tilstra (tilstra@lcse.umn.edu)
Wed, 29 Sep 1999 17:48:02 -0500



Ok, a bit of an announcement and a plea. Newly added to the GFS cvs: a
network block device of our own design. Read the attached file for some
tidbits, look over the source, and give opinions.

(Links for those on the outside:
gfs homepage: http://gfs.lcse.umn.edu/
gfs pub cvs: http://gfs.lcse.umn.edu/cgi-bin/cvsweb
or: http://gfs.lcse.umn.edu/software/cvs.html
)

Later,

-- 
Mike Tilstra                          tilstra@lcse.umn.edu
Every nonzero finite dimensional inner product space has an orthonormal
basis.

[Attached file: HowWhy]

This file
---------

Ok, this is an attempt to give the basics on why we are building our own network block device (nbd), and a bit on how it works.

The possible need for such a device comes from the desire to have gfs-mounted file systems over IP as well as FibreChannel. The least painful approach to this was to write an nbd and just put gfs on top of it. We are perfectly aware that there is an nbd in the current Linux distributions, and we looked it over many times. Given the features we wanted to add, we decided it would be simpler to start over. The current nbd wasn't that spectacular, and hasn't been touched in a while (at least as far as I'm aware).

I've named our version of an nbd `gnbd', where the `g' may stand for global or gfs or goo or some other `g' word, and the rest is just network block device.

Things we want in our block device.
-----------------------------------
(+ means it's done or started, - means untouched)

+Multiple clients connecting to a single server and sharing the same set of blocks. This is the same type of situation we find in FibreChannel or parallel SCSI, but over IP.

+Full 64bit devices. No small beans here.

+Automatic re-attaching to servers after errors. The foundation for auto re-attaching to a lost server has been set, and it works for controlled situations. (Uncontrolled situations have not been tested for various reasons including laziness.)

+Block sizes other than 1k. A performance thing, nice for those 64bit disks.

+Multiple IOs outstanding, with support for out-of-order completion. More performance stuff. (There's a rough sketch of what this means for the wire format just after this list.)

- Kernel-mode server.
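
Purely to make the 64bit and out-of-order items above concrete, here is
one way the wire format for such a device could look. The struct names,
field sizes, and magic/handle scheme are my own illustration, not gnbd's
actual protocol; check the cvs source for the real layout.

    /* Illustrative wire format only; not gnbd's actual protocol.
     * A 64bit offset covers the "full 64bit devices" item, and the
     * handle, echoed back in each reply, is what lets the server
     * complete requests in any order it likes. */
    #include <stdint.h>

    #define GNBD_READ  0
    #define GNBD_WRITE 1

    struct gnbd_request {
            uint32_t magic;   /* sanity check on the stream */
            uint32_t type;    /* GNBD_READ or GNBD_WRITE */
            uint64_t handle;  /* copied into the matching reply */
            uint64_t offset;  /* byte offset into the device */
            uint32_t length;  /* bytes to read or write */
    };                        /* write data follows a GNBD_WRITE */

    struct gnbd_reply {
            uint32_t magic;
            uint32_t error;   /* 0 on success */
            uint64_t handle;  /* identifies which request this answers */
    };                        /* read data follows a successful read */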

Random comments on how things work
----------------------------------

Currently gnbd has a kernel thread and a TCP socket for each opened block device. If either of these fails, the device stays open; this allows modification of some parameters via ioctl.
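
As a very rough sketch of the sort of per-device state that implies (the
names and fields here are mine, not the actual gnbd structures):

    /* Illustrative only; not the actual gnbd structures. Each opened
     * device carries its own socket and kernel thread, plus the
     * parameters that the ioctls mentioned below can change. */
    #include <linux/net.h>
    #include <linux/types.h>

    struct gnbd_device {
            struct socket *sock;        /* TCP connection to the server */
            pid_t          thread_pid;  /* kernel thread shuttling requests */
            __u32          server_ip;   /* set from userland via ioctl */
            __u16          server_port; /* likewise */
            int            connected;   /* the device stays open even when
                                         * this drops to 0, so the ioctls
                                         * can still reach it */
    };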

There is a short and simple userland program to set the IP and port, done with those ioctls.
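
Something along these lines is all that userland program needs to be.
The ioctl names and numbers here are placeholders I made up for the
example; the real ones live in the gnbd headers in cvs.

    /* Illustrative setup tool; the ioctl names/numbers are placeholders,
     * not the real gnbd ones. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <arpa/inet.h>

    #define GNBD_SET_IP   0x6701  /* placeholder ioctl numbers */
    #define GNBD_SET_PORT 0x6702

    int main(int argc, char **argv)
    {
            unsigned long ip, port;
            int fd;

            if (argc != 4) {
                    fprintf(stderr, "usage: %s <device> <ip> <port>\n", argv[0]);
                    return 1;
            }

            ip   = ntohl(inet_addr(argv[2]));  /* keep it in host byte order */
            port = strtoul(argv[3], NULL, 0);

            fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* hand the server's address to the kernel module */
            if (ioctl(fd, GNBD_SET_IP, ip) < 0 ||
                ioctl(fd, GNBD_SET_PORT, port) < 0) {
                    perror("ioctl");
                    close(fd);
                    return 1;
            }

            close(fd);
            return 0;
    }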

The current server was written in less than an hour to get work on the kernel module started. It sucks. (but works)

For all sorts of gory details, scan the source; the comments can be entertaining, and possibly informative.

