Tux3 Report: Our blocker list

From: Daniel Phillips
Date: Mon Feb 03 2014 - 20:39:29 EST


At the Korea Linux Forum last fall, Linus asked, haven't I been hearing
about Tux3 for ten years? I said, no, that was Tux2, completely
different. You have only been hearing about Tux3 for six years.

Well, that is about long enough to keep hearing about an out-of-tree
filesystem. The last time we talked about merging, the main criticisms
were stylistic things, long since fixed. After that, we decided to
address some glaring issues rather than let innocent victims hit them
the hard way. Though victims of a new filesystem are theoretically
limited to battle-hardened veterans, in practice it does not work out
that way. In reality, if all it takes is flipping a config flag, then
all kinds of people will try the code. If it then does stupid things,
it immediately acquires a reputation that could take years to shake.
Not fun. So we decided to fill in some holes first.

Here is our remaining blocker list:

1) Allocation policy: simple-minded linear block allocation is good
for benchmarks but ages poorly, so add a respectable allocation
policy.

2) Mmap consistency: mmap writes may interact with the block forking
caused by write(2) to leave stale pages in cache - fix it (a quick
userspace check of this is sketched just after this list).

3) ENOSPC: Volume-full conditions must be predicted by the frontend
rather than detected in the backend, where it is too late to enforce
ACID guarantees, and the prediction must be accurate or users will be
annoyed by ENOSPC errors on a volume that is far from full.
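
To make the stale-page problem in item 2 concrete, here is roughly the
kind of userspace check involved: write a page, map it shared, rewrite
it with write(2), and see whether the mapping still shows the old
bytes. This is only a sketch with a hypothetical mount point, not
something taken from the Tux3 test suite.

/* Toy write(2) vs mmap coherence check; the path below is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/tux3/coherence-test";
	char buf[4096];
	int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Write an initial page so there is something to map. */
	memset(buf, 'a', sizeof buf);
	if (pwrite(fd, buf, sizeof buf, 0) != (ssize_t)sizeof buf) {
		perror("pwrite");
		return 1;
	}

	char *map = mmap(NULL, sizeof buf, PROT_READ, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Rewrite the same page with write(2), the operation that forks the block. */
	memset(buf, 'b', sizeof buf);
	if (pwrite(fd, buf, sizeof buf, 0) != (ssize_t)sizeof buf) {
		perror("pwrite");
		return 1;
	}

	/* A stale page shows up as the old 'a' bytes through the mapping. */
	if (map[0] != 'b' || map[sizeof buf - 1] != 'b')
		printf("FAIL: mapping still shows stale data\n");
	else
		printf("ok: mapping sees the new data\n");

	munmap(map, sizeof buf);
	close(fd);
	return 0;
}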

After that, plenty of issues remain before anyone should deploy Tux3
for real work; however, none are in the "fill up your volume and it
eats itself" category. Items 1 and 2 above are nearly done and item 3
is designed in detail, so we are close to a flag day where we offer up
the Tux3 patch for serious review.
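
The design for item 3 boils down to keeping the space accounting in the
frontend: charge a pessimistic worst case against the free count before
accepting a change, and let the backend hand back whatever the commit
did not actually use. The toy sketch below only illustrates that shape
of accounting; the names and numbers are made up and none of it is Tux3
code.

/* Toy frontend reservation accounting; names and numbers are made up. */
#include <stdbool.h>
#include <stdio.h>

static long free_blocks = 1000;		/* blocks not yet promised to anyone */

/* Pessimistic bound on what one operation can consume. */
static long worst_case_cost(long data_blocks)
{
	return data_blocks + 4;		/* data plus a rough metadata bound */
}

/* Frontend: refuse the operation before dirtying anything if it might not fit. */
static bool reserve(long data_blocks)
{
	long cost = worst_case_cost(data_blocks);

	if (free_blocks < cost)
		return false;		/* ENOSPC is predicted here, up front */
	free_blocks -= cost;
	return true;
}

/* Backend: after commit, give back the part of the reservation not used. */
static void commit(long reserved_data_blocks, long blocks_used)
{
	free_blocks += worst_case_cost(reserved_data_blocks) - blocks_used;
}

int main(void)
{
	if (!reserve(10)) {
		puts("ENOSPC");
		return 1;
	}
	/* ... frontend dirties cache here, backend writes it out later ... */
	commit(10, 12);		/* the commit used 12 of the 14 reserved blocks */
	printf("free blocks now: %ld\n", free_blocks);
	return 0;
}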

You can watch our progress here:

https://github.com/OGAWAHirofumi/tux3/commits/hirofumi

and here:

http://buildbot.tux3.org:8010/waterfall

The second link is the amazing test infrastructure Hirofumi set up
using Buildbot and hardware contributed by Miracle Linux. It goes to
work whenever new patches arrive on GitHub. You can see it testing the
allocation patches that landed this weekend.

One thing that happened over the last couple of months is that we
added allocation group counts, thus adopting yet another main design
feature of Ext4. This required some new persistent metadata, with a
risk of regressing our benchmarks, but we will actually end up more
efficient, for reasons I will delve into on the Tux3 mailing list.
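
In essence, the group counts are per-group tallies of free blocks kept
as persistent metadata, so the allocator can steer new allocations
toward a group with room near its goal instead of scanning. Here is a
rough sketch of the idea, with made-up names and sizes rather than the
actual Tux3 structures:

/* Rough sketch of allocation group selection; names and sizes are made up. */
#include <stdio.h>

#define GROUP_BITS	13		/* made-up: 8192 blocks per group */
#define GROUPS		64		/* made-up volume size */

static unsigned group_free[GROUPS];	/* the per-group counts kept as metadata */

/* Pick the first group at or after the goal that still has free blocks. */
static int pick_group(unsigned long goal_block)
{
	unsigned goal = (unsigned)(goal_block >> GROUP_BITS) % GROUPS;

	for (unsigned i = 0; i < GROUPS; i++) {
		unsigned group = (goal + i) % GROUPS;
		if (group_free[group])
			return (int)group;
	}
	return -1;			/* truly out of space */
}

int main(void)
{
	for (unsigned i = 0; i < GROUPS; i++)
		group_free[i] = 100;	/* pretend volume state */
	group_free[3] = 0;		/* pretend group 3 has filled up */

	/* The goal falls in group 3, so the allocator moves on to group 4. */
	printf("allocate near block %lu -> group %d\n",
	       3UL << GROUP_BITS, pick_group(3UL << GROUP_BITS));
	return 0;
}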

Incidentally, the Tux3 kernel patch grew very little over the last
year. In spite of many improvements, we remain just over 18K lines of
code including whitespace. By comparison, Ext4 is 52K, Btrfs is 94K and
XFS is 96K. Though none of these can be reasonably described as
bloated, Tux3 is tighter by a multiple.

Overall, we tend to devote as much work to removing code as to adding
it. As a result, we think Tux3 upholds the traditional Unix Philosophy
pretty well. Though it is fashionable to attack this time-honored credo
on the basis of practicality, you can have orthogonal design and great
functionality too. We view lightness and tightness as a major
contribution, ranking just as high as performance and resilience. This
is about both maintainability and personal satisfaction.

We do expect our code base to grow faster as the focus shifts from
base functionality to features and scaling: snapshots, data
compression, directory indexing, online repair and quotas, to name a
few. But we also have opportunities to remove code, so a year from now
I expect a code base that is only modestly bigger and includes most of
this list.

Interested developers and testers are welcome to drop by for a chat:

http://tux3.org/contribute.html
irc.oftc.net #tux3

With most of the tedious groundwork out of the way, this is the fun
part of the process where we get to obsess endlessly over matters of
fit and finish.

Regards,

Daniel
