Re: xfs: does mkfs.xfs require fancy switches to get decent performance? (was Tux3 Report: How fast can we fsync?)

From: David Lang
Date: Tue May 12 2015 - 14:39:58 EST


On Mon, 11 May 2015, Daniel Phillips wrote:

> On Monday, May 11, 2015 10:38:42 PM PDT, Dave Chinner wrote:
> > I think Ted and I are on the same page here. "Competitive
> > benchmarks" only matter to the people who are trying to sell
> > something. You're trying to sell Tux3, but....

By "same page", do you mean "transparently obvious about
obstructing other projects"?

The "except page forking design" statement is your biggest hurdle
for getting tux3 merged, not performance.

No, the "except page forking design" is because the design is
already good and effective. The small adjustments needed in core
are well worth merging because the benefits are proved by benchmarks.
So benchmarks are key and will not stop just because you don't like
the attention they bring to XFS issues.

> > Without page forking, tux3
> > cannot be merged at all. But it's not filesystem developers you need
> > to convince about the merits of the page forking design and
> > implementation - it's the mm and core kernel developers that need to
> > review and accept that code *before* we can consider merging tux3.

> Please do not say "we" when you know that I am just as much a "we"
> as you are. Merging Tux3 is not your decision. The people whose
> decision it actually is are perfectly capable of recognizing your
> agenda for what it is.
>
> http://www.phoronix.com/scan.php?page=news_item&px=MTA0NzM
> "XFS Developer Takes Shots At Btrfs, EXT4"

Umm, Phoronix has no input on what gets merged into the kernel. They also have a reputation for trying to turn anything into click-bait by making it sound like a fight when it isn't.

> The real question is, has the Linux development process become
> so political and toxic that worthwhile projects fail to benefit
> from supposed grassroots community support? You are the poster
> child for that.

The Linux development process is making code available, responding to concerns from the experts in the community, and letting the code talk for itself.

There have been many people pushing code for inclusion that has not gotten into the kernel, or has not been used by any distros after it made it into the kernel, in spite of benchmarks being posted that seemed to show how wonderful the new code is. ReiserFS was one of the first, and part of what tarnished its reputation with many people was how hard they pushed benchmarks that were shown to be faulty. The one I remember most vividly was a benchmark that completed in under 30 seconds on a filesystem tuned to not start flushing data to disk for 30 seconds, so the entire 'benchmark' ran out of RAM without ever touching the disk.
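
That pitfall is easy to demonstrate for yourself. Here is a minimal sketch (plain C; the file name, write count, and sizes are arbitrary assumptions for illustration, not anything from the ReiserFS case) that times a burst of buffered writes and then the fsync() that forces them to media. If a benchmark finishes inside the writeback window, only the first number is being measured:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	memset(buf, 'x', sizeof(buf));

	int fd = open("bench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) { perror("open"); return 1; }

	struct timespec t0, t1, t2;
	clock_gettime(CLOCK_MONOTONIC, &t0);

	/* ~100MB of buffered writes: these normally land in the page
	 * cache and are flushed to disk later by writeback */
	for (int i = 0; i < 25600; i++)
		if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
			perror("write"); return 1;
		}
	clock_gettime(CLOCK_MONOTONIC, &t1);	/* cache-speed number */

	if (fsync(fd) != 0) { perror("fsync"); return 1; }
	clock_gettime(CLOCK_MONOTONIC, &t2);	/* honest number */

	double cached = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	double synced = (t2.tv_sec - t0.tv_sec) + (t2.tv_nsec - t0.tv_nsec) / 1e9;
	printf("buffered: %.2fs  through fsync: %.2fs\n", cached, synced);

	close(fd);
	return 0;
}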

So when Ted and Dave point out problems with the benchmark (the difference in behavior between a single spinning disk, different partitions on the same disk, SSDs, and ramdisks), you would be better off acknowledging the problems and, where you can, adjusting and re-running the benchmarks, rather than attacking the people who pointed them out.
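
The device dependence is easy to see directly. Here is a rough probe (again plain C, illustrative only; the default path and iteration count are made up for the example): run it against a file on each class of device and compare the per-fsync latencies:

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "probe.dat";
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) { perror("open"); return 1; }

	struct timespec t0, t1;
	double total = 0;
	const int iters = 100;
	char byte = 'x';

	for (int i = 0; i < iters; i++) {
		if (pwrite(fd, &byte, 1, 0) != 1) { perror("pwrite"); return 1; }
		clock_gettime(CLOCK_MONOTONIC, &t0);
		if (fsync(fd) != 0) { perror("fsync"); return 1; }	/* wait for media */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		total += (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	}

	/* a ramdisk, an SSD, and a spinning disk will give wildly
	 * different averages for exactly the same program */
	printf("%s: avg fsync %.3f ms over %d calls\n",
	       path, total / iters * 1e3, iters);
	close(fd);
	return 0;
}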

As Dave says above, it's not the other filesystem people you have to convince, it's the core VFS and memory management folks you have to convince. You may need a little benchmarking to show that there is a real advantage to be gained, but the real discussion is going to be about the impact that page forking has on everything else (both in complexity and in performance impact to other things).
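
For readers who haven't followed the tux3 threads: page forking means that when a write hits a page that is pinned by in-flight writeback, the write goes to a fresh copy of the page instead of blocking or scribbling on the buffer the I/O is reading from. A very rough user-space sketch of the idea (this is not tux3's actual kernel code; the struct and function names are invented for illustration):

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page_buf {
	unsigned char data[PAGE_SIZE];
	bool under_writeback;	/* snapshot pinned by in-flight I/O */
};

/* Called when a write lands on a page that may be under writeback.
 * If the page is pinned, fork it: the I/O path keeps seeing a stable
 * image while the writer modifies the new copy. */
static struct page_buf *fork_for_write(struct page_buf *page)
{
	if (!page->under_writeback)
		return page;	/* safe to modify in place */

	struct page_buf *fork = malloc(sizeof(*fork));
	if (!fork)
		return NULL;
	memcpy(fork->data, page->data, PAGE_SIZE);
	fork->under_writeback = false;
	return fork;		/* caller must rehome the mapping */
}

int main(void)
{
	struct page_buf page = { .under_writeback = true };
	memcpy(page.data, "hello", 6);

	struct page_buf *writable = fork_for_write(&page);
	if (writable && writable != &page) {
		/* the in-flight I/O still sees "hello" in page.data
		 * while the writer modifies the forked copy */
		memcpy(writable->data, "world", 6);
		free(writable);
	}
	return 0;
}

The hard part, and the part the mm folks will care about, is that last "rehome the mapping" comment: in the kernel the mapping is the page cache, and every path that can hold a reference to the old page has to cope with the fork.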

> > IOWs, you need to focus on the important things needed to achieve
> > your stated goal of getting tux3 merged. New filesystems should be
> > faster than those based on 20-25 year old designs, so you don't need
> > to waste time trying to convince people that tux3, when complete,
> > will be fast.

> You know that Tux3 is already fast. Not just that, of course. It
> has a higher standard of data integrity than your metadata-only
> journalling filesystem and a small enough code base that it can
> be reasonably expected to reach the quality expected of an
> enterprise class filesystem, quite possibly before XFS gets
> there.

We wouldn't expect anyone developing a new filesystem to believe any differently. If they didn't believe this, why would they be working on the filesystem instead of just using an existing one?

The ugly reality is that everyone's early versions of their new filesystem look really good. The problem comes when they extend it to cover the corner cases and when it gets stressed by real-world (as opposed to benchmark) workloads. This isn't saying that you are wrong in your belief, just that you may not be right, and nobody will know until you get to a usable state and other people can start beating on it.

David Lang