Re: POSIX violation by writeback error

From: Theodore Y. Ts'o
Date: Tue Sep 25 2018 - 18:31:08 EST


On Tue, Sep 25, 2018 at 12:41:18PM -0400, Jeff Layton wrote:
> That's all well and good, but still doesn't quite solve the main concern
> with all of this. Suppose we have this series of events:
>
> open file r/w
> write 1024 bytes to offset 0
> <background writeback that fails>
> read 1024 bytes from offset 0
>
> Open, write and read are successful, and there was no fsync or close in
> between them. Will that read reflect the result of the previous write or
> no?

If the background writeback hasn't happened, Posix requires that the
read returns the result of the write. And the user doesn't know when
or if the background writeback has happened unless the user calls
fsync(2).
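The read-after-write behavior described above can be sketched with a small
C test. This is an illustrative sketch, not from the original mail; the
scratch file path passed in is a hypothetical example, and the helper name
write_then_read is my own. It writes 1024 bytes at offset 0 and reads them
back with no fsync(2) in between; the read is served from the page cache and
returns the written data regardless of whether background writeback has run
(or failed) yet.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write 1024 bytes at offset 0, then read them back without calling
 * fsync(2).  Returns 1 if the read observes the write, 0 if it does
 * not, and -1 on a syscall failure. */
int write_then_read(const char *path)
{
    char out[1024], in[1024];
    memset(out, 'x', sizeof(out));

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;

    if (pwrite(fd, out, sizeof(out), 0) != (ssize_t)sizeof(out)) {
        close(fd);
        return -1;
    }

    /* No fsync(2) here: the data may still be dirty in the page
     * cache, and the application has no way to know whether background
     * writeback has happened, let alone whether it succeeded. */
    if (pread(fd, in, sizeof(in), 0) != (ssize_t)sizeof(in)) {
        close(fd);
        return -1;
    }

    close(fd);
    unlink(path);
    return memcmp(out, in, sizeof(out)) == 0;
}
```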

Posix in general basically says anything is possible if the system
fails or crashes, or is dropped into molten lava, etc. Do we say that
Linux is not Posix compliant if a cosmic ray flips a few bits in the
page cache? Hardly! The *only* time Posix makes any guarantees is if
fsync(2) returns success. So the subject line is, in my opinion,
incorrect. The moment we are worrying about storage errors, and the
user hasn't used fsync(2), Posix is no longer relevant for the
purposes of the discussion.

> The answer today is "it depends".

And I think that's fine. The only way we can make any guarantees is
if we do what Alan suggested, which is to require that a read of a
dirty page *block* until the page is successfully written back. This
would destroy performance. I know I wouldn't want to use such a
system, and if someone were to propose it, I'd strongly argue for a
switch to turn it *off*, and I suspect most system administrators would
turn it off once they saw what it did to system performance. (As a
thought experiment, think about what it would do to kernel compiles.
It means that before you link the .o files, you would have to block
and wait for them to be written to disk so you could be sure the
writeback would be successful. **Ugh**.)

Given that many people would turn such a feature off once they saw
what it does to their system performance, applications in general
couldn't rely on it, which means applications that cared would have to
do what they should have done all along. If it's precious data use
fsync(2). If not, most of the time things are *fine* and it's not
worth sacrificing performance for the corner cases unless it really is
ultra-precious data and you are willing to pay the overhead.
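For precious data, "use fsync(2)" concretely means checking the return value
of every call in the write path, since fsync(2) is where a writeback error is
actually reported to the application. A minimal sketch (my own helper name,
not from the original mail) might look like:

```c
#include <fcntl.h>
#include <unistd.h>

/* Durably write len bytes to path: open, write in a loop, fsync, and
 * check every return value.  Returns 0 on success, -1 on any failure. */
int durable_write(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;

    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            close(fd);
            return -1;
        }
        p += n;
        left -= (size_t)n;
    }

    /* This is the only point where POSIX guarantees that a storage
     * error in writeback becomes visible to the application. */
    if (fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```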

- Ted