Nope, I think you're missing my point. I agree that programs should not
rely on, e.g., the reported size to set up their buffer when doing a copy;
they would be wrong to do so. I agree 100% with that.
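(To make that concrete, here is a quick sketch of mine, not code from any
real tool: copy_fd is a made-up helper that simply reads until EOF and
never consults st_size, so it works even when stat() reports 0.)

    #include <unistd.h>

    /* Hypothetical helper: copy everything from src to dst by reading
     * until EOF, never looking at st_size. Works even when stat()
     * reports a size of 0, as /proc files do. */
    static int copy_fd(int src, int dst)
    {
            char buf[4096];
            ssize_t n;

            while ((n = read(src, buf, sizeof(buf))) > 0) {
                    char *p = buf;
                    while (n > 0) {
                            ssize_t w = write(dst, p, n);
                            if (w < 0)
                                    return -1;      /* write error */
                            p += w;
                            n -= w;
                    }
            }
            return n;       /* 0 on EOF, -1 on read error */
    }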
> relationship of the real size to the reported size. It is not sane to
> generate true sizes for the /proc files (we'd have to generate all the
> contents, which is hardly efficient).

Also true. Real sizes would be nice, but they are in general too expensive
and not necessary (though it could be done for more files than it is now).

> Thus, a reported size of 0 is preferable. That way, if you rely on
> reading the file size, you will *always* get wrong results. This makes
> buggy programs obvious and repeatable, which is much preferable to
> covering up the problem.
Not true. 0 is the ONLY size that fully determines the file contents: a
size-0 file must be empty. All I'm asking for is ANY size different from
0; 1 would work perfectly.
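(A throwaway illustration of mine, not from this thread, of the mismatch:
stat() reports 0 for a /proc file that read() then happily fills with
data.)

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
            struct stat st;

            /* /proc/version has real contents, yet its size is
             * reported as 0. */
            if (stat("/proc/version", &st) == 0)
                    printf("reported size: %lld\n",
                           (long long)st.st_size);
            return 0;
    }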
> In this case, the NFS server that skips the read of a zero-byte file is
> the buggy program, and the current implementation of /proc makes this
No, it's the size of 0 that's buggy. Why shouldn't a program that sees a
zero-size file conclude it's empty? That's the plain, normal semantics of
size 0.
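(For illustration, a sketch of mine of the kind of shortcut under
discussion; file_is_empty is a hypothetical helper, not the NFS server's
actual code. Under normal Unix semantics it is perfectly legitimate, and
it only misfires because /proc lies.)

    #include <sys/stat.h>

    /* Hypothetical helper: by normal semantics, st_size == 0 means
     * the file is empty and there is nothing to read. On /proc this
     * reports "empty" for files that actually have contents. */
    static int file_is_empty(const char *path)
    {
            struct stat st;

            if (stat(path, &st) < 0)
                    return -1;
            return st.st_size == 0;
    }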
> [to recap: we're not talking about 'special file sizes' -- we're talking
> about repeatable behaviour.]
Nope, we're talking about a buggy /proc that screws up the semantics of
size 0.