I am definitely voting against an infinite number of retries. I'm
working on FhGFS, which supports distributed metadata servers. So when
a file is moved between directories, its file handle, which contains
the metadata target ID, might become invalid. As NFSv3 is stateless,
we cannot inform the client about that and must return ESTALE.
NFSv4 is better, but I'm not sure how well invalidating a file
handle works. So retrying once on ESTALE might be a good idea, but
retrying forever is not.
It's important to note that I'm only proposing to wrap syscalls that
take a pathname argument this way. We can't do anything about those
that don't, since at that point we have no way to retry the lookup.
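
To make that concrete, here's a minimal userspace-flavoured sketch of
"retry once on ESTALE" for a pathname-based call. stat_retry_once()
and the choice of stat() are purely illustrative, not the actual
patch:

#include <errno.h>
#include <sys/stat.h>

int stat_retry_once(const char *path, struct stat *st)
{
	int ret = stat(path, st);

	/* We still have the pathname, so one fresh lookup can recover
	 * from a handle that went stale but was fixed up server-side.
	 * Looping forever here would hang on a permanently stale
	 * export, which is exactly the objection above. */
	if (ret == -1 && errno == ESTALE)
		ret = stat(path, st);

	return ret;
}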
So, I'm not sure this patch would affect the case you're concerned
about one way or another. If you move the file to a different
directory, then its pathname would also change, and at that point
you'd end up with an ENOENT error or something on the next retry.
If the file was open and you were (for instance) reading or writing to
it from a client when you moved it, then we can't retry the lookup at
that point. The open is long since done and the pathname is now gone.
You'll get an ESTALE back in userspace regardless.
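
If the application itself kept the pathname around, it can do what
the kernel can't at that point: re-open and retry. A hedged userspace
sketch (read_reopen_on_estale() is hypothetical, and as noted above
the saved path may well be wrong after the move):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

ssize_t read_reopen_on_estale(const char *path, int *fd,
			      void *buf, size_t len)
{
	ssize_t n = read(*fd, buf, len);

	if (n == -1 && errno == ESTALE) {
		/* Only the application still knows a pathname; the
		 * kernel dropped it when the open completed. Note the
		 * new fd starts at offset 0, so a real caller would
		 * also have to lseek() back to its old position. */
		close(*fd);
		*fd = open(path, O_RDONLY);
		if (*fd == -1)
			return -1;
		n = read(*fd, buf, len);
	}
	return n;
}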
Also, what about asymmetric HA servers? I seem to remember that those
also resulted in ESTALE. So, for example, server1 exports /home and
/scratch, but on failover server2 can only take over /home and denies
access to
/scratch.
That sounds like a broken cluster configuration. Still...
Presumably at some point in the future, a sysadmin would intervene and
fix the situation such that /scratch is available again. Is it better
to return an error to the application at that point, or simply allow it
to keep retrying until the problem has been fixed?
The person with the long-running job that's doing operations
in /scratch would probably prefer the latter. If not, then they could
always send the program a fatal signal to stop it altogether.
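
In other words, the behaviour being argued for would look roughly
like this kernel-flavoured sketch, where do_the_lookup_and_op() is a
placeholder for redoing the path walk plus the operation (not a real
kernel function), and a fatal signal is the escape hatch:

#include <linux/errno.h>
#include <linux/sched.h>

int do_the_lookup_and_op(void);	/* placeholder, not a real kernel API */

static int retry_estale_op(void)
{
	int ret;

	do {
		ret = do_the_lookup_and_op();
		/* Keep retrying across the outage so the long-running
		 * job survives, but let SIGKILL & co. break the loop. */
	} while (ret == -ESTALE && !fatal_signal_pending(current));

	return ret;
}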