On 9/20/05, Hans Reiser <reiser@xxxxxxxxxxx> wrote:
> Another goal of the group should be to formulate a requested set of
> changes or extensions to the makers of drives and other storage
> systems. For example, it might be advantageous to be able to disable
> bad block relocation and allow the filesystem to perform the action.
> The reason for this is because relocates slaughter streaming read
> performance, but the filesystem could still contiguously allocate
> around them...

I think that would be a bad idea - that is how drives used to work,
and it made the higher-level file system code handle odd stuff.
> Perhaps a more implementable alternative is just a method to find out
> which sectors have been relocated so the data can be moved off of them
> and they be avoided. (and potentially they be 'derelocated' to
> preserve the relocation space)

I think that this kind of information is already at hand via SMART,
etc. You could write an application to query this data, but doing the
reverse mapping from block number to file is not easy (i.e., you need
to FIBMAP each file in the file system in order to construct the
mapping).
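As a rough illustration of the "already at hand via SMART" point, here is a small sketch that pulls the reallocated-sector count out of `smartctl -A` output. The attribute-table layout shown is the usual smartmontools format, but column positions can vary by drive, and the sample text here is made up:

```python
# Sketch: extract Reallocated_Sector_Ct from "smartctl -A" style output.
# The sample table below is fabricated for illustration.

def reallocated_sectors(smart_output: str) -> int:
    """Return the raw value of the Reallocated_Sector_Ct attribute, or 0."""
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])  # RAW_VALUE is the tenth column
    return 0

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       17
"""
print(reallocated_sectors(sample))
```

Note this only tells you how many sectors were remapped, not which ones - which is exactly why the reverse mapping problem above remains.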
> Ditto for other layers.. if a filesystem has some internal integrity
> function and a raid sweep has found that the parity doesn't agree, it
> would be nice if the FS could check all possible decodings and see if
> there is one that is more sane than all the others... This is even
> more useful when you have raid-6 and there is a lot more potential
> decoding.

One thing we do on our boxes is to run a sweep program that issues a
"read-verify" command, which lets us flag bad sectors on the platter
without transferring data, polluting caches, etc. A second, repair
phase goes in and pokes at the suspect sectors, trying to force a
remap. If you have the original data (as in the RAID case), you can
rewrite the sector and all is well. If not, you need to unmount,
re-fsck, and try to revalidate the contents of individual files (this
is where digital signatures come in handy).
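The "check all possible decodings" idea can be sketched for plain RAID-5 XOR parity: when parity disagrees, assume each member in turn is the bad one and reconstruct it from the others; a filesystem-level sanity check would then pick the most plausible candidate. This is only a toy model of the proposal, not any existing md/RAID interface:

```python
# Toy RAID-5 model: enumerate every candidate decoding of a stripe
# whose parity does not check out.

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def candidate_decodings(members):
    """members = data blocks plus the parity block; one is suspect.
    Yield (bad_index, reconstructed_block) for each possibility."""
    for bad in range(len(members)):
        others = [m for i, m in enumerate(members) if i != bad]
        yield bad, xor_blocks(others)

data = [b"hell", b"o wo", b"rld!"]
parity = xor_blocks(data)
members = [data[0], b"XXXX", data[2], parity]  # member 1 corrupted
for bad, block in candidate_decodings(members):
    print(bad, block)
```

Only the decoding that assumes member 1 is bad reproduces sensible data; with RAID-6 the candidate set grows, which is the point made above.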
> Also things like bubbling up to userspace.. If there is an
> unrecoverable read error in a file found during operation or an
> automated scan, it should show up in syslog with some working complete
> path to the file (as canonical as the fs can provide), and hopefully
> an offset to the error. Then my package manager could see if this is a
> file replaceable out of a package or if it's user data... If it's user
> data, my backup scripts can check the access time on the file and
> silently restore it from backup if the file hasn't changed. ... only
> leaving me an email "Dear operator, I saved your butt yet again
> --love, mr computer"

Good idea, but we don't have that reverse mapping at hand for most
file systems.
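The backup-script policy quoted above could look roughly like this. The log line format, the path, and the decision hook are all invented for the sketch - no kernel emits this message today:

```python
# Hypothetical sketch of the restore-from-backup policy: parse a
# made-up syslog line reporting an unrecoverable read error, then
# decide whether the file can be silently restored.
import re

LOG_RE = re.compile(
    r"unrecoverable read error: path=(?P<path>\S+) offset=(?P<off>\d+)")

def plan_action(log_line, backup_mtime, current_mtime):
    """Return ('restore', path) if the file is unchanged since backup,
    ('notify-operator', path) if it changed, or None if no match."""
    m = LOG_RE.search(log_line)
    if not m:
        return None
    if current_mtime <= backup_mtime:
        return ("restore", m.group("path"))
    return ("notify-operator", m.group("path"))

line = "kernel: unrecoverable read error: path=/home/me/thesis.tex offset=40960"
print(plan_action(line, backup_mtime=200, current_mtime=100))
```

The whole scheme hinges on the kernel providing a canonical path and offset - which, as noted above, needs a reverse mapping most file systems don't keep.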
> And finally operator policy.. I'd like corrupted user files to become
> permission denied until I run some command to make them accessible;
> don't let my apps hang trying to access them..
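A crude userspace approximation of that quarantine policy is to strip all permission bits from a file flagged as corrupted, so opens fail with EACCES instead of applications hanging on bad sectors. This is only a sketch; a real implementation would live in the filesystem, and the function names here are invented:

```python
# Sketch: "permission denied until the operator clears it" via chmod.
import os
import stat
import tempfile

def quarantine(path):
    os.chmod(path, 0)  # no permission bits: opens fail with EACCES

def release(path):
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # operator re-enables

# Demonstrate on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
quarantine(path)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))
release(path)
os.unlink(path)
```

Note chmod does not stop root, and it races with processes that already hold the file open - which is why the request above is for the filesystem itself to enforce this.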