On Wed, Jul 23, 2014 at 07:10:32AM -0700, Howard Chu wrote:
> Matthew Wilcox wrote:
> > One of the primary uses for NV-DIMMs is to expose them as a block device
> > and use a filesystem to store files on the NV-DIMM. While that works,
> > it currently wastes memory and CPU time buffering the files in the page
> > cache. We have support in ext2 for bypassing the page cache, but it
> > has some races which are unfixable in the current design. This series
> > of patches rewrites the underlying support and adds direct access
> > support to ext4.
>
> This is an awful lot of work to go thru just to get a glorified ext4
> RAMdisk. RAMdisks are one of the worst possible uses for RAM, requiring
> users to explicitly copy files to them before getting any benefit. Using RAM
> for a page cache instead brings benefits to all file accesses without
> requiring any user intervention.

Perhaps you misunderstand the problem. There are many different kinds
of NV-DIMM out there today with different performance characteristics.
One that has been described to me has write times 1000x slower than read
times. In that situation, you can't possibly "just use it as page cache";
you need to place the read-often, write-rarely files on that media.

> If the NVDIMM range was reserved for exclusive use of the page cache, then
> you would have an avenue to get persistence/safety for every filesystem
> mounted on a machine, not just a special case ext4.

No, you wouldn't; you'd also need a mechanism to store the state of the
page cache persistently.  And you have to make sure that the filesystem
does the appropriate cache invalidations.  By going the route here, we can
use the existing caching mechanisms (e.g. FS-Cache), which have solved all
the hard problems of making sure that local caches are coherent with
storage.
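
For anyone following along, here is a minimal userspace sketch of what the
direct-access path buys you.  It is not taken from the patch series itself;
it assumes an ext4 filesystem created on a pmem block device and mounted
with the DAX support proposed here, and the mount point and file name are
made up for illustration.  The point is that the mmap() maps the NV-DIMM
pages straight into the process, so loads and stores reach the media
without an intermediate page-cache copy.

/*
 * Hypothetical example: /mnt/pmem is assumed to be an ext4 filesystem
 * on an NV-DIMM, mounted with the direct-access support from this series.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4096;
	int fd = open("/mnt/pmem/data", O_CREAT | O_RDWR, 0644);

	if (fd < 0 || ftruncate(fd, len) < 0) {
		perror("open/ftruncate");
		return 1;
	}

	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Store directly; with DAX no page-cache page backs this mapping. */
	strcpy(p, "hello, pmem");

	/* msync() still asks the filesystem to make the data durable. */
	msync(p, len, MS_SYNC);

	munmap(p, len);
	close(fd);
	return 0;
}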