On Mon, Sep 14, 2009 at 07:50:56PM +0200, Jan Kara wrote:
> I've found some time to look into this and I can see a few problems in
> the code. Firstly, what may cause your problems:
> vfs_dq_claim_blocks() is called in ext4_mb_mark_diskspace_used(). But
> as far as I can understand the code, ext4_mb_normalize_request() can
> increase the amount of space we really allocate and thus we try to
> allocate more blocks than we have actually reserved in quota. Aneesh, is
> that right?

ext4_mb_mark_diskspace_used() uses ac->ac_b_ex.fe_len, which is NOT the
normalized request length; it is min(allocated_len, original_len). So I guess
that code should be safe.
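
To make that concrete, here is a minimal stand-alone sketch (not the mballoc
code itself; the struct and field names below are made up for illustration) of
why the length charged to quota can never exceed the original, already-reserved
request, even when normalization grows the allocation goal:

#include <stdio.h>

/*
 * Toy allocation context, standing in for the real one: the caller's
 * original request (for which quota was reserved), the normalized goal,
 * and the extent length the allocator actually found.
 */
struct toy_ac {
	unsigned int orig_len;	/* blocks requested and reserved in quota */
	unsigned int goal_len;	/* normalized goal, possibly larger       */
	unsigned int found_len;	/* blocks the allocator came back with    */
};

/* Length charged to quota: clamped back to the original request. */
static unsigned int claimed_len(const struct toy_ac *ac)
{
	return ac->found_len < ac->orig_len ? ac->found_len : ac->orig_len;
}

int main(void)
{
	/* Normalization grew a 3-block request to a 16-block goal and the
	 * allocator found all 16, but only 3 blocks are claimed in quota. */
	struct toy_ac ac = { .orig_len = 3, .goal_len = 16, .found_len = 16 };

	printf("claimed %u of %u reserved blocks\n", claimed_len(&ac), ac.orig_len);
	return 0;
}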
> Secondly, ext4_da_reserve_space() seems to have a bug that it can reserve
> quota blocks multiple times if ext4_claim_free_blocks() fails and we retry
> the allocation. We should release the quota reservation before restarting.
> Actually, when we find out we cannot reserve quota space, we could force
> some delayed allocated writes to disk (thus possibly releasing some quota
> in case we have overestimated the amount of blocks needed). But that's
> a different issue.

That would imply the file system was full. But the dumpe2fs output lists a
large number of free blocks. But yes, the code should have released the
quota reservation before trying the block reservation again.
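
For reference, a minimal user-space sketch of the retry pattern being
discussed (the helpers below are toy stand-ins, not the real ext4/quota
functions): without the release before the goto, every failed pass through the
loop leaves one more reservation charged to quota.

#include <errno.h>

/* Toy per-inode state standing in for the real quota and filesystem counters. */
struct toy_inode {
	long quota_reserved;	/* blocks currently reserved against quota */
	long fs_free_blocks;	/* free blocks the filesystem can promise  */
};

static int reserve_quota(struct toy_inode *inode, long nblocks)
{
	inode->quota_reserved += nblocks;	/* assume the quota check passes */
	return 0;
}

static void release_quota(struct toy_inode *inode, long nblocks)
{
	inode->quota_reserved -= nblocks;
}

static int claim_fs_blocks(struct toy_inode *inode, long nblocks)
{
	if (inode->fs_free_blocks < nblocks)
		return -ENOSPC;
	inode->fs_free_blocks -= nblocks;
	return 0;
}

/*
 * Delalloc-style reservation with a retry: the release_quota() call is the
 * step the current code is missing; dropping it makes quota_reserved grow
 * by nblocks on every failed attempt.
 */
int da_reserve_space_sketch(struct toy_inode *inode, long nblocks,
			    int max_retries)
{
	int tries = 0;

repeat:
	if (reserve_quota(inode, nblocks))
		return -EDQUOT;

	if (claim_fs_blocks(inode, nblocks)) {
		release_quota(inode, nblocks);	/* undo before retrying */
		if (tries++ < max_retries)
			goto repeat;
		return -ENOSPC;
	}
	return 0;
}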
> Thirdly, ext4_indirect_calc_metadata_amount() is wrong for sparse files.
> The worst case is 3 metadata blocks per data block if we make the file
> sufficiently sparse and there's no easy way around that...

-aneesh