Re: EXT4-ish "fixes" in UBIFS

From: Trenton D. Adams
Date: Thu Apr 02 2009 - 23:13:40 EST


On Thu, Apr 2, 2009 at 8:58 PM, David Rees <drees76@xxxxxxxxx> wrote:
> On Thu, Apr 2, 2009 at 7:28 PM, Trenton D. Adams
>> That's the odd thing, I was setting them to 2 and 1.  I was just
>> looking at the 2.6.29 code, and it should have made a difference.  I
>> don't know what version of the kernel I was using at the time.  And,
>> I'm not sure if I had the 1M fsync tests in place at the time either,
>> to be sure about what I was testing.  It could be that I wasn't being
>> very scientific about it at the time.  Thanks though, that setting
>> makes a huge difference.
>
> Well, it depends on how much memory you have.  Keep in mind that those
> are percentages - so if you have 2GB RAM, that's the same as setting
> it to 40MB and 20MB respectively - both are a lot larger than the 1M
> you were setting the dirty*bytes vm knobs to.
>
> I've got a problematic server with 8GB RAM.  Even if I set both to 1,
> that's 80MB, and the crappy disks I have in it will often only write
> 10-20MB/s or less due to the seekiness of the workload.  That means
> delays of 5-10 seconds worst case, which isn't fun.
>
> -Dave
>

Yeah, I just finished doing the calculation. :P 40M is what I'm
seeing. Yeah, that sounds like the same as my problem. Even setting
dirty_bytes to 10M still has a very serious latency problem. I'm glad
that option was added, because 1M works much better. I'll have to
change my shell script to tune it dynamically, because under
normal load I want the 40M+ of queueing. It's just when things get
really heavy, and stuff starts getting flushed, that this problem
starts happening.
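FWIW, a dynamic tuning script along those lines might look like the
sketch below. The sysctl knobs (vm.dirty_bytes) are real; the load
threshold and the two size limits are just hypothetical values for
illustration, and it prints the sysctl command rather than running it,
since applying it needs root:

```shell
#!/bin/sh
# Sketch: pick vm.dirty_bytes based on load (thresholds are assumptions).

NORMAL_DIRTY_BYTES=$((40 * 1024 * 1024))   # ~40M, i.e. 2% of 2GB as above
HEAVY_DIRTY_BYTES=$((1 * 1024 * 1024))     # 1M, the value that worked well

# Integer part of the 1-minute load average from /proc/loadavg.
load=$(cut -d ' ' -f 1 /proc/loadavg | cut -d '.' -f 1)

# Hypothetical policy: clamp down hard once load passes 4.
if [ "$load" -ge 4 ]; then
    dirty=$HEAVY_DIRTY_BYTES
else
    dirty=$NORMAL_DIRTY_BYTES
fi

# Print the command; as root you would run it directly instead.
echo "sysctl -w vm.dirty_bytes=$dirty"
```

A cron job or a loop with sleep could re-run this every few seconds so
the limit drops before the flush storm hits.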
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/