Re: I/O issues, iowait problems, 2.4 v 2.6

From: Paul Venezia
Date: Tue Nov 11 2003 - 00:07:32 EST


On Mon, 2003-11-10 at 23:54, Andrew Morton wrote:

> > 0 10 0 1146444 18940 286856 0 0 0 2106 21450 25860 4 14 37 45
>
> OK, the IO rates are obviously very poor, and the context switch rate is
> suspicious as well. Certainly, testing with the single disk would help.

I'll get to that as soon as I can.

>
> But. If the workload here was a simple dd of /dev/zero onto a regular
> file then why on earth is the pagecache size not rising?

This vmstat output was shot when I first noticed the problem; the
nbench tests were running at the time. It seems to show the same
pattern as the trace below.

> Could you please
> do:
>
> rm foo
> cat /dev/zero > foo
>
> and rerun the `vmstat 1' trace? Make sure that after the big initial jump,
> the `cache' column is increasing at a rate equal to the I/O rate. Thanks.

When I first ran this test, I killed it after 45s or so because the vmstat
output didn't look right, and then deleted the sample file. The file
disappeared from the directory, but the rm didn't exit in a timely fashion:
the CPUs were at 100% iowait, the load was rising, and vmstat was showing a
consistent pattern of 5056 blocks out every two seconds.
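
For next time, here's the sequence in one place so I can capture a full
log while it runs (the log filename and the /proc/meminfo loop are my
additions to Andrew's commands; Dirty/Writeback are the 2.6 meminfo
fields):

  # rerun the test with vmstat logged alongside it
  rm -f foo
  vmstat 1 > vmstat.log &
  cat /dev/zero > foo

  # in another shell: watch how much data is dirty vs. under writeback
  while sleep 2; do grep -E '^(Dirty|Writeback):' /proc/meminfo; done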

I rebooted and shot these, starting 5 seconds before the cat:

 r  b  swpd   free   buff  cache  si  so  bi    bo   in   cs us sy id wa
0 0 0 1474524 7084 42420 0 0 0 0 1033 47 0 0 100 0
0 0 0 1474524 7084 42420 0 0 0 0 1031 38 0 0 100 0
0 0 0 1474524 7084 42420 0 0 0 0 1016 12 0 0 100 0
1 0 0 1373716 7184 140376 0 0 0 0 1020 14 0 10 90 0
1 2 0 1166548 7392 341652 0 0 8 18836 1028 56 0 21 43 36
1 2 0 994132 7556 509312 0 0 4 1696 1030 63 0 17 27 56
1 2 0 867732 7684 632264 0 0 4 2400 1033 65 0 12 27 60
0 3 0 817748 7732 680700 0 0 4 9632 1033 66 0 5 27 67
0 4 0 817748 7732 680700 0 0 0 0 1029 47 0 0 25 75
2 2 0 817748 7732 680700 0 0 0 5372 1032 48 0 0 25 75
0 4 0 810324 7740 688104 0 0 0 104 1032 49 0 1 25 74
0 4 0 810324 7740 688104 0 0 0 0 1029 48 0 0 25 75
0 4 0 810324 7740 688104 0 0 0 4892 1038 54 0 0 25 75
0 4 0 810324 7740 688104 0 0 0 0 1024 46 0 0 25 75
0 4 0 793492 7756 704544 0 0 0 9952 1033 52 0 2 25 73
0 4 0 793492 7756 704544 0 0 0 0 1032 48 0 0 25 75
0 4 0 793428 7756 704544 0 0 0 0 1031 48 0 0 25 75
0 4 0 793428 7756 704544 0 0 0 0 1028 52 0 0 25 75
0 4 0 768276 7780 729136 0 0 0 4996 1032 51 0 2 25 72
0 4 0 768276 7780 729136 0 0 0 0 1035 46 0 0 25 75
0 4 0 768276 7780 729136 0 0 0 4892 1026 50 0 0 25 75
0 4 0 768276 7780 729136 0 0 0 0 1037 46 0 0 25 75
0 4 0 763988 7784 733212 0 0 0 5060 1032 56 0 0 25 75
0 4 0 763988 7784 733212 0 0 0 0 1032 46 0 0 25 75
0 4 0 763988 7784 733212 0 0 0 4892 1033 48 0 0 25 75
0 4 0 763988 7784 733212 0 0 0 0 1029 50 0 0 25 75
0 4 0 751316 7796 745508 0 0 0 5060 1039 52 0 1 25 74
0 4 0 751316 7796 745508 0 0 0 0 1025 52 0 0 25 75

Very similar.
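
If I'm reading the columns right (bo is in 1K blocks), ~5000 blocks
every other second is only about 2.5MB/s going to disk, and after the
initial burst the cache column grows at roughly that rate too. A quick
awk over the log averages it (a sketch, assuming the 16-column layout
above, where bo is field 10):

  awk '$1 ~ /^[0-9]+$/ { bo += $10; n++ }
       END { if (n) printf "avg writeout: %.0f KB/s\n", bo / n }' vmstat.log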

-Paul
