Re: Patch for block write clustering

Emil Briggs (briggs@bucky.physics.ncsu.edu)
Thu, 5 Mar 1998 01:17:36 -0500 (EST)


> ... (I can post the benchmark program I used if anyone wants to try it
> themselves)
>
> time ( du ~squid/cache/ >/dev/null ; sync )
>
>
>(with atime updates enabled) - Perhaps a bit contrived?
>
>

Here's mine. Just adjust MAX_LOOPS so that the files end up several times
the size of your physical memory and bdflush will get a good workout.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 1024
#define MAX_LOOPS 32000

char tbuf1[4*BLOCK_SIZE];
char tbuf2[4*BLOCK_SIZE];

int main(void)
{
    int fd1, fd2, count;

    /* O_CREAT needs an explicit mode argument. */
    if (-1 == (fd1 = open("/tmp/test1", O_RDWR|O_CREAT|O_TRUNC, 0644))) {
        printf("Can't open test1.\n");
        exit(1);
    }

    if (-1 == (fd2 = open("/tmp/test2", O_RDWR|O_CREAT|O_TRUNC, 0644))) {
        printf("Can't open test2.\n");
        exit(1);
    }

    /* Alternate writes between the two files to keep a steady stream
       of dirty buffers heading for bdflush. */
    for (count = 0; count < MAX_LOOPS; count++) {

        memset(tbuf1, count % 255, 2*BLOCK_SIZE);
        if (-1 == write(fd1, tbuf1, 2*BLOCK_SIZE)) {
            printf("Write failed\n");
            exit(1);
        }

        memset(tbuf2, count % 255, 2*BLOCK_SIZE);
        if (-1 == write(fd2, tbuf2, 2*BLOCK_SIZE)) {
            printf("Write failed\n");
            exit(1);
        }
    }

    close(fd1);
    close(fd2);
    return 0;
}
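
(If you want to try it: save it as, say, writetest.c, build it with
"gcc -O2 -o writetest writetest.c", and run it under "time ./writetest"
followed by a sync, so the numbers include the buffers being flushed.
The file name and flags there are just an example, use whatever you like.)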
