File questions: Head truncate? sharing FDs between procs?

From: Ian S. Nelson (ian.nelson@echostar.com)
Date: Tue May 23 2000 - 12:50:55 EST


I've got some questions for you file and filesystem gurus:

I think I know the answer to this first question already. Is there a
head-truncate function? I don't think there is, but you never know. The
situation is like this: we've got a large file (multiple gigs), and at
some point, while still writing data to it, we realize that we don't want
the first 500MB, so we want to truncate it off the front of the file while
keeping it open and continuing to write to it. Would anyone else want
head-truncate if I get roped into writing it?

The second one is a bit different. We're working on the same large file,
and we've got two threads accessing it: a reader and a writer. For the
sake of this example, let's "pretend" that this is something like video
data. It works like this: in normal operation the user can "watch TV"
and record what he is watching while he is watching it. A writer thread
does the recording, and there is no reader thread, since we just pump
data from a "tuner" to the "TV" and write it to disk in the middle.

The user also has the option of pressing "pause." When the user presses
pause, a reader thread is spawned; it opens a file descriptor (FD) on
the file the writer is writing and seeks to the end. When the user then
presses "play," the reader thread starts reading data and pumping it out
to the TV. Since the writer kept writing, there is a gap between the
reader and the writer, because the end of the file keeps growing (call
the gap X seconds/bytes/or however you wish to think about it).

This code mostly works, as described. The problem occurs when the gap X
between the reader and the writer is smaller than some buffer size in
the kernel, Y. One hacky solution, which doesn't work because it causes
too much of a performance hit, is to fflush the data from the writer
thread every X seconds. The other hacky solution is to say that you have
to "pause" for at least Z seconds and cannot do it for a shorter time;
of course Z varies with system memory and other things, plus it just
sucks to be able to pause for 20 seconds but not for 3.

What I'd like to do is reuse the pages in memory before they get
flushed, if they are still there. I bet a shared mmap will do the trick,
but is there some way to share FDs and the pages associated with them
between different processes? Any ideas on whether this can work with
open/close? I already know what would have to be done to the kernel to
"fix" this, but it would really break the kernel, and there might
already be a way to do it.

thanks,
Ian S. Nelson

