Re: seq_file and exporting dynamically allocated data

From: Harald Welte
Date: Sun Nov 16 2003 - 15:51:18 EST


On Sat, Nov 15, 2003 at 08:50:55PM +0000, Tigran Aivazian wrote:
> But I was referring to the complication on the kernel side, whereby if the
> data (collectively, all the entries) is more than 1 page then ->stop()
> must be called and a page returned to user and then the kernel must build
> another page but all it knows is the integer 'offset'. Anything could have
> happened to the list (the task list in this case) since the ->stop() routine
> dropped the spinlock. So, it is not obvious from which position to
> start building the data for the new page.

Yes, I am well aware of that issue. For the connection tracking table and
the dstlimit hashtables I was referring to, I actually relate 'pos' to the
hash bucket number, not to the logical entry in the table.

The number of hash buckets is constant, so that's at least something the
user can count on.

However, there will be a problem when there are too many entries within
a single bucket - since all of them would have to be returned within a
single read() call.
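To make the idea concrete, here is a minimal sketch (not my actual code) of
a seq_file iterator that treats *pos as a hash bucket index. The names
my_table, my_lock, MY_HASH_SIZE and struct my_entry are made up for
illustration; the real conntrack/dstlimit code differs in the details:

    /*
     * Sketch only: seq_file iteration where *pos is a bucket number.
     * my_table, my_lock, MY_HASH_SIZE, struct my_entry are assumptions.
     */
    #include <linux/types.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/seq_file.h>

    #define MY_HASH_SIZE 512                /* constant bucket count */
    extern struct list_head my_table[MY_HASH_SIZE];
    extern spinlock_t my_lock;

    struct my_entry {
            struct list_head list;
            u32 id;
    };

    static void *my_seq_start(struct seq_file *s, loff_t *pos)
    {
            spin_lock_bh(&my_lock);
            if (*pos >= MY_HASH_SIZE)
                    return NULL;            /* past the last bucket */
            return pos;                     /* iterate over bucket numbers */
    }

    static void *my_seq_next(struct seq_file *s, void *v, loff_t *pos)
    {
            (*pos)++;
            if (*pos >= MY_HASH_SIZE)
                    return NULL;
            return pos;
    }

    static void my_seq_stop(struct seq_file *s, void *v)
    {
            spin_unlock_bh(&my_lock);
    }

    /* One ->show() call dumps a whole bucket, so a very long chain has
     * to fit into the seq_file buffer in one go - which is exactly the
     * problem described above. */
    static int my_seq_show(struct seq_file *s, void *v)
    {
            unsigned int bucket = *(loff_t *)v;
            struct my_entry *e;

            list_for_each_entry(e, &my_table[bucket], list)
                    seq_printf(s, "bucket=%u id=%u\n", bucket, e->id);
            return 0;
    }

    static struct seq_operations my_seq_ops = {
            .start = my_seq_start,
            .next  = my_seq_next,
            .stop  = my_seq_stop,
            .show  = my_seq_show,
    };

The advantage is that a bucket number stays meaningful across the
->stop()/->start() cycle even though the lock is dropped in between; the
downside, as said, is that everything hanging off one bucket has to be
emitted from a single ->show() call.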

> Kind regards
> Tigran

--
- Harald Welte <laforge@xxxxxxxxxxxxx> http://www.netfilter.org/
============================================================================
"Fragmentation is like classful addressing -- an interesting early
architectural error that shows how much experimentation was going
on while IP was being designed." -- Paul Vixie
