> On Thu, 2006-29-06 at 16:01 -0400, Shailabh Nagar wrote:
> > Jamal,
> > any thoughts on the flow control capabilities of netlink that apply
> > here ? Usage of the connection is to supply statistics data to
> > userspace.
>
> Yes.
> if you want reliable delivery, then you cant just depend on async events
> from the kernel -> user - which i am assuming is the way stats get
> delivered as processes exit?
> Sorry, i dont remember the details. You
> need some synchronous scheme to ask the kernel to do a "get" or "dump".

Oh, yes. Dump is synchronous. So it won't be useful unless we buffer task
exit records within the kernel.
> Lets be clear about one thing:
> The problem really has nothing to do with gen/netlink or any other
> scheme you use;->
> It has everything to do with reliability implications and the fact
> that you need to assume memory is a finite resource - at one point
> or another you will run out of memory ;-> And of course then messages
> will be lost. So for gen/netlink, just make sure you have a large socket
> buffer and you would most likely be fine. I havent seen how the numbers
> were reached: But if you say you receive 14K exits/sec, each of which is
> a 50B message, I would think a 1M socket buffer would be plenty.

The rates (or upper bounds) that are being discussed here, as of now, are
1000 exits/sec/CPU.
> You can find out about lack of memory in netlink when you get an ENOBUFS.

Hmm. So we could buffer the per-task exit data within taskstats (the mem
consumption would grow).

> As an example, you should then do a kernel query. Clearly if you do a
> query of that sort, you may not want to find obsolete info. Therefore,
> as a suggestion, you may want to keep sequence numbers of sorts as
> markers. Perhaps keep a 32-bit field which monotonically increases per
> process exit, or use the pid as the sequence number etc.
> As for throttling - Shailabh, I think we talked about this:
> - You could maintain info using some thresholds and a timer. Then,
>   when the timer expires or a threshold is exceeded, send to user space.