Re: [PATCHv2 00/25] perf tool: Add support for multiple data file storage

From: David Ahern
Date: Tue Sep 10 2013 - 13:29:27 EST


On 9/9/13 9:06 AM, Ingo Molnar wrote:
> Aren't you losing potentially important events by doing that -- FORK,
> COMM, MMAP?

> I suspect these could/should be tracked and emitted fully (in bulk) when a
> new data file is opened, so that each partial data file is fully
> consistent?

In my case I am not saving task events, but processing them. In Jiri's case, where events are written to a file, it should be possible to stash the unprocessed task events on a list, move them to a dead-threads list when the exit happens (that list can be cleaned up from time to time), and then, on a file-dump request, dump the task events followed by the sample events.
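A minimal userspace sketch of that stash-then-dump scheme, assuming simple singly linked lists (the struct and function names are illustrative, not perf's actual internals): task events sit on a "live" list, move to a dead-threads list on exit, and a dump emits task context before samples so each data file is self-consistent.

```c
/* Hypothetical sketch of the scheme described above; not perf code. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct stashed_event {
	char desc[32];               /* stand-in for a perf event record */
	struct stashed_event *next;
};

static struct stashed_event *live_tasks;   /* threads still running   */
static struct stashed_event *dead_tasks;   /* exited, awaiting cleanup */

static void stash(struct stashed_event **list, const char *desc)
{
	struct stashed_event *ev = calloc(1, sizeof(*ev));

	snprintf(ev->desc, sizeof(ev->desc), "%s", desc);
	ev->next = *list;
	*list = ev;
}

/* On EXIT: move a thread's task events to the dead-threads list. */
static void thread_exited(const char *desc)
{
	struct stashed_event **pp = &live_tasks;

	while (*pp) {
		if (!strcmp((*pp)->desc, desc)) {
			struct stashed_event *ev = *pp;

			*pp = ev->next;
			ev->next = dead_tasks;
			dead_tasks = ev;
			return;
		}
		pp = &(*pp)->next;
	}
}

/* On a file-dump request: live task events first, then samples. */
static int dump(char *buf, size_t len, struct stashed_event *samples)
{
	struct stashed_event *ev;
	int n = 0;

	for (ev = live_tasks; ev; ev = ev->next)
		n += snprintf(buf + n, len - n, "%s ", ev->desc);
	for (ev = samples; ev; ev = ev->next)
		n += snprintf(buf + n, len - n, "%s ", ev->desc);
	return n;
}
```

The dead-threads list is what allows cleanup to be deferred: a periodic pass can free it without racing the dump path.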


>> I have a flight recorder style command that addresses this problem
>> (long-running/daemons) by processing task events and then stashing the
>> sample events on a time-ordered list with chopping to maintain the time
>> window.

> Could this be used to emit currently relevant task context?

Sure.
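The time-window chopping mentioned above can be sketched as follows, assuming samples arrive roughly in time order (names and the WINDOW constant are illustrative; a real implementation would sit on perf's sample-processing path):

```c
/* Hypothetical sketch of flight-recorder windowing; not perf code. */
#include <stdlib.h>

#define WINDOW 100		/* retention window, arbitrary time units */

struct sample {
	unsigned long long time;
	struct sample *next;	/* list runs oldest -> newest */
};

static struct sample *head, *tail;

/* Append a sample (assumed to arrive in roughly increasing time). */
static void record(unsigned long long t)
{
	struct sample *s = calloc(1, sizeof(*s));

	s->time = t;
	if (tail)
		tail->next = s;
	else
		head = s;
	tail = s;
}

/* Chop: free everything older than the newest sample minus WINDOW. */
static void chop(void)
{
	if (!tail)
		return;
	while (head && head->time + WINDOW < tail->time) {
		struct sample *old = head;

		head = head->next;
		free(old);
	}
}

static int count(void)
{
	struct sample *s;
	int n = 0;

	for (s = head; s; s = s->next)
		n++;
	return n;
}
```

Because the list is time-ordered, chopping is a cheap walk from the head that stops at the first in-window sample.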


> Btw., I also think it would be useful to have kernel support for that -
> the 'collections' stuff I talked about a good while ago: the kernel would
> work with user-space to iterate over all MMAPs and all running COMMs at
> the opening of a tracing session.
>
> That way we could avoid racy access to /proc, we could make sure that all
> information that is emitted by FORK/COMM/MMAP is also emitted for the
> 'bulk' data, etc.

Walking the task list and emitting events would be better, but wouldn't holding the task lock (tasklist_lock?) be a performance hit? (I thought that lock is needed when walking the task list.)

David

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/