Re: [PATCH 0/4] workqueue_tracepoint: Add worklet tracepoints for worklet lifecycle tracing

From: Frederic Weisbecker
Date: Fri Apr 24 2009 - 19:27:28 EST


On Sat, Apr 25, 2009 at 12:59:10AM +0200, Frederic Weisbecker wrote:
> On Fri, Apr 24, 2009 at 01:06:16PM -0700, Andrew Morton wrote:
> > On Fri, 24 Apr 2009 19:42:19 +0800
> > Zhaolei <zhaolei@xxxxxxxxxxxxxx> wrote:
> >
> > > These patches add tracepoints for per-worklet tracing.
> > > Now we have enough tracepoints to start making trace_workqueue.c support
> > > worklet time measurement.
> >
> > I'm not seeing anywhere in this patchset a description of the user
> > interface. What does the operator/programmer actually get to see from
> > the kernel as a result of these changes?
> >
> > A complete usage example would be an appropriate way of communicating
> > all of this.
> >
> > The patches introduce a moderate amount of tracing-specific hooks into
> > the core workqueue code, which inevitably increases the maintenance
> > load for that code. It is important that it be demonstrated that the
> > benefits of the code are worth that cost. Hence it is important that
> > these benefits be demonstrated to us, by yourself. Please.
> >
> > Another way of looking at it: which previously-encountered problems
> > would this facility have helped us to solve? How will this facility
> > help us to solve problems in the future? Looking at this patch series
> > I cannot answer those questions!
> >
>
>
> Hi Andrew,
>
> Although I'm not the author of this patchset, I'm somewhat
> involved in the workqueue tracer and I would like to express
> my opinion on what is happening.
>
> Until recently, the workqueue tracer was a basic machine. It was
> designed to trace at the workqueue level; we were not yet thinking
> about the worklet level. It hooked only four events:
>
> - creation of a workqueue thread
> - cleanup/destruction of these threads
> - insertion of a work in a workqueue
> - execution of this work
>
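
For the curious, each of those four events boils down to a tracepoint
declaration plus a single call placed in kernel/workqueue.c. Below is a
sketch, modeled on include/trace/workqueue.h, with only two of the four
declarations shown:

#include <linux/tracepoint.h>
#include <linux/workqueue.h>
#include <linux/sched.h>

/*
 * One declaration per event; the hook itself is then a single call,
 * e.g. trace_workqueue_insertion(cwq->thread, work) placed in
 * insert_work(), and similarly for the three other events.
 */
DECLARE_TRACE(workqueue_insertion,
	TP_PROTO(struct task_struct *wq_thread, struct work_struct *work),
	TP_ARGS(wq_thread, work));

DECLARE_TRACE(workqueue_execution,
	TP_PROTO(struct task_struct *wq_thread, struct work_struct *work),
	TP_ARGS(wq_thread, work));
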
> The output looked like the following histogram:
>
> # CPU  INSERTED  EXECUTED   WORKQUEUE THREAD
> #
>   1      125       125      reiserfs/1
>   1        0         0      scsi_tgtd/1
>   1        0         0      aio/1
>   1        0         0      ata/1
>   1      114       114      kblockd/1
>   1        0         0      kintegrityd/1
>   1     2147      2147      events/1
>
>   0        0         0      kpsmoused
>   0      105       105      reiserfs/0
>   0        0         0      scsi_tgtd/0
>   0        0         0      aio/0
>   0        0         0      ata_aux
>   0        0         0      ata/0
>   0        0         0      cqueue
>   0        0         0      kacpi_notify
>   0        0         0      kacpid
>   0      149       149      kblockd/0
>   0        0         0      kintegrityd/0
>   0     1000      1000      khelper
>   0     2270      2270      events/0
>
>
> Its purpose and the information it gave were limited, though still
> useful: it provided some information about how frequently works were
> inserted into each workqueue.
>
> Why is that useful? Because it gives a kernel developer some data
> for deciding how to defer async jobs: should he use the traditional,
> general-purpose events workqueues or a dedicated workqueue?
>
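
In code, that decision is between the two forms below (a sketch with
made-up names, my_work_fn and my_wq; create_workqueue() spawns one
dedicated thread per CPU):

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

static void my_work_fn(struct work_struct *work)
{
	/* the deferred job */
}
static DECLARE_WORK(my_work, my_work_fn);

/* Option 1: ride the shared, general-purpose events/%d threads. */
static void defer_on_events(void)
{
	schedule_work(&my_work);
}

/* Option 2: give the job dedicated threads of its own. */
static struct workqueue_struct *my_wq;

static int __init my_init(void)
{
	my_wq = create_workqueue("my_wq");
	if (!my_wq)
		return -ENOMEM;
	queue_work(my_wq, &my_work);
	return 0;
}

A dedicated queue buys isolation from other works at the cost of extra
kthreads, and insertion counts like the ones above are exactly what
tells you whether a given work is hot enough to deserve that.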

Moreover, I also think that such async job behaviour instrumentation
may help in choosing the best facility for a given work more carefully,
especially since the kernel has gained more infrastructure in that
field (sketched below):

- the shared events/%d workqueues
- a private workqueue
- an async job
- slow work
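
From the call site, those facilities look roughly as follows (a
sketch; every my_* name is made up, and error handling is elided):

#include <linux/workqueue.h>
#include <linux/async.h>
#include <linux/slow-work.h>

static void my_work_fn(struct work_struct *work) { /* ... */ }
static DECLARE_WORK(my_work, my_work_fn);

static void my_async_fn(void *data, async_cookie_t cookie)
{
	/* runs in a thread from the async pool */
}

static int my_sw_get_ref(struct slow_work *work) { return 0; }
static void my_sw_put_ref(struct slow_work *work) { }
static void my_sw_execute(struct slow_work *work) { /* ... */ }

static const struct slow_work_ops my_sw_ops = {
	.get_ref	= my_sw_get_ref,
	.put_ref	= my_sw_put_ref,
	.execute	= my_sw_execute,
};
static struct slow_work my_sw;

static void defer_examples(void)
{
	/* shared events/%d threads */
	schedule_work(&my_work);

	/* async job */
	async_schedule(my_async_fn, NULL);

	/* slow work; the user must be registered once beforehand */
	if (slow_work_register_user() == 0) {
		slow_work_init(&my_sw, &my_sw_ops);
		slow_work_enqueue(&my_sw);
	}
}

Each facility schedules and runs works differently, which is precisely
why per-worklet instrumentation is needed to compare them on real
workloads.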

Frederic.
