Re: Work queue questions

From: Chinmay V S
Date: Mon Sep 24 2012 - 13:08:12 EST


Hi,

Looking at the timestamps in your previous logs (copied below for reference),

kworker/u:1-21 [000] 110.964895: task_event: MYTASKJOB2381 XStarted
kworker/u:1-21 [000] 110.964909: task_event: MYTASKJOB2381 Xstopped
kworker/u:1-21 [000] 110.965137: task_event: MYTASKJOB2382 XStarted
kworker/u:1-21 [000] 110.965154: task_event: MYTASKJOB2382 Xstopped
kworker/u:5-3724 [000] 110.965311: task_event: MYTASKJOB2383 XStarted
kworker/u:5-3724 [000] 110.965325: task_event: MYTASKJOB2383 Xstopped

110.964895 to 110.964909 is 0.014ms (14 microseconds). The supposedly
"large" amount of copying that you assume is NOT that large: the kworker
thread executes your work item in a few microseconds and is free again
by the time the next work item is queued.

Either the copy you are doing is genuinely small, or it is being
performed asynchronously (DMA?). Hence the work items finish quickly and
there is never enough overlap for a second kworker to be needed.

On Mon, Sep 24, 2012 at 10:21 PM, Deepawali Verma <dverma249@xxxxxxxxx> wrote:
>
> Hi,
>
> This is a sample code snippet, as I cannot post my project code. In reality the work handler is copying big chunks of data; that code is in my driver. This is running on a quad-core Cortex-A9, which is why I raised the point: if there are 4 CPU cores, then there must be parallelism. Now Tejun, what do you say?
>
> Regards,
> Deepa
>
> On Monday, September 24, 2012, Chinmay V S wrote:
>>
>> There is nothing in the sub_task_work_handler() to keep the CPU occupied. Try adding a significant amount of work in it to keep it occupied. Also, are you running on an SMP (multicore) system?...
>>
>> On Mon, Sep 24, 2012 at 12:55 PM, Deepawali Verma <dverma249@xxxxxxxxx> wrote:
>>>
>>> Hi Tejun,
>>>
>>> Here are some code snippets from my device driver:
>>>
>>> #define NUMBER_OF_SUBTASKS 3
>>>
>>> struct my_driver_object
>>> {
>>>         struct workqueue_struct *sub_task_wq;
>>>         struct work_struct sub_task_work;
>>>         char my_obj_wq_name[80];
>>>         int task_id;
>>> };
>>>
>>> struct my_driver_object obj[NUMBER_OF_SUBTASKS];
>>>
>>>
>>> void my_driver_init(void)
>>> {
>>>         int i = 0;
>>> --------------------------------------
>>>         for (i = 0; i < NUMBER_OF_SUBTASKS; i++)
>>>         {
>>>                 memset(obj[i].my_obj_wq_name, 0, 80);
>>>                 snprintf(obj[i].my_obj_wq_name, 80, "Task-wq:%d", i);
>>>                 obj[i].sub_task_wq = alloc_workqueue(obj[i].my_obj_wq_name, WQ_UNBOUND, 1);
>>>                 INIT_WORK(&obj[i].sub_task_work, sub_task_work_handler);
>>>         }
>>> --------------------------------------
>>> }
>>>
>>> void start_sub_tasks(void)
>>> {
>>>         int i = 0;
>>>         for (i = 0; i < NUMBER_OF_SUBTASKS; i++)
>>>         {
>>>                 queue_work(obj[i].sub_task_wq, &obj[i].sub_task_work);
>>>         }
>>> }
>>>
>>> static void sub_task_work_handler(struct work_struct *work)
>>> {
>>>         /* Ftrace marker: start */
>>>
>>>         /* Ftrace marker: end */
>>> }
>>>
>>> Ideally I was expecting that when work is queued to three different
>>> work queues it would run in parallel, but it is not behaving as
>>> expected. Let me know about this.
>>>
>>> Regards,
>>> Deepa
>>>
>>> On Sat, Sep 22, 2012 at 7:18 AM, Daniel Taylor <Daniel.Taylor@xxxxxxx> wrote:
>>> >
>>> > ...
>>> >
>>> >> >> So on so forth.
>>> >> >> Anyway how can you write chunks of data in parallel when
>>> >> >> already some worker
>>> >> >> thread is writing i.e. the system is busy.
>>> >> >> Analogy: Suppose you are ambidextrous and you are eating. Can
>>> >> >> you eat with both of your hands at the same time? The worker
>>> >> >> threads are like your hands, and keeping you fed all the time
>>> >> >> is the concept of concurrency.
>>> >> >>
>>> >> >> I am not an expert on this but from Tejun's reply I could
>>> >> >> make out this.
>>> >> >> Please correct me If I have wrongly understood the concept
>>> >> >> based on this mail
>>> >> >
>>> >> > I don't know how many cores are in the CPU Deepawali's
>>> >> using, but if I have four,
>>> >> Assuming a single core, is my explanation correct about concurrency?
>>> >
>>> > It is possible for his tasks to complete before scheduling occurs
>>> > again. If they consume all of the CPU and have no blocking action,
>>> > then yes, the tasks will run consecutively.
>>> >
>>> > ...
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>> Please read the FAQ at http://www.tux.org/lkml/