Re: Query on moving Recovery remoteproc work to a separate wq instead of system freezable wq

From: Bjorn Andersson
Date: Mon Jan 17 2022 - 17:20:45 EST


On Mon 17 Jan 09:09 CST 2022, Mukesh Ojha wrote:

> Hi,
>
> There could be a situation where there is too much load (from tasks affined

As in "it's theoretically possible" or "we run into this issue all the
time"?

> to a particular core) on a core, so the rproc recovery thread does not get
> a chance to run for no reason other than that load. If we make this queue
> unbound, then this work can run on any core.
>
> Kindly let me know if I can post a proper patch for this, like the one below.
>
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -59,6 +59,7 @@ static int rproc_release_carveout(struct rproc *rproc,
>
>  /* Unique indices for remoteproc devices */
>  static DEFINE_IDA(rproc_dev_index);
> +static struct workqueue_struct *rproc_recovery_wq;
>
>  static const char * const rproc_crash_names[] = {
>         [RPROC_MMUFAULT]        = "mmufault",
> @@ -2487,7 +2488,7 @@ void rproc_report_crash(struct rproc *rproc, enum rproc_crash_type type)
>                 rproc->name, rproc_crash_to_string(type));
>
>         /* Have a worker handle the error; ensure system is not suspended */
> -       queue_work(system_freezable_wq, &rproc->crash_handler);
> +       queue_work(rproc_recovery_wq, &rproc->crash_handler);
>  }
>  EXPORT_SYMBOL(rproc_report_crash);
>
> @@ -2532,6 +2533,12 @@ static void __exit rproc_exit_panic(void)
>
>  static int __init remoteproc_init(void)
>  {
> +       rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq", WQ_UNBOUND |
> +                               WQ_HIGHPRI | WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);

Afaict this is not only a separate work queue, but a high priority, "cpu
intensive" work queue. Does that really represent the urgency of getting
the recovery under way?
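
If the intent is just to get off a loaded CPU, a plain unbound (and still
freezable) workqueue would seem to be enough. A minimal, untested sketch,
assuming the allocation stays in remoteproc_init() and keeping your
rproc_recovery_wq name:

	/* Sketch only: unbound + freezable, without the WQ_HIGHPRI /
	 * WQ_CPU_INTENSIVE hints questioned above.
	 */
	rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq",
					    WQ_UNBOUND | WQ_FREEZABLE, 0);
	if (!rproc_recovery_wq)
		pr_err("remoteproc: failed to create recovery workqueue\n");

	/* In rproc_report_crash(), fall back to the current behaviour if
	 * the dedicated queue could not be created.
	 */
	queue_work(rproc_recovery_wq ?: system_freezable_wq,
		   &rproc->crash_handler);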

Regards,
Bjorn

> +       if (!rproc_recovery_wq) {
> +               pr_err("creation of rproc_recovery_wq failed\n");
> +       }
> +
>
> Thanks,
> Mukesh