Re: [PATCH 6/6] Makes procs file writable to move all threads by tgid at once

From: Serge E. Hallyn
Date: Mon Aug 03 2009 - 15:46:22 EST


Quoting Serge E. Hallyn (serue@xxxxxxxxxx):
> Quoting Benjamin Blum (bblum@xxxxxxxxxx):
> > On Mon, Aug 3, 2009 at 1:54 PM, Serge E. Hallyn <serue@xxxxxxxxxx> wrote:
> > > Quoting Ben Blum (bblum@xxxxxxxxxx):
> > > What *exactly* is it we are protecting with cgroup_fork_mutex?
> > > 'fork' (as the name implies) is not a good answer, since we should be
> > > protecting data, not code. If it is solely tsk->cgroups, then perhaps
> > > we should in fact try switching to (s?)rcu.  Then cgroup_fork() could
> > > just do rcu_read_lock, while cgroup_task_migrate() would make the change
> > > under a spinlock (to protect against concurrent cgroup_task_migrate()s),
> > > using rcu_assign_pointer to let cgroup_fork() see consistent data
> > > either before or after the update...  That might mean that any checks
> > > involving the # of tasks, done before the migrate completes, could be
> > > invalidated by the time it does?  Seems acceptable (since it'll be a
> > > small overcharge at most and can be quickly remedied).
> >
> > You'll notice where the rwsem is released - not until cgroup_post_fork
> > or cgroup_fork_failed. It doesn't just protect the tsk->cgroups
> > pointer, but rather guarantees atomicity between adjusting
> > tsk->cgroups and attaching it to the cgroups lists with respect to the
> > critical section in attach_proc. If you've a better name for a lock
> > guarding against such a race, do suggest one.
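
Just to make sure I'm reading the patch right, the shape is roughly
this (paraphrased, not the actual code):

void cgroup_fork(struct task_struct *child)
{
	/* taken here, held across the whole fork */
	down_read(&cgroup_fork_mutex);
	child->cgroups = current->cgroups;
	get_css_set(child->cgroups);
}

void cgroup_post_fork(struct task_struct *child)
{
	/* ... attach child to its css_set's task list ... */
	up_read(&cgroup_fork_mutex);	/* cgroup_fork_failed() also drops it */
}

int attach_proc(struct cgroup *cgrp, struct task_struct *leader)
{
	/* excludes every fork in the system while the group moves */
	down_write(&cgroup_fork_mutex);
	/* ... walk leader's thread group, migrate each thread and
	 *     splice them onto the new css_set lists ... */
	up_write(&cgroup_fork_mutex);
	return 0;
}
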
>
> No, the name is pretty accurate - it's the lock itself I'm objecting
> to. Maybe it's the best we can do, though.

This is probably a stupid idea, but... what about having zero
overhead at clone(), and instead, at cgroup_task_migrate(),
dequeue_task()ing all of the affected threads for the duration of
the migrate?
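
Very hand-wavy, but something like the below (freeze_thread() and
unfreeze_thread() don't exist, tasklist locking is ignored, and the
cgroup_task_migrate() signature is made up - just illustrating the
shape of the idea):

void attach_proc_by_freezing(struct cgroup *cgrp, struct task_struct *leader)
{
	struct task_struct *t;

	/* pull every thread in the group off its runqueue and wait
	 * until none of them is still running anywhere */
	t = leader;
	do {
		freeze_thread(t);		/* hypothetical helper */
	} while_each_thread(leader, t);

	/* nobody in the group can be forking now, so the clone() path
	 * needs no locking against this at all */
	t = leader;
	do {
		cgroup_task_migrate(cgrp, t);	/* signature hand-waved */
	} while_each_thread(leader, t);

	t = leader;
	do {
		unfreeze_thread(t);		/* back onto the runqueues */
	} while_each_thread(leader, t);
}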

/me prepares to be hit by blunt objects

-serge