Avi Kivity wrote:
> On 04/06/2010 07:14 PM, Thomas Gleixner wrote:
>>> IMO the best solution is to spin in userspace while the lock holder is
>>> running, fall into the kernel when it is scheduled out.
>>
>> That's just not realistic as user space has no idea whether the lock
>> holder is running or not and when it's scheduled out without a syscall :)
>
> The kernel could easily expose this information by writing into the
> thread's TLS area.
>
> So:
>
> - the kernel maintains a current_cpu field in a thread's tls
> - lock() atomically writes a pointer to the current thread's current_cpu
>   when acquiring
> - the kernel writes an invalid value to current_cpu when switching out
> - a contended lock() retrieves the current_cpu pointer, and spins as
>   long as it is a valid cpu
There are certainly details to sort through in the packaging
of the mechanism but conceptually that should do the job.
So here the application has chosen a blocking lock as its optimal
synchronization primitive, and we are detecting a scenario in which
we can factor out the aggregate overhead of two context switches.
There is also the case where the application requires a polled lock,
the rationale being that the assumed lock hold/wait time is
substantially less than the above context-switch overhead.
But here we are otherwise completely
exposed to indiscriminate scheduler preemption even though
we may be holding a userland lock.
The adaptive mutex above is an optimization beyond what
is normally expected for the associated model. The preemption
of a polled lock, OTOH, can easily inflict latency several orders
of magnitude beyond what is expected in that model. Two use
cases exist here which IMO aren't related, except that the latter
unintentionally degenerates into the former.