Re: 2.4.0-test2 doesn't compile

From: Ivan Kokshaysky (ink@jurassic.park.msu.ru)
Date: Sat Jun 24 2000 - 14:17:38 EST


On Sat, Jun 24, 2000 at 10:35:42PM +1000, Andrew Morton wrote:
> --- linux-official/kernel/sched.c Thu Jun 22 20:45:36 2000
> +++ linux-akpm/kernel/sched.c Fri Jun 23 22:01:03 2000
> @@ -60,8 +60,8 @@
> * The run-queue lock locks the parts that actually access
> * and change the run-queues, and have to be interrupt-safe.
> */
> -__cacheline_aligned spinlock_t runqueue_lock = SPIN_LOCK_UNLOCKED; /* second */
> -__cacheline_aligned rwlock_t tasklist_lock = RW_LOCK_UNLOCKED; /* third */
> +spinlock_t runqueue_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; /* second */
> +rwlock_t tasklist_lock __cacheline_aligned = RW_LOCK_UNLOCKED; /* third */
>
This change was Alpha-related: the two spinlocks were sitting in the same
locked range (the same cache line on EV6), causing livelocks.
But:
 1. __cacheline_aligned isn't enough for EV6, because L1_CACHE_BYTES is
    for some reason defined as 32 for all Alphas, even though it is 64 on EV6.
 2. The proper place for such fixes is asm/spinlock.h (patch below; a small
    user-space sketch after this list illustrates the layout effect).
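As an illustrative aside, here is a minimal user-space sketch (GCC attribute
syntax; demo_spinlock_t and struct two_locks are made-up names, not the
kernel's) of why hanging __attribute__((aligned(128))) on the lock typedef
helps: any two adjacent lock instances then start at least 128 bytes apart,
so they can no longer share an EV6 cache line or locked range.

/* Minimal sketch; names are illustrative only. */
#include <stdio.h>
#include <stddef.h>

/* Same idea as the patch: the alignment lives on the typedef, so every
 * variable or member of this type is placed on a 128-byte boundary. */
typedef struct {
        volatile unsigned int lock;
} demo_spinlock_t __attribute__((aligned(128)));

/* Two locks declared back to back, like runqueue_lock and tasklist_lock
 * in kernel/sched.c. */
struct two_locks {
        demo_spinlock_t a;
        demo_spinlock_t b;
};

int main(void)
{
        /* b lands on the next 128-byte boundary, so the two locks can
         * never fall into the same 64-byte EV6 cache line. */
        printf("alignment of demo_spinlock_t: %zu\n",
               (size_t) __alignof__(demo_spinlock_t));
        printf("offset of second lock:        %zu\n",
               offsetof(struct two_locks, b));
        return 0;
}

With GCC this should print 128 for both values, i.e. the separation that the
Architecture Handbook quote in the patch below asks for.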

Ivan.

--- 2.4.0t2/include/asm-alpha/spinlock.h Fri Feb 25 09:36:05 2000
+++ linux/include/asm-alpha/spinlock.h Sat Jun 24 23:07:22 2000
@@ -14,9 +14,16 @@
  *
  * We make no fairness assumptions. They have a cost.
  */
+/*
+ * Alpha Architecture Handbook V4:
+ * Hardware implementations are encouraged to lock no more than 128 bytes.
+ * Software implementations are encouraged to separate locked locations by
+ * at least 128 bytes from other locations that could potentially be written
+ * by another processor while the first location is locked.
+ */
 
 typedef struct {
- volatile unsigned int lock /*__attribute__((aligned(32))) */;
+ volatile unsigned int lock;
 #if DEBUG_SPINLOCK
         int on_cpu;
         int line_no;
@@ -24,7 +31,7 @@
         struct task_struct * task;
         const char *base_file;
 #endif
-} spinlock_t;
+} spinlock_t __attribute__((aligned(128)));
 
 #if DEBUG_SPINLOCK
 #define SPIN_LOCK_UNLOCKED (spinlock_t) {0, -1, 0, 0, 0, 0}
@@ -95,7 +102,7 @@
 
 typedef struct {
         volatile int write_lock:1, read_counter:31;
-} /*__attribute__((aligned(32)))*/ rwlock_t;
+} rwlock_t __attribute__((aligned(128)));
 
 #define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 }
 



