[RFC PATCH 03/11] x86,fpu: move __thread_fpu_begin to when the task has the fpu

From: riel
Date: Sun Jan 11 2015 - 17:07:05 EST


From: Rik van Riel <riel@xxxxxxxxxx>

Move the call to __thread_fpu_begin, which in turn calls
__thread_set_has_fpu, to a spot where the task actually has
the FPU.

This is in preparation for the next patch.

This changeset introduces an extraneous clts() call when
switching from one FPU-using task to another FPU-using
task in non-eager (lazy) FPU switching mode; the next
patch gets rid of it.

Signed-off-by: Rik van Riel <riel@xxxxxxxxxx>
---
arch/x86/include/asm/fpu-internal.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 5f8f971..27556f4 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -420,7 +420,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 	if (preload) {
 		new->thread.fpu_counter++;
 		set_thread_flag(TIF_LOAD_FPU);
-		__thread_set_has_fpu(new);
 		prefetch(new->thread.fpu.state);
 	} else if (!use_eager_fpu())
 		stts();
@@ -436,7 +435,6 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 			prefetch(new->thread.fpu.state);
 			set_thread_flag(TIF_LOAD_FPU);
 		}
-		__thread_fpu_begin(new);
 	}
 	/* else: CR0.TS is still set from a previous FPU switch */
 }
@@ -451,6 +449,7 @@ static inline void switch_fpu_prepare(struct task_struct *old, struct task_struc
 static inline void switch_fpu_finish(struct task_struct *new)
 {
 	if (test_and_clear_thread_flag(TIF_LOAD_FPU)) {
+		__thread_fpu_begin(new);
 		if (unlikely(restore_fpu_checking(new)))
 			drop_init_fpu(new);
 	}
--
1.9.3
