Re: [PATCH 4/6] ftrace, x86: make kernel text writable only for conversions

From: Masami Hiramatsu
Date: Fri Feb 27 2009 - 15:58:07 EST


Mathieu Desnoyers wrote:
> * Masami Hiramatsu (mhiramat@xxxxxxxxxx) wrote:
>> Mathieu Desnoyers wrote:
>>> * Masami Hiramatsu (mhiramat@xxxxxxxxxx) wrote:
>>>> Steven Rostedt wrote:
>>>>> On Mon, 23 Feb 2009, Mathieu Desnoyers wrote:
>>>>>>> Hmm, let's see. I simply set a bit in the PTE mappings. There aren't many,
>>>>>>> since a lot are 2M pages on x86_64. Call stop_machine, and now I can
>>>>>>> modify 1 or 20,000 locations. Set the PTE bit back. Note, the changing of
>>>>>>> the bits is only done when CONFIG_DEBUG_RODATA is set.
>>>>>>>
>>>>>>> text_poke requires allocating a page. Map the page into memory. Set up a
>>>>>>> breakpoint.
>>>>>> text_poke does not _require_ a breakpoint. text_poke can work with
>>>>>> stop_machine.
>>>>> It can? Doesn't text_poke require allocating pages? The code called by
>>>>> stop_machine is all atomic. vmap does not give an option to allocate with
>>>>> GFP_ATOMIC.
>>>> Hi,
>>>>
>>>> With my patch, text_poke() never allocates pages anymore :)
>>>>
>>>> BTW, IMHO, both of your methods are useful and have trade-offs.
>>>>
>>>> ftrace wants to change a massive amount of code at once. If we do
>>>> that with text_poke(), we have to map/unmap pages each time, and
>>>> it will take a long time -- maybe longer than one stop_machine_run().
>>>>
>>>> On the other hand, text_poke() users such as kprobes and tracepoints
>>>> just want to change a small amount of code at a time, and it will be
>>>> added/removed incrementally. If we do that with stop_machine_run(),
>>>> we'll be annoyed by frequent machine stops. (Moreover, kprobes uses
>>>> breakpoints, so it doesn't need stop_machine_run().)
>>>>
>>> Hi Masami,
>>>
>>> Is this text_poke version executable in atomic context? If so, it
>>> would be good to add a comment saying so. Please see below for
>>> comments.
>> Thank you for the comments!
>> I think it could be. Ah, the spin_lock might need to be changed to
>> spin_lock_irqsave()...
>>
>
> You are right. If we plan to execute this in both atomic and non-atomic
> contexts, spin_lock_irqsave would make sure we are always busy-looping
> with interrupts off.

Oops, when I tested spin_lock_irqsave(), it caused the following warning:

------------[ cut here ]------------
WARNING: at /home/mhiramat/ksrc/linux-2.6/kernel/smp.c:329 smp_call_function_many+0x37/0x1c9()
Hardware name: Precision WorkStation T5400
Modules linked in:
Pid: 1, comm: swapper Tainted: G W 2.6.29-rc6 #16
Call Trace:
[<c042f7b1>] warn_slowpath+0x71/0xa8
[<c044dccb>] ? trace_hardirqs_on_caller+0x18/0x145
[<c06dc42f>] ? _spin_unlock_irq+0x22/0x2f
[<c044efc9>] ? print_lock_contention_bug+0x14/0xd7
[<c044efc9>] ? print_lock_contention_bug+0x14/0xd7
[<c044cfbb>] ? trace_hardirqs_off_caller+0x18/0xa3
[<c045383b>] smp_call_function_many+0x37/0x1c9
[<c04138fc>] ? do_flush_tlb_all+0x0/0x3c
[<c04138fc>] ? do_flush_tlb_all+0x0/0x3c
[<c04539e9>] smp_call_function+0x1c/0x23
[<c0433ee1>] on_each_cpu+0xf/0x3a
[<c04138c6>] flush_tlb_all+0x14/0x16
[<c04946f7>] unmap_kernel_range+0xf/0x11
[<c06dd78a>] text_poke+0xf1/0x12c

unmap_kernel_range() requires interrupts to be enabled, but spin_lock_irqsave()
(and stop_machine() too) disables them, so we have to solve this issue.

I have some ideas:

- Export vunmap_page_range() (just remove the static) and don't use
  flush_tlb_all().
  Because this vm_area is not used by other CPUs, we don't need
  to flush the TLBs of all CPUs.

- Add an unmap_kernel_range_local() function (a rough sketch is below).

- Extend kmap_atomic_prot() to map a lowmem page when the 'prot'
  is different.
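
Here is a minimal, untested sketch of that second idea, just to show the
direction. This is an assumption, not a tested patch: the name
unmap_kernel_range_local() is only a proposal, it relies on
vunmap_page_range() being exported as in the first idea, and it uses the
x86-only __flush_tlb_one() so that only the local CPU's TLB is invalidated,
which stays safe with interrupts disabled.

---
/* Sketch only: would live somewhere x86-specific (or in mm/vmalloc.c
 * with an arch hook for the local TLB flush). */
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/tlbflush.h>

void unmap_kernel_range_local(unsigned long addr, unsigned long size)
{
	unsigned long end = addr + size;
	unsigned long va;

	/* Clear the kernel PTEs for this range (needs the non-static
	 * vunmap_page_range() from the first idea above). */
	vunmap_page_range(addr, end);

	/* Flush only this CPU's TLB entries; the mapping was never
	 * used on other CPUs, and invlpg is fine with irqs off. */
	for (va = addr; va < end; va += PAGE_SIZE)
		__flush_tlb_one(va);
}
---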


Thanks,

>
> Having spinlocks taken in _both_ interrupts-on and interrupts-off
> contexts leads to higher interrupt latencies when the interrupt-off
> path waits for an interrupt-on thread.
>
>
>>>> Thank you,
>>>>
>>> [...]
>>>> Use map_vm_area() instead of vmap() in text_poke() to avoid page allocation
>>>> and delayed unmapping.
>>>>
>>>> Signed-off-by: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
>>>> ---
>>>> arch/x86/include/asm/alternative.h | 1 +
>>>> arch/x86/kernel/alternative.c | 25 ++++++++++++++++++++-----
>>>> init/main.c | 3 +++
>>>> 3 files changed, 24 insertions(+), 5 deletions(-)
>>>>
>>>> Index: linux-2.6/arch/x86/include/asm/alternative.h
>>>> ===================================================================
>>>> --- linux-2.6.orig/arch/x86/include/asm/alternative.h
>>>> +++ linux-2.6/arch/x86/include/asm/alternative.h
>>>> @@ -177,6 +177,7 @@ extern void add_nops(void *insns, unsign
>>>> * The _early version expects the memory to already be RW.
>>>> */
>>>>
>>>> +extern void text_poke_init(void);
>>>> extern void *text_poke(void *addr, const void *opcode, size_t len);
>>>> extern void *text_poke_early(void *addr, const void *opcode, size_t len);
>>>>
>>>> Index: linux-2.6/arch/x86/kernel/alternative.c
>>>> ===================================================================
>>>> --- linux-2.6.orig/arch/x86/kernel/alternative.c
>>>> +++ linux-2.6/arch/x86/kernel/alternative.c
>>>> @@ -485,6 +485,16 @@ void *text_poke_early(void *addr, const
>>>> return addr;
>>>> }
>>>>
>>>> +static struct vm_struct *text_poke_area[2];
>>>> +static DEFINE_SPINLOCK(text_poke_lock);
>>>> +
>>>> +void __init text_poke_init(void)
>>>> +{
>>>> + text_poke_area[0] = get_vm_area(PAGE_SIZE, VM_ALLOC);
>>>> + text_poke_area[1] = get_vm_area(2 * PAGE_SIZE, VM_ALLOC);
>>> Why is this text_poke_area[1] 2 * PAGE_SIZE in size? I would have
>>> thought that text_poke_area[0] would be PAGE_SIZE, text_poke_area[1]
>>> would also be PAGE_SIZE, and that the sum of both would be 2 * PAGE_SIZE.
>> Unfortunately, the current map_vm_area() tries to map the whole size of the
>> vm_area; this means you can't use a 2-page vm_area for mapping just 1 page...
>> (or maybe we can set pages[1] = pages[0] when the 2nd page doesn't exist)
>>
>
> OK, given we sometimes have to map only a single page (e.g. at the end
> of a text section), we really need both the 1- and 2-page mappings. So I
> think your solution is good.
>
>>>> + BUG_ON(!text_poke_area[0] || !text_poke_area[1]);
>>>> +}
>>>> +
>>>> /**
>>>> * text_poke - Update instructions on a live kernel
>>>> * @addr: address to modify
>>>> @@ -501,8 +511,9 @@ void *__kprobes text_poke(void *addr, co
>>>> unsigned long flags;
>>>> char *vaddr;
>>>> int nr_pages = 2;
>>>> - struct page *pages[2];
>>>> - int i;
>>>> + struct page *pages[2], **pgp = pages;
>>> Hrm, why do you need **pgp? Could you simply pass &pages to map_vm_area?
>> As you know, pages is just the address (value) of the array, so you can't
>> get the address of that address... (pages and &pages are the same.)
>>
>> int array[2];
>> printf("%p, %p", array, &array);
>>
>> please try it :)
>>
>> And actually, map_vm_area() requires the address of a pointer.
>
> Ah yes, thanks for the explanation.
>
> After changing the spinlock/irqsave, I think that patch would be good to
> merge. And then Steve could use text_poke within stop_machine if he
> likes.
>
> Mathieu
>
>> ---
>> int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page ***pages)
>> {
>> unsigned long addr = (unsigned long)area->addr;
>> unsigned long end = addr + area->size - PAGE_SIZE;
>> int err;
>>
>> err = vmap_page_range(addr, end, prot, *pages);
>> if (err > 0) {
>> *pages += err;
>> ^^^^^^^^^^^^^^ Here, it tries to add err (= the number of mapped
>> pages) to the pages pointer!
>> err = 0;
>> }
>>
>> return err;
>> }
>> ---
>>
>>
>>> Thanks,
>>>
>>> Mathieu
>>>
>>>> + int i, ret;
>>>> + struct vm_struct *vma;
>>>>
>>>> if (!core_kernel_text((unsigned long)addr)) {
>>>> pages[0] = vmalloc_to_page(addr);
>>>> @@ -515,12 +526,16 @@ void *__kprobes text_poke(void *addr, co
>>>> BUG_ON(!pages[0]);
>>>> if (!pages[1])
>>>> nr_pages = 1;
>>>> - vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
>>>> - BUG_ON(!vaddr);
>>>> + spin_lock(&text_poke_lock);
>>>> + vma = text_poke_area[nr_pages-1];
>>>> + ret = map_vm_area(vma, PAGE_KERNEL, &pgp);
>>>> + BUG_ON(ret);
>>>> + vaddr = vma->addr;
>>>> local_irq_save(flags);
>>>> memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
>>>> local_irq_restore(flags);
>>>> - vunmap(vaddr);
>>>> + unmap_kernel_range((unsigned long)vma->addr, (unsigned long)vma->size);
>>>> + spin_unlock(&text_poke_lock);
>>>> sync_core();
>>>> /* Could also do a CLFLUSH here to speed up CPU recovery; but
>>>> that causes hangs on some VIA CPUs. */
>>>> @@ -528,3 +543,4 @@ void *__kprobes text_poke(void *addr, co
>>>> BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
>>>> return addr;
>>>> }
>>>> +EXPORT_SYMBOL_GPL(text_poke);
>>>> Index: linux-2.6/init/main.c
>>>> ===================================================================
>>>> --- linux-2.6.orig/init/main.c
>>>> +++ linux-2.6/init/main.c
>>>> @@ -676,6 +676,9 @@ asmlinkage void __init start_kernel(void
>>>> taskstats_init_early();
>>>> delayacct_init();
>>>>
>>>> +#ifdef CONFIG_X86
>>>> + text_poke_init();
>>>> +#endif
>>>> check_bugs();
>>>>
>>>> acpi_early_init(); /* before LAPIC and SMP init */
>>>
>> --
>> Masami Hiramatsu
>>
>> Software Engineer
>> Hitachi Computer Products (America) Inc.
>> Software Solutions Division
>>
>> e-mail: mhiramat@xxxxxxxxxx
>>
>

--
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: mhiramat@xxxxxxxxxx
