Re: [patch] call for testing -- MP3 skipping/PPP/serial performance

Bruce A. Locke (locke98@cs.newpaltz.edu)
Tue, 11 May 1999 14:26:32 -0400 (EDT)


Hi... I would be glad to give the patch a try... but could you please
send me a copy as a gzipped attachment or the like, so that pine doesn't
mangle any spaces or wrap lines?

Thanks :)

___________________________________________________________
Bruce A. Locke locke98@cs.newpaltz.edu


On Tue, 11 May 1999, Benjamin C.R. LaHaise wrote:

> Hello all,
>
> I've gotten a report from Rik van Riel that the patch below improves PPP
> throughput and makes MP3s less prone to skipping. Not too many people
> seem to have tried it, so I'm reposting with a subject that'll hopefully
> get some attention. This is different from the first version of the patch
> in that it only allows each bottom half to be run at most once during a
> successful run through do_bottom_half (e.g. one in which we manage to
> obtain the locks). Comments or suggestions for a better heuristic will be
> appreciated, as stock 2.2.x bottom half handling needs to be improved.
>
> -ben
>
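> To make the heuristic easier to see in isolation, here's a minimal
> userspace sketch of the same loop (pending_bhs and run_bh() are just
> stand-ins for the kernel's get_active_bhs()/clear_active_bhs() and
> bh_base[]; the real change is the diff below):
>
> #include <stdio.h>
>
> static unsigned long pending_bhs;	/* stand-in for the active-bh mask */
>
> static void run_bh(int nr)
> {
> 	printf("running bh %d\n", nr);
> }
>
> static void do_bottom_half_sketch(void)
> {
> 	unsigned long left = ~0UL;	/* bhs still allowed to run this pass */
> 	unsigned long active;
> 	int nr;
>
> 	while ((active = left & pending_bhs)) {
> 		left &= ~active;	/* each bh runs at most once */
> 		pending_bhs &= ~active;	/* clear_active_bhs(active) */
> 		for (nr = 0; active; nr++, active >>= 1)
> 			if (active & 1)
> 				run_bh(nr);
> 	}
> }
>
> int main(void)
> {
> 	pending_bhs = (1UL << 0) | (1UL << 3);	/* pretend two bhs are pending */
> 	do_bottom_half_sketch();
> 	return 0;
> }
>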
> --- softirq.c.orig Sun Mar 21 10:22:00 1999
> +++ softirq.c Thu May 6 21:30:47 1999
> @@ -26,45 +26,50 @@
> void (*bh_base[32])(void);
>
> /*
> - * This needs to make sure that only one bottom half handler
> - * is ever active at a time. We do this without locking by
> - * doing an atomic increment on the intr_count, and checking
> - * (nonatomically) against 1. Only if it's 1 do we schedule
> - * the bottom half.
> - *
> - * Note that the non-atomicity of the test (as opposed to the
> - * actual update) means that the test may fail, and _nobody_
> - * runs the handlers if there is a race that makes multiple
> - * CPU's get here at the same time. That's ok, we'll run them
> - * next time around.
> + * Repeatedly run over the bottom halves until there are no more, but only run
> + * each bottom half at most once. If we don't loop and one bottom half triggers
> + * another, it might get delayed a long time. If we loop indefinitely, then the
> + * system might stall completely when unable to handle the irq load. -bcrl
> */
> -static inline void run_bottom_halves(void)
> -{
> - unsigned long active;
> - void (**bh)(void);
> -
> - active = get_active_bhs();
> - clear_active_bhs(active);
> - bh = bh_base;
> - do {
> - if (active & 1)
> - (*bh)();
> - bh++;
> - active >>= 1;
> - } while (active);
> -}
> -
> asmlinkage void do_bottom_half(void)
> {
> int cpu = smp_processor_id();
> + unsigned long left = ~0UL;
> + unsigned long active;
> + void (**bh)(void);
>
> +again:
> if (softirq_trylock(cpu)) {
> if (hardirq_trylock(cpu)) {
> __sti();
> - run_bottom_halves();
> +
> + while ((active = left & get_active_bhs())) {
> + left &= ~active;
> + clear_active_bhs(active);
> + bh = bh_base;
> + do {
> + if (active & 1)
> + (*bh)();
> + bh++;
> + active >>= 1;
> + } while (active);
> + }
> +
> __cli();
> hardirq_endlock(cpu);
> + softirq_endlock(cpu);
> +
> + /*
> + * Avoid the race by checking if any bottom halves
> + * are active after releasing all locks. This is a
> + * rare race, but should be inexpensive to check. -bcrl
> + */
> + rmb();
> + if (get_active_bhs() & left)
> + goto again;
> + return;
> }
> softirq_endlock(cpu);
> }
> }
> +
>
>

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/