please speak to your sys admin

From: rob herrmann (robherrmann@uswest.net)
Date: Thu Jan 20 2000 - 19:10:18 EST


612.285.7976
robherrmann@uswest.net

attached mail follows:


I have been trying to follow this thread, but I seem to have missed the URL
or reference to the IBM analysis that is under discussion. Could someone who
has participated in this exchange please repost the URL or the reference?

Thanks,

Keith Bottner
kbottner@istation.com

-----Original Message-----
From: owner-linux-kernel@vger.rutgers.edu
[mailto:owner-linux-kernel@vger.rutgers.edu] On Behalf Of Davide Libenzi
Sent: Thursday, January 20, 2000 7:41 AM
To: Peter Rival
Cc: linux-kernel@vger.rutgers.edu
Subject: Re: Interesting analysis of linux kernel threading by IBM

Thursday, January 20, 2000 3:14 PM
Peter Rival <frival@zk3.dec.com> wrote :
> No offense, but it is this type of thinking that will keep Linux out of
> the datacenter. What you should say is that it is not a typical (or
> realistic) workload _for me_. Hundreds of tasks is trivial here - we have
> systems running with well over 100 users actively working that are two or
> more generations old (that's a much bigger thing for Alpha than for
> Intel). On our newer systems we fully expect hundreds, if not thousands,
> of tasks. The more commercially accepted Linux becomes, the more common
> large configurations are going to be, and we should be thinking about
> that now - not when we're being shot all over creation for not doing what
> everyone said we could (a la WinNT).
>

I can only agree here, given that there are practically no costs to sustain
under normal situations, and the improvements are concrete under high load.
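
For context, the cost in question is the scheduler's run-queue scan: the
2.2/2.3-era schedule() walks every runnable task and computes a goodness
value for each before picking a winner, so every reschedule is linear in the
number of runnable tasks. A simplified sketch of that scan follows; the
types and the goodness() weighting are illustrative stand-ins, not the
kernel source.

    #include <stddef.h>

    /* Simplified picture of the 2.2/2.3-era scheduler: every reschedule
     * walks the whole run queue and keeps the task with the highest
     * goodness, so the cost grows with the number of runnable tasks. */
    struct task {
        struct task *next_run;
        int counter;                 /* remaining timeslice */
        int priority;
    };

    static int goodness(const struct task *t)
    {
        /* stand-in for the kernel's weighting of timeslice, priority,
         * and CPU/memory-map affinity */
        return t->counter + t->priority;
    }

    static struct task *pick_next(struct task *runqueue)
    {
        struct task *best = NULL;
        int best_weight = -1;

        for (struct task *t = runqueue; t != NULL; t = t->next_run) {
            int w = goodness(t);
            if (w > best_weight) {   /* remember the best so far */
                best_weight = w;
                best = t;
            }
        }
        return best;                 /* O(n) in runnable tasks */
    }

With hundreds or thousands of runnable tasks, that scan runs on every
context switch, which is why large configurations are the worry here.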

Cheers,
    Davide.

--
Debian, the freedom in freedom.

- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.rutgers.edu Please read the FAQ at http://www.tux.org/lkml/

attached mail follows:


Hi Victor,

Thursday, January 20, 2000 4:14 PM
Khimenko Victor <khim@sch57.msk.ru> wrote :
> Even 8 tasks is unusual for Desktop. Usually there are 2-3 active tasks.
> On the other hand, Desktop is usually idle all the time anyway, so 1.5%
> looks affordable for sure. And for a server with LOTS of active processes
> (Apache, *SQL, etc.) it can be a real win.
>
> Where can I find your patch ?

I'm in the office now, and here I have the patch coded for 2.3.5. Anyway,
this weekend I plan to port my scheduler to 2.2.14 and 2.3.39 and to post it
on linux-kernel Sunday evening or Monday morning (Europe time). This patch
also includes a semaphore fix that avoids releasing all the tasks in a wait
queue by selecting the best one inside wake_up().
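
A minimal sketch of that wake-one idea; the structures and the weighting
below are illustrative stand-ins, not the actual patch or the kernel's
wait-queue code.

    #include <stddef.h>

    /* Wake only the most eligible sleeper instead of the whole queue.
     * Waking every task makes N tasks race for the semaphore while N-1
     * of them immediately go back to sleep; picking one avoids that. */
    struct sleeper {
        struct sleeper *next;
        int weight;                      /* stand-in for goodness() */
        void (*wake)(struct sleeper *);
    };

    static void wake_up_best(struct sleeper *queue)
    {
        struct sleeper *best = NULL;

        for (struct sleeper *s = queue; s != NULL; s = s->next)
            if (best == NULL || s->weight > best->weight)
                best = s;                /* best candidate so far */

        if (best != NULL)
            best->wake(best);            /* wake only that one task */
    }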

Cheers,
    Davide.

--
Debian, the freedom in freedom.


attached mail follows:


I'm posting here because redhat/bugzilla moved my bug report from cpio to kernel, so they think this is a kernel/SCSI issue.

==

kernel 2.2.14, cpio-2.4.2-13 (rh61)
scsi_mod, aha152x, st as modules
SCSI controller: Adaptec AVA-1050
SCSI tape unit: Tandberg SLR2 525MB

PROBLEM: I can do multivolume cpio backups to tape, but then I cannot restore them, due to cpio problems that may be caused by the kernel.

I use these commands:

Backup:

    find /path | cpio -ovHcrc -O /dev/st0

It starts the backup, then asks me for a 2nd tape with: "Found end of tape.
Load next tape and press RETURN." I insert the 2nd tape and the backup ends
regularly with the message about the blocks copied.

Restore:

    cpio -ivdumHcrc -I /dev/st0

It asks me for the 2nd tape with: "Found end of tape. Load next tape and
press RETURN." Then it asks me for a 3rd tape(!) with the same message.
Since I do not have a 3rd tape (the backup took 2 tapes only), I simply
press RETURN, leaving the 2nd tape in, and get:

    cpio: read error: No medium found

(the 2nd tape is still in the drive).

Note that:
- The stuff restored is only 5-10k smaller than the original size; very little, but the restore is unreliable.
- The backup used very little of the 2nd tape (say 10MB).
- I even tried with -B 5120.
- I tried with different tapes (3M), different controllers (all ISA Adaptec with module aha152x.o) and different tape units (all Tandberg).

I found an article on dejanews by Kai Mäkisara regarding problems with cpio multivolume restore with kernels 2.0.x (cpio quits as soon as it arrives at the end of the 1st tape with an I/O error). I found nothing regarding kernels 2.2.x.

===begin deja article on 2.0.x kernels

On Tue, Mar 24, 1998 at 11:26:10AM +0100, Stephane KLEIN wrote:
> But the restoring (cpio -i ..) fails at the end of the first tape with a
> read IO error.

I think this is a known bug in 2.0.X. The following message is from Kai Mäkisara, maintainer of the SCSI tape driver:

| On Wed, 28 May 1997, Jan Echternach wrote:
| > There is a problem with taper not detecting the end of tape. With my
| > SCSI DAT drive and kernel 2.0.30, read() returns 0 only one time, and
| > all subsequent reads return EIO. It seems that ftape returns _two_
| > times 0, which taper handles correctly.
| >
| ...
| >
| > Do 2.1.x kernels return two times 0? If so, should 2.0.30 also return
| > two times 0 at end of tape?
| >
| What should happen, according to the BSD semantics, is this:
| - at filemark, the first read returns 0 bytes, the second read returns
|   data from the next file
| - at EOD, the first and the second read return 0 bytes, the third returns
|   error (-1)
| i.e., 2.1.x operates correctly.
|
| The EOD behaviour was fixed at 2.1.20. In principle, it should be fixed
| also at 2.0.31 but I think the risks are too big: fixing EOD behaviour
| may break something else (the fix from 2.1.20 can't be used directly).
|
| Kai

===end deja article
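
Those BSD semantics can be checked from user space with a loop that counts
consecutive zero-length reads. A minimal sketch, assuming the st device and
a 10k block size from the report above (both illustrative):

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* Distinguish a filemark from end-of-data the way Kai describes:
     * one zero-length read = filemark, two in a row = EOD, and a
     * third read at EOD returns -1. */
    int main(void)
    {
        char buf[10240];
        int zeros = 0;                   /* consecutive zero-length reads */
        int fd = open("/dev/st0", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        for (;;) {
            ssize_t n = read(fd, buf, sizeof(buf));

            if (n > 0) {
                zeros = 0;               /* real data: reset the counter */
            } else if (n == 0 && ++zeros == 2) {
                puts("end of data");     /* second zero read in a row */
                break;
            } else if (n == 0) {
                puts("filemark: next read starts the next file");
            } else {
                perror("read");          /* e.g. the third read at EOD */
                break;
            }
        }
        close(fd);
        return 0;
    }

A 2.0.x kernel that returns only one zero-length read at EOD, as in the
quoted report, would make this loop (and cpio's equivalent) treat EOD's
first zero read as a filemark and then hit the I/O error the article
describes.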

Thanks.

--
giulioo@pobox.com

- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.rutgers.edu Please read the FAQ at http://www.tux.org/lkml/



This archive was generated by hypermail 2b29 : Sun Jan 23 2000 - 21:00:23 EST