Re: DEVFSv50 and /dev/fb? (or /dev/fb/? ???)

Terry L Ridder (terrylr@tbcnet.com)
Mon, 10 Aug 1998 01:33:12 -0500


Hello Everyone;

Shawn below is your original reaction to my post:

<Begin Quote>
On Sun, 9 Aug 1998, Terry L Ridder wrote:

> Hello Everyone;
>
> Shawn Leas wrote:
> The second part is:
>
> "If you have a static /dev on a normal filesystem, you have to have
> all 8 million possible SCSI devices. With devfs, you don't."
>
> Which is the part that I am asking Albert to explain. I explicitly
> asked him to explain the use of
>
> "you have to have all 8 million possible SCSI devices."
>
> I would maintain that I do not have to have any more SCSI devices
> than I require/want/need, and that I do have the ability to manage
> those devices myself. Therefore Albert's statement is false.
<End Quote>

Please look at the above, Shawn. That would seem to indicate that
you were the author of the very parts you reacted to.

Shawn Leas wrote:
>
> On Sun, 9 Aug 1998, Terry L Ridder wrote:
>
> > Hello Everyone;
> >

<snip>

>
> > I am saying that Albert's statement:
> >
> > <Begin Quote>
> > "If you have a static /dev on a normal filesystem, you have to have
> > all 8 million possible SCSI devices. With devfs, you don't."
> > <End Quote>
> >
> > is false. Nothing more nothing less.
> >
> > I gave factual details as to why that statement is false.
>
> Because a fully populated SCSI situation demands this. Read the devfs
> FAQ. Boy, what does it take to get you to RTFM???

The fully populated SCSI situation to which you are referring is, in my
opinion, a myth. If you take a moment and consider the numbers involved
for SCSI host adapters, disk drives, tape drives, etc., you will see
that this is the case.

Using Richard Gooch's own numbers:
<Begin Quote>
An example of how big /dev can grow is if we consider SCSI devices:
host 6 bits (say up to 64 hosts on a really big machine)
channel 4 bits (say up to 16 SCSI buses per host)
id 4 bits
lun 3 bits
partition 6 bits
TOTAL 23 bits
<End Quote>

Max number of hosts -- 64
Max number of channels -- 16
Max number of IDs -- 16
Max number of LUNs -- 8
Max number of partitions -- 64

So a totally maximum system would have:
64 hosts * 16 channels per host == 1024 SCSI channels

Since the SCSI host adapter itself requires an ID, there are only 15
SCSI IDs left for devices on each channel.

1024 SCSI channels * 15 IDs per channel == 15360 IDs
15360 IDs * 8 LUNs per ID == 122880 LUNs
122880 LUNs * 64 partitions per LUN == 7864320 partitions
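
For anyone who wants to check the arithmetic, here is a throwaway C
program (my own illustration, not code from devfs or the FAQ) that
reproduces the totals above:

#include <stdio.h>

int main(void)
{
        long hosts      = 1L << 6;            /* 64 hosts                  */
        long channels   = hosts * (1L << 4);  /* 1024 SCSI channels        */
        long ids        = channels * 15;      /* 15 usable IDs per channel */
        long luns       = ids * (1L << 3);    /* 8 LUNs per ID             */
        long partitions = luns * (1L << 6);   /* 64 partitions per LUN     */

        printf("channels %ld, IDs %ld, LUNs %ld, partitions %ld\n",
               channels, ids, luns, partitions);
        /* prints: channels 1024, IDs 15360, LUNs 122880, partitions 7864320 */
        return 0;
}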

Consider the machine that this would be.

The machine would have a total of 122880 disks.

Using the technical specifications for a Quantum Atlas III 18.2 GB
hard drive, typical power dissipation is 14.5W at idle.

122880 disks * 14.5W per disk == 1781760 Watts for the disks

1781760 Watts is approximately 6078494 Btu/hr

To remove that much heat would require air conditioning rated
at approximately 506 tons.

( Math is given below for those who are interested.

Conversion factors used:
1 Watt == .73726541 ft-lb per sec
1 Btu == 778 ft-lb
1 ton of air conditioning == 12,000 Btu per hr

1781760 Watts * .73726541 ft-lb per sec per Watt == 1313630 ft-lb per sec
1313630 ft-lb per sec * 3600 sec per hour == 4729068000 ft-lb per hour
4729068000 ft-lb per hour / 778 ft-lb per Btu == approximately 6078494 Btu
per hour
6078494 Btu per hour / 12,000 Btu per hour per ton == approximately 506.5
tons of air conditioning.

By comparison, the last computer room I designed and built for a company
had 10 tons of air conditioning installed. )
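
The same conversion as a small C sketch (again my own illustration,
using the conversion factors listed above):

#include <stdio.h>

int main(void)
{
        double disks    = 122880.0;
        double watts    = disks * 14.5;               /* 1781760 W             */
        double ftlb_sec = watts * 0.73726541;         /* ft-lb per second      */
        double btu_hr   = ftlb_sec * 3600.0 / 778.0;  /* ~6078494 Btu per hour */
        double ac_tons  = btu_hr / 12000.0;           /* ~506.5 tons of A/C    */

        printf("%.0f W == %.0f Btu/hr == %.1f tons of air conditioning\n",
               watts, btu_hr, ac_tons);
        return 0;
}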

122880 disks * 2 lbs per disk == 245760 lbs or 122.88 tons

Each disk is 4.0 inches wide x 5.75 inches long x 1.6 inches high ==
36.8 cubic inches

122880 disks * 36.8 cubic inches per disk == 4521984 cubic inches

4521984 cubic inches / 46656 cubic inches per cubic yard ==
approximately 96.9 cubic yards

If arranged in a cube, the cube would have dimensions of approximately
4.6 yards, or about 13.8 feet, per side.

That is not counting mount brackets, external chassis, cables, etc.
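
The weight and volume figures can be checked the same way (a sketch
only, using the per-drive weight and dimensions given above; link with
-lm for cbrt()):

#include <stdio.h>
#include <math.h>

int main(void)
{
        double disks   = 122880.0;
        double lbs     = disks * 2.0;                   /* 245760 lbs           */
        double cu_in   = disks * 4.0 * 5.75 * 1.6;      /* 4521984 cubic inches */
        double cu_yd   = cu_in / (36.0 * 36.0 * 36.0);  /* ~96.9 cubic yards    */
        double side_yd = cbrt(cu_yd);                   /* ~4.6 yards per side  */

        printf("%.0f lbs, %.0f cu in, %.1f cu yd, cube side %.1f ft\n",
               lbs, cu_in, cu_yd, side_yd * 3.0);
        return 0;
}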

Disk space would be approximately

122880 * 18.2 GB == 2236416 GB, or approximately 2184 TB (terabytes),
or roughly 2.1 PB (petabytes)

Finally, there is the cost of just the SCSI disks.

Using the price from http://www.megahaus.com for a Quantum 18.2 GB
drive, $1129.00 USD:

122880 * $1129.00 USD == $138,731,520.00 USD

The drives alone would cost over $138,000,000.00 USD.
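
Capacity and cost, checked the same way (illustration only; the $1129
price is the megahaus.com quote mentioned above):

#include <stdio.h>

int main(void)
{
        double disks   = 122880.0;
        double gb      = disks * 18.2;      /* 2236416 GB     */
        double tb      = gb / 1024.0;       /* ~2184 TB       */
        double dollars = disks * 1129.0;    /* 138731520 USD  */

        printf("%.0f GB (%.0f TB), $%.2f USD for the drives alone\n",
               gb, tb, dollars);
        return 0;
}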

The number of partitions would be 7864320.

I cannot begin to imagine what purpose that many partitions would
serve.

I could not imagine even attempting an fsck on that many partitions.

Therefore, based on the above facts, the maximum SCSI configuration
that you keep referring to is a myth.

Since a maximum SCSI configuration based on Richard Gooch's own numbers
from the dev_fs FAQ is in fact a mythical machine, I would ask that you
begin to deal in facts, as you demand of others and particularly of me.

Currently, based on the SCSI disk limit of 16 and the limit of 16 minor
numbers per disk, one of which is the whole-disk node /dev/sd[a-p] and
the rest partitions, there are at most 256 possible device nodes for
SCSI disks. I am purposely not addressing SCSI tape drives, SCSI CD-ROM
drives, SCSI scanners, etc.

If SCSI generic devices are also enabled, this would add only an
additional 16 device nodes.
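
For what it is worth, populating that static /dev by hand is hardly a
burden either. Here is a sketch (my own, to be run as root, assuming
the conventional SCSI disk major number 8 and 16 minor numbers per
disk, with minor 0 of each group being the whole disk) that creates
all 256 nodes; scsidev or a MAKEDEV script does essentially the same
job:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
        char path[32];
        int disk, part;

        for (disk = 0; disk < 16; disk++) {
                for (part = 0; part < 16; part++) {
                        if (part == 0)
                                sprintf(path, "/dev/sd%c", 'a' + disk);
                        else
                                sprintf(path, "/dev/sd%c%d", 'a' + disk, part);
                        /* block device, mode 0660, minor == disk * 16 + part */
                        if (mknod(path, S_IFBLK | 0660,
                                  makedev(8, disk * 16 + part)) != 0)
                                perror(path);
                }
        }
        return 0;
}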

You have again not answered the original question, but I will respond
to what you have stated.

I have read the dev_fs FAQ.
I would be more than happy to read the dev_fs "manual", so if you would
provide a URL to the dev_fs "manual", I would appreciate it.
BTW, I do not consider the dev_fs FAQ to be the same as the dev_fs
"manual".

>
> > > That's what it sounded like you said. A distribution wanting to support
> > > large setups would force a user to mknod/rm devices himself, and FORCING
> > > him to do that would be less flexible than simply ALOWING him to, and
> > > having it setup itself AUTOMATICALLY.
> >
> > I said nothing about either dev_fs, any distributions, or anyone forcing
> > anyone to do anything. My original question concerned Albert's
> > statement,
> > which I quote again below:
>
> DevFS is the ONLY coded working thing right now that can do this. You are
> strongly ANTI-DEVFS, and that is equivelent.

Well, no, it is not the only code.
mknod and rm have existed for quite some time now.
There is also scsidev, which has limitations but is functional.

My original question to Albert had nothing to do with dev_fs at all.

>
> > <Begin Quote>
> > Think about it for two seconds. The devfs generates devices as needed.
> > If you have a static /dev on a normal filesystem, you have to have
> > all 8 million possible SCSI devices. With devfs, you don't.
> > <End Quote>
> >
> > I have asked Albert to explain that statement. I have explicitly
> > asked him to explain the second part of the statement.
>
> Because you do not understand. Ok, so you simply expect every Sysadmin to
> mknod all of his SCSI devices even if he has 500 disks on a gagle of
> controllers?
>
> > The message to which you have reacted has neither explicit nor
> > implied complaints. I suggest that you re-read it.
>
> I heard your ignorance clearly the 1st time.
>
> > My points are backed by fact, which I presented in detail.
> > If you believe that anything in my original
> > message, to which you have reacted, is not based in fact please
> > feel free to point out exactly what that is.
>
> Fact, like what???
>
> > Your personal attacks do not make anything you say true.
>
> You who are beyond reason only understand it when someone POUNDS it into
> your brain. I just wish I were there to do that...

Shawn, I strongly suggest that you stop the personal attacks,
particularly when they suggest a desire to inflict physical harm on
another person.

>
> -Shawn

-- 
Terry L. Ridder
Blue Danube Software (Blaue Donau Software)
"We do not write software, we compose it."

When the toast is burnt and all the milk has turned and Captain Crunch is waving farewell when the Big One finds you may this song remind you that they don't serve breakfast in hell ==Breakfast==Newsboys
