Re: SVGA kernel chipset drivers.

Tom May (ftom@netcom.com)
Wed, 12 Jun 1996 12:04:02 -0700


Matty <matt@blitzen.canberra.edu.au> writes:

>Not really.. I think the point is that the kernel drivers do all the
>low-level stuff that requires kernel/superuser privs, and any user-level
>program that wishes to perform these functions must call the kernel
>routines to do it for them. It's not meant to be a REPLACEMENT for X,
>after all, what CAN replace X? :) Think about it - there will be no
>need for a million & one setuid-0 X server binaries that are 2-3Mb in
>size! One generic server would be enough, which calls the kernel to do
>the card-specific routines, and hence doesn't need to be setuid-0.

I can only deduce that you are talking about putting the X protocol,
or some superset of it, into the kernel, with card-specific drivers to
translate the protocol into hardware graphics operations, as the X
server currently does.

If you put anything less than the full X protocol into the kernel,
your X server will become less efficient.

You will end up doing a lot of work in software in the kernel. For
example, suppose you want to stipple a rectangular region with a 51x46
stipple (yes, I do tile my root window with one of these). Most PC
cards can't handle a stipple of that size directly, so you will need a
software implementation or a breakdown into simpler pieces. For
example, if the raster op is GXcopy you can write one copy of the
stipple to the upper left corner and then blt it across and down with
hardware to fill the region. Another example: some cards can draw
lines and others can't, or you may not be able to set the DDA terms
the way X requires, so you'll have to draw lines in software in the
kernel.
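
To make that blt trick concrete, here is a minimal toy sketch in C.
The fb[][] array and copy_area() below are stand-ins I've made up for
the real frame buffer and the card's screen-to-screen blt, and the
pattern itself is arbitrary; the point is just that every copy
distance is a whole number of stipple periods, which is why this only
works when the rop is GXcopy.

#include <stdio.h>
#include <string.h>

#define FB_W 200
#define FB_H 100
static unsigned char fb[FB_H][FB_W];        /* toy 8-bit "frame buffer" */

/* Stand-in for the card's screen-to-screen blt (GXcopy only). */
static void copy_area(int sx, int sy, int dx, int dy, int w, int h)
{
    for (int y = 0; y < h; y++)
        memmove(&fb[dy + y][dx], &fb[sy + y][sx], (size_t)w);
}

/* Fill the rectangle (rx,ry,rw,rh) with an sw x sh stipple. */
static void fill_with_stipple(int rx, int ry, int rw, int rh, int sw, int sh)
{
    int bw = sw < rw ? sw : rw;             /* size of the seed copy */
    int bh = sh < rh ? sh : rh;

    /* 1. Software: render one copy of the stipple in the corner. */
    for (int y = 0; y < bh; y++)
        for (int x = 0; x < bw; x++)
            fb[ry + y][rx + x] = ((x ^ y) & 4) ? 0xff : 0x00;

    /* 2. "Hardware": double the filled width until the band is full.
     *    Each copy distance is a multiple of sw, so the tiling stays
     *    aligned; that is why this needs GXcopy. */
    for (int w = bw; w < rw; w *= 2) {
        int n = w < rw - w ? w : rw - w;
        copy_area(rx, ry, rx + w, ry, n, bh);
    }

    /* 3. "Hardware": double the filled height the same way. */
    for (int h = bh; h < rh; h *= 2) {
        int n = h < rh - h ? h : rh - h;
        copy_area(rx, ry, rx, ry + h, rw, n);
    }
}

int main(void)
{
    fill_with_stipple(10, 5, 120, 60, 51, 46);
    printf("pixel at (10,5) = %d, one period over = %d\n",
           fb[5][10], fb[5][61]);           /* same phase of the pattern */
    return 0;
}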

But for the cards that *can* handle such operations, you certainly
don't want to slow things down by failing to take advantage of them.
You don't want to stick yourself with a lowest-common-denominator
approach or nobody will want to use it. And in fact, the lowest common
denominator is a frame buffer with everything done in software.

Also, you will need to duplicate X's management of graphics contexts,
which lets the server examine the drawing parameters once, when a GC
is validated, and set pointers to functions specifically optimized for
those parameters. Otherwise you will be taking the approach of every
other GUI I've worked with, where the parameters must be examined at
each call to determine the class of rop, patterning, etc.
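
Here is a toy illustration of that idea, not actual server code: the
GC is stripped down to a couple of fields and two fill routines, and
the rop numbers are the X ones (GXcopy = 3, GXxor = 6). A real GC and
the real op tables are of course much richer.

#include <stdio.h>

/* Hypothetical, stripped-down GC; a real X server GC has many more
 * fields (tile, stipple, clip list, font, ...). */
typedef struct gc GC;
typedef void (*FillRectFn)(GC *gc, int x, int y, int w, int h);

struct gc {
    int rop;                /* raster op, e.g. GXcopy or GXxor */
    int fill_style;         /* 0 = solid, 1 = stippled (made up here) */
    int validated;
    FillRectFn fill_rect;   /* chosen once when the GC is validated */
};

/* Routine optimized for the common case: solid fill with GXcopy. */
static void fill_solid_copy(GC *gc, int x, int y, int w, int h)
{
    (void)gc;
    printf("fast solid GXcopy fill: %dx%d at (%d,%d)\n", w, h, x, y);
}

/* Fallback that re-examines the GC on every call. */
static void fill_generic(GC *gc, int x, int y, int w, int h)
{
    printf("slow generic fill (rop=%d style=%d): %dx%d at (%d,%d)\n",
           gc->rop, gc->fill_style, w, h, x, y);
}

/* Examine the drawing parameters once and cache the best routine. */
static void validate_gc(GC *gc)
{
    if (gc->rop == 3 /* GXcopy */ && gc->fill_style == 0)
        gc->fill_rect = fill_solid_copy;
    else
        gc->fill_rect = fill_generic;
    gc->validated = 1;
}

/* Per-request path: just indirect through the cached pointer. */
static void poly_fill_rect(GC *gc, int x, int y, int w, int h)
{
    if (!gc->validated)
        validate_gc(gc);
    gc->fill_rect(gc, x, y, w, h);
}

int main(void)
{
    GC gc = { .rop = 3, .fill_style = 0, .validated = 0, .fill_rect = 0 };
    poly_fill_rect(&gc, 0, 0, 51, 46);   /* takes the optimized path */
    gc.rop = 6 /* GXxor */; gc.validated = 0;
    poly_fill_rect(&gc, 0, 0, 51, 46);   /* falls back to the generic path */
    return 0;
}

The per-request path never looks at the rop or fill style again until
the GC is changed and revalidated.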

And on a different topic which came up in this thread:

Someone was mentioning a breakdown by graphics chip, clock, DAC, and
monitor. That is not all that defines a board, and the (possible)
physical separation between these parts does not carry over cleanly
into a logical separation. There are also wires and other logic
between these parts that are at the discretion of the board designer.
For example, at what port or memory location do the clock chip
programming bits appear? Does the DAC data need to be shifted to/from
a particular byte lane? I am working with a board at the moment that
uses a DAC with an on-board PLL but an external clock chip for timing;
nevertheless, the prescaler on the DAC's PLL must be programmed
correctly. And this board has another part in the frame buffer
interface that is not covered by the proposed breakdown, but which
must be initialized correctly to access the frame buffer.

I have also worked on boards with Rambus memory, where the memory
chips have registers that need to be initialized to
memory-manufacturer-specific values! In general, there are a variety
of memory chips from which the frame buffer attached to a particular
graphics chip can be constructed, and there are registers in the chip
(or elsewhere) that need to be set appropriately to get all the
RAS/CAS/refresh stuff sorted out, and the appropriate register
settings can't always be sussed out from software.
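
As a rough sketch of what that means for a "generic" driver layout
(every port number and field below is invented for illustration, not
taken from any real board), even with clean chip/clock/DAC modules you
still end up carrying per-board glue like this:

#include <stdint.h>

/* Per-board glue that a clean chip/clock/DAC/monitor split leaves out.
 * All values here are made up. */
struct board_glue {
    uint16_t clock_bits_port;   /* where the clock programming bits appear */
    uint8_t  clock_bits_shift;
    int      dac_byte_lane;     /* which byte lane carries DAC data */
    int    (*extra_init)(void); /* frame buffer interface part, RAM regs, ... */
};

/* A hypothetical extra part that must be set up before the frame
 * buffer is usable at all. */
static int init_fb_interface_chip(void) { return 0; }

/* Two imaginary boards built from the same chip, clock and DAC, but
 * wired differently by their designers. */
static const struct board_glue board_a = {
    .clock_bits_port = 0x03c2, .clock_bits_shift = 2,
    .dac_byte_lane = 0, .extra_init = 0,
};
static const struct board_glue board_b = {
    .clock_bits_port = 0x83a0, .clock_bits_shift = 5,
    .dac_byte_lane = 1, .extra_init = init_fb_interface_chip,
};

int main(void)
{
    /* A would-be generic driver still has to run the board-specific
     * glue before the chip/clock/DAC modules can do anything. */
    const struct board_glue *boards[] = { &board_a, &board_b };
    for (int i = 0; i < 2; i++)
        if (boards[i]->extra_init && boards[i]->extra_init() != 0)
            return 1;
    return 0;
}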

So it is not as easy as it seems to come up with generic software
modules to handle graphics hardware. And next month something new
will come along that breaks your model.

Tom.