Re: SVGA kernel chipset drivers.

Jon M. Taylor (taylorj@gaia.ecs.csus.edu)
Thu, 13 Jun 1996 00:37:33 -0700 (PDT)


On Wed, 12 Jun 1996, Tom May wrote:

> Matty <matt@blitzen.canberra.edu.au> writes:
>
> >Not really.. I think the point is that the kernel drivers do all the
> >low-level stuff that requires kernel/superuser privs, and any user-level
> >program that wishes to perform these functions must call the kernel
> >routines to do it for them. It's not meant to be a REPLACEMENT for X,
> >after all, what CAN replace X? :) Think about it - there will be no
> >need for a million & one setuid-0 X server binaries that are 2-3Mb in
> >size! One generic server would be enough, which calls the kernel to do
> >the card-specific routines, and hence doesn't need to be setuid-0.
>
> I can only deduce that you are talking about putting the X protocol,
> or some superset, into the kernel, with card-specific drivers to
> translate the protocol into card-specific graphics operations like the
> X server currently does.

Not a good idea.

> If you put anything less than the full X protocol into the kernel,
> your X server will become less efficient.
>
> You will end up doing a lot of work in software in the kernel. For
> example, suppose you want to stipple a rectangular region with a 51x46
> stipple (yes, I do tile my root window with one of these). Most PC
> cards can't handle a stipple of that size directly, so you will need a
> software implementation or breakdown into simpler pieces, for example,
> if the raster op is GX_copy you can write one copy of the stipple to
> the upper left corner then blt it across and down with hardware to
> fill the region. Another example would be that some cards can draw
> lines and others can't, or you may not have access to set the DDA
> terms correctly which is required for X. So you'll have to draw lines
> in software in the kernel.

You are assuming that the X server blindly says to the GDI, 'draw
a line', and it either gets accelerated or it doesn't. What would really
happen is that the library sitting between the kernel graphics code and
the application (X, in this case) would make that determination, and *it*
would either draw the line in software or tell the kernel driver to
draw an accelerated line, as appropriate. All the kernel driver would
know how to do is implement its accelerations and provide a framebuffer -
the userspace code would handle everything else.

> But, for the cards that *can* handle such operations you certainly
> don't want to slow things down by not taking advantage of it. You
> don't want to stick yourself with a lowest common denominator approach
> or nobody will want to use it. And in fact, the lowest common
> denominator is a frame buffer with everything done in software.

Which is why you need the intelligence to determine whether
hardware acceleration or software drawing will be used. That doesn't
mean it needs to live in the kernel.

> Also, you will need to duplicate X's management of graphics contexts,
> which allow the server to examine drawing parameters once and set
> pointers to functions specifically optimized for those drawing
> parameters. Otherwise you will be taking the approach of every other
> GUI I've worked with where the parameters must be examined at each
> call to determine the class of rop, patterning, etc.

X could actually talk to the kernel directly, and this may end up
happening with any GGI-based X server. There's no reason why every
graphics-using userland program must use the same layer between it and
the GGI.

>
> And on a different topic which came up in this thread:
>
> Someone was mentioning a breakdown by graphics chip, clock, dac, and
> monitor. That is not all that defines a board, and the (possible)
> physical separation between these parts does not carry over cleanly
> into a logical separation.

Granted, but A) it works pretty well for most of today's common
cards, and B) it all gets compiled into one module anyway, so the
distinction between the subsections can be made just about as malleable as
necessary in a pinch. What we are working with now is a useful
separation, but it isn't a fundamental part of the system.

> So it is not as easy as it seems to come up with generic software
> modules to handle graphics hardware. And next month something new
> will come along that breaks your model.

And then we will refine our model, if necessary. It probably
won't be, though - when you get right down to it, a graphics board gives
you a rectangular array of pixels and a bunch of ops on those pixels
(sometimes a z-buffer, too). There are a lot of variations on this theme,
but they can almost always be abstracted to framebuffer+drawing functions.

Jon Taylor = <taylorj@gaia.ecs.csus.edu> | <http://gaia.ecs.csus.edu/~taylorj>
------------------------------------------------------------------------------
"Everything in excess! To enjoy the flavor of life, take big bites.
Moderation is for monks." - Lazarus Long