> Take a look at this point from the opposite side. What if I am
> the only user of a piece of hardware who is currently reporting a
> bug? Well, in most cases it isn't interesting for a vendor to fix
> a rare problem until it has been reported by a number of users.
> Consider also that in many cases, after the development is
> "complete", the developers have moved on (be it to other products
> or other jobs) and the company no longer has the option of
> performing tweaks on a driver without incurring major costs.
Then at this juncture it makes a hell of a lot of sense for the
vendor to release the source code for the binary driver as Open
Source.
> How do you combat these problems with binary only drivers? You
> can't. Why should we encourage binary only drivers? We shouldn't.
One thing that everyone seems to be confusing here is that
binary portable modules == proprietary closed source drivers
This is not true, because you can just as easily have Open Source
binary portable modules. Just because a module is binary portable
does not instantly mean that the source code is not available. Hell,
do you consider the pre-built kernels and libraries that come with
Linux distributions closed source just because they are binary? Of
course not, because the sources to build them are fully open.
> > 2. Binary portability requires more solid and clearly defined
> > interfaces between the kernel internals and the modules themselves.
> > By requiring that there be a clear separation or 'firewall' between
> > drivers and the kernel internals, you can more easily avoid the
> > classic problems of coupling where changes in some other part of the
> > code affect other code that should not be affected. Specifically it
> > makes it impossible for a driver to implement a 'quick hack' solution
> > by accessing the internals of some other driver or the kernel
> > directly. However the *only* way to enforce this is to design device
> > type specific binary APIs, and *require* that the only way a device
> > driver can talk to the kernel is via these APIs.
>
> Sure, better defining APIs is a Good Thing, but how can we possibly
> commit to etching an API in stone when we don't know what
> additional needs will exist two years from now?
That is what good design is all about. You design your API mechanism
so that there are known extension mechanisms that you can use to
extend the existing models as necessary down the track. If the
existing models don't work and you *really* need to change the APIs,
then you end up breaking compatibility with the drivers whether they
are source drivers or binary drivers.
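To make that concrete, here is a minimal sketch (every name below is
made up for illustration, this is not any real kernel's API) of one
such extension mechanism: a versioned ops structure with a size
field, so a newer kernel can add entry points without breaking
drivers built against the older layout.

/* Hypothetical versioned driver API -- not a real kernel interface.
 * The size field records the layout the driver was built against,
 * so the kernel never calls entry points the driver doesn't know
 * about. */
#include <stddef.h>

struct blkdrv_ops {
    size_t size;            /* driver fills in sizeof(its view) */
    int  (*open)(int unit);
    int  (*read)(int unit, void *buf, long blk, int nblks);
    int  (*write)(int unit, const void *buf, long blk, int nblks);
    /* entry point added later, in "version 2" of the API */
    int  (*flush)(int unit);
};

/* Kernel-side helper: call flush() only if the driver's ops
 * structure is large enough to actually contain that member. */
static int blkdrv_flush(struct blkdrv_ops *ops, int unit)
{
    if (ops->size > offsetof(struct blkdrv_ops, flush) && ops->flush)
        return ops->flush(unit);
    return 0;   /* old driver: nothing to flush, not an error */
}

The /dev example below works the same way: old consumers keep getting
the old behaviour, new consumers opt in to the new one.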
Anyone designing an OS kernel is clearly interested in extensibility.
Why do you think the Unix kernel is designed so heavily around the
concept of files and the file system? Because that is an incredibly
powerful metaphor for designing future extensibility into the low
level interfaces. You can have a /dev/mydev device that exports a
particular API to user land apps. If the old API is really no good
anymore, you create a /dev/mydev2 device which exports the new
interface (and hopefully maintain a compatible interface on
/dev/mydev). The exact same concepts can be applied to internal
kernel interfaces that are exported to binary portable modules.
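Again purely as a hedged sketch (the names here are invented, this is
not an existing Linux facility), the kernel-internal analogue of
/dev/mydev versus /dev/mydev2 could be a name-plus-version interface
lookup:

#include <stdio.h>
#include <string.h>

/* Two versions of a hypothetical "mydev" interface, exported
 * side by side, just as /dev/mydev and /dev/mydev2 coexist. */
struct mydev_api_v1 { int (*read)(void *buf, int len); };
struct mydev_api_v2 { int (*read)(void *buf, int len); int (*poll)(void); };

static int stub_read(void *buf, int len) { (void)buf; return len; }
static int stub_poll(void)               { return 0; }

static const struct mydev_api_v1 mydev_v1 = { stub_read };
static const struct mydev_api_v2 mydev_v2 = { stub_read, stub_poll };

/* Kernel-side lookup a binary portable module would call. */
static const void *kernel_get_interface(const char *name, int version)
{
    if (strcmp(name, "mydev") == 0) {
        if (version == 1) return &mydev_v1;
        if (version == 2) return &mydev_v2;
    }
    return NULL;    /* unknown name or retired version */
}

int main(void)
{
    /* A module prefers the new interface but falls back to the old. */
    const void *api = kernel_get_interface("mydev", 2);
    if (!api)
        api = kernel_get_interface("mydev", 1);
    printf("mydev interface %s\n", api ? "found" : "missing");
    return 0;
}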
> > 3. Binary portability means much less regression testing is required
> > for new kernel versions. If the driver itself does not get re-
> > compiled, it does not need to be thoroughly re-tested. In the case of
> > the Linux monolithic kernel approach, every driver is compiled into
> > each new kernel version. How do you *really* know that a driver is
> > functioning properly when a final release of 2.2.100 is made, unless
> > *every* single device that is supported is properly tested with that
> > particular version?
>
> This is a false statement. Changes between versions result in
> numerous side effects that can trigger latent bugs in drivers.
> Timing changes, memory layout changes so that a previously
> harmless stray pointer can now result in a crash.
Of course. And when you figure out what type of driver bug previously
got through the OS regression and conformance test suites (which you
do have, right?), you update the conformance tests to check for these
conditions to ensure these problems can be effectively caught and
fixed.
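As a toy illustration of what I mean (invented names, nothing from a
real test suite): suppose a driver with a NULL entry point once
slipped through and crashed the kernel on unload. The suite then
grows a check that rejects that whole class of bug up front:

#include <stdio.h>
#include <stddef.h>

/* Hypothetical ops table that the conformance suite validates
 * before the driver is ever loaded into a kernel. */
struct drv_ops {
    size_t size;
    int  (*open)(int unit);
    int  (*close)(int unit);
};

static int conform_check(const struct drv_ops *ops)
{
    if (ops->size < sizeof(struct drv_ops)) {
        fprintf(stderr, "FAIL: ops table too small for this API\n");
        return -1;
    }
    /* Regression check added after the hypothetical NULL-close()
     * crash described above: required entry points must be set. */
    if (!ops->open || !ops->close) {
        fprintf(stderr, "FAIL: required entry point is NULL\n");
        return -1;
    }
    return 0;   /* passes this (tiny) conformance suite */
}

static int my_open(int unit)  { (void)unit; return 0; }
static int my_close(int unit) { (void)unit; return 0; }

int main(void)
{
    struct drv_ops ops = { sizeof(ops), my_open, my_close };
    return conform_check(&ops) ? 1 : 0;
}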
> Also, binary compatibility is rarely achieved -- take a look at M$'s
> own products: can you use drivers from Windows 95 with Windows NT?
That is a ridiculous argument, similar to saying 'you can't use OS/2
drivers on NT, can you?'. Of course you can't. Windows 95 and Windows
NT are architecturally different internally, so drivers are not
compatible. Note that Windows 98 implements the Windows NT driver
model (via WDM) so that all NT drivers except display drivers *can*
be used for Windows 98. Windows 3.1 display drivers and device
drivers on the other hand *can* be used with Windows 95, because they
are based on the same OS technology. In fact for quite some time
after Windows 95 was shipped, a lot of the existing device drivers
*were* Windows 3.1 drivers.
Now what about *within* the Windows NT product line, which is a more
valid comparison to what we are talking about here. Windows NT 3.51
display drivers can be used on Windows NT 4.0. Windows NT 4.0 drivers
can be used on Windows 2000. All of the above is a good thing, tiding
users over until the device drivers get updated to support the new OS.
What about OS/2? The OS/2 display driver model actually allows
drivers to be developed that work incredibly reliably between OS/2
2.0, 2.1, 2.11, Warp 3, Warp 4 and Warp Server for e-Business. Sure,
building those drivers is a pain in the arse, but they are fully
compatible between OS versions.
> If someone wishes to keep details of their product proprietary,
> then they should provide a well defined interface to said product.
> It *is* possible to ship a driver that consists of a source portion
> and a binary portion, but in real life, it is rarely useful.
Really? That is precisely the technology that IBM licenses from us.
They certainly consider it useful.
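I won't reproduce our actual code here, but a minimal sketch of how
such a source/binary split can work (every name below is made up for
illustration): the user compiles a small OS-specific shim from source
against their kernel, and the shim hands the OS-neutral binary core a
table of OS services, so the core never touches kernel internals
directly.

/* Hypothetical source-shim + binary-core driver split.
 * Everything below is illustrative; none of these names are real. */
#include <stdlib.h>
#include <stdio.h>

/* Services the binary core needs, supplied by the source shim.
 * On Linux these would wrap kmalloc/kfree/printk instead. */
struct os_services {
    void *(*alloc)(unsigned long bytes);
    void  (*free)(void *p);
    void  (*log)(const char *msg);
};

/* Entry points the binary core exports back to the shim. */
struct core_driver {
    int  (*init)(const struct os_services *os);
    void (*shutdown)(void);
};

/* --- stand-in for the binary portable core --- */
static int core_init(const struct os_services *os)
{
    os->log("core: initialised");
    return 0;
}
static void core_shutdown(void) {}
static const struct core_driver core = { core_init, core_shutdown };
static const struct core_driver *core_entry(void) { return &core; }

/* --- OS-specific shim, shipped as source and built by the user --- */
static void *shim_alloc(unsigned long n) { return malloc(n); }
static void  shim_free(void *p)          { free(p); }
static void  shim_log(const char *m)     { puts(m); }

int main(void)
{
    const struct os_services os = { shim_alloc, shim_free, shim_log };
    const struct core_driver *drv = core_entry();
    int rc = drv->init(&os);   /* shim is the only per-kernel code */
    drv->shutdown();
    return rc;
}

The point is that only the shim ever needs recompiling for a new
kernel; the binary core stays put, and the source for both halves can
still be published if the vendor chooses.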
> Pushing the hardware vendor's inability to define an interface into
> our overhead just isn't something I feel like accepting because
> there *are* hardware vendors who create good products and document
> the interfaces to them.
Eh? Since when would the hardware vendor be responsible for
developing the interface to their device drivers? It is the
responsibility of the OS developers to define the low level device
driver interfaces, and then the hardware vendors implement device
drivers for that interface.
> So long as I have choice, I'll use a driver I can hack over one
> that's closed.
Exactly. And *that* is the power that Open Source has. Vendors will
open their product specifications when they realise that it is
commercially interesting for them to do so, not because Linus and
Alan force them to. Why would they do this? Because people such as
yourself will specifically choose and recommend hardware that is well
supported by vendors in an open fashion. Binary portable modules can
just as easily be Open Source, for the same reason that lots of Open
Source XFree86 drivers now exist: because it makes *commercial sense*
for the hardware vendor to do this.
Remember, just because a device driver is a binary portable module
does not mean that it is proprietary and has no source code available!
Regards,
+---------------------------------------------------------------+
| SciTech Software - Building Truly Plug'n'Play Software! |
+---------------------------------------------------------------+
| Kendall Bennett | Email: KendallB@scitechsoft.com |
| Director of Engineering | Phone: (530) 894 8400 |
| SciTech Software, Inc. | Fax : (530) 894 9069 |
| 505 Wall Street | ftp : ftp.scitechsoft.com |
| Chico, CA 95928, USA | www : http://www.scitechsoft.com |
+---------------------------------------------------------------+