imagining CPLAN, the Comprehensive Linux Patch Archive Network, or something

david nicol whatever at davidnicol.com
Wed May 28 20:28:27 CDT 2003


I doubt this is a FAQ, as frequent askers of questions rarely
think this big.  My question is: at what point, if ever, is
the Linux source going to take a core-plus-modules approach to
downloading and maintenance?  It seems that the interfaces
are stable enough to offer the additional features as separate
files instead of as part of one big tarball; yes, disk space is
practically free these days, but it still seems foolish to download
the assembly-language cores of architectures I do not intend to
build for.

The Perl language has the widely emulated (well, TeX has CTAN)
Comprehensive Perl Archive Network, with an extension interface
defined clearly enough that perl's CPAN module will download and
build the modules required to satisfy the dependencies of a requested
module.
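
For the curious, that automation is already a one-liner on the Perl
side; the module name here is only a placeholder:

	perl -MCPAN -e 'install Some::Module'

which fetches Some::Module from a CPAN mirror and (depending on how
CPAN.pm is configured) chases down and builds its prerequisites first.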

Is reorganizing the kernel source in such a way on the drawing board?
Currently there are heaps of unapproved modules where the reasoning
behind leaving them out is source code bloat -- reiser quotas, ext3,
PC speaker sound, etc.  If the champions of some module or another were
given the ability to declare stubs in the configuration part of the
kernel build, so that downloading and applying non-core patches could
be automated, would that make things easier?
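
To sketch what I mean -- and this is pure invention, the archive host
and patch name below do not exist -- the build could resolve a declared
stub with something as dumb as:

	# hypothetical: fetch and apply a non-core feature the config asked for
	cd /usr/src/linux-2.4.20
	wget http://cplan.example.org/2.4.20/pcsp-sound.patch.gz
	gzip -dc pcsp-sound.patch.gz | patch -p1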

And who would make sure it all worked?

I imagine a source download origin fork, starting with the instigators
(do I have enough time to instigate this?  maybe after the KCLUG cluster
is up and chugging, me and my army of flying monkeys...) dividing all
the files in the kernel into lots and lots of little pieces, each of
which gets indexed somehow, creating a layer above selection by
preprocessor directives.  The result would be a streamlined
configuration process (perhaps one that starts by examining the current
system and setting defaults based on what is currently plugged into the
slots, but that is a whole nother cylinder of invertebrata, isn't it)
in which downloading occurs after (the first step of) configuration, or
even during building.  There is no SCSI card in my machine and I do not
intend to install one.  Why do I get a list of 429 files in response to
	find /usr/src/linux-2.4.20 | grep scsi

and the same with token ring networking, or any other CRUCIAL technology
that I am not going to use?
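
A rough measure of what that unused code costs on disk (paths as found
in a stock 2.4.20 tree, adjust to taste):

	du -sh /usr/src/linux-2.4.20/drivers/scsi \
	       /usr/src/linux-2.4.20/drivers/net/tokenring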

Oh good, this IS a FAQ:
http://www.tux.org/lkml/#s7-7

The answer includes:

> If you are really desperate for a reduced kernel, set up some
> automated procedure yourself, which takes the patches which are made
> available, applies them to a base tree and then tars up the tree into
> multiple components. Once you've done all this, make it available to
> the world as a public service. There will be others who will
> appreciate your efforts. 
>         Under no circumstances should you complain to the kernel list.
>         I promise you that Linus and the core developers will
>         completely ignore such messages, so whinging about it is a
>         complete waste of bandwidth. The only message on this subject
>         that should be posted is an announcement of a new service
>         providing split kernel sources.

okeydokey, so the linux-kernel list is now removed from the intended
recipients of this message.
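
If anyone wants to take the FAQ's suggestion literally, the mechanical
part is roughly the following; the component boundaries and tarball
names are mine, picked only to show the shape of it:

	# hypothetical splitting pass: one core tarball plus per-subsystem pieces
	cd /usr/src
	tar czf linux-2.4.20-core.tar.gz \
	    --exclude=linux-2.4.20/drivers/scsi \
	    --exclude=linux-2.4.20/drivers/net/tokenring \
	    linux-2.4.20
	tar czf linux-2.4.20-scsi.tar.gz linux-2.4.20/drivers/scsi
	tar czf linux-2.4.20-tokenring.tar.gz linux-2.4.20/drivers/net/tokenring

The hard part, of course, is not the tarring; it is keeping the pieces
consistent every time a new patch comes out, and teaching the build
process to fetch them as needed.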

The question remains: does the KCLUG wish to develop and host
a split kernel source resource, including developing an extended
build process that will fetch files as needed?

-- 
David Nicol, independent consultant and contractor
                              Achaemenia for the Zoroastrians!



